WO2023231344A1 - Method for operating a consortium chain network - Google Patents
Method for operating a consortium chain network
- Publication number
- WO2023231344A1 (PCT/CN2022/135443)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/104—Peer-to-peer [P2P] networks
- H04L67/1044—Group management mechanisms
- H04L67/1051—Group master selection mechanisms
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/12—Discovery or management of network topologies
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/104—Peer-to-peer [P2P] networks
- H04L67/1059—Inter-group management mechanisms, e.g. splitting, merging or interconnection of groups
Definitions
- Embodiments of the present disclosure belong to the field of computer technology, and particularly relate to methods of operating a consortium chain network, methods of operating a blockchain network, and nodes used in the blockchain network.
- A consortium chain is open only to the members of a specific group and to limited third parties; multiple service organizations can form a consortium, and only authorized nodes are allowed to join the consortium chain network.
- The server (or server cluster) of each service organization can act as a node in the consortium chain network, and nodes on the chain can view information according to their permissions.
- As the business scale supported by the consortium chain grows, the number of consensus (and non-consensus) nodes that need to be connected in the consortium chain also increases.
- the purpose of this disclosure is to provide a method of operating a consortium chain network, a method of operating a blockchain network, and nodes for the blockchain network.
- A method for operating a consortium chain network includes: a master node that is about to initiate a consensus proposal establishes a multicast distribution tree with each chain node in the consortium chain network as a tree node, and broadcasts the tree in the network. The multicast distribution tree includes a root node, one or more layers of trunk nodes, and one layer of leaf nodes, where the root node is the master node; the master node multicasts data packets to the first-layer trunk nodes, and each trunk node multicasts the received data packets to its child nodes.
- A method for operating a consortium chain network includes: each chain node in the consortium chain network elects a master node, which can initiate consensus proposals;
- Each chain node in the network establishes, periodically updates, and broadcasts its local routing table, which indicates the uplink bandwidth of each of its neighbor nodes and the network delay between the chain node and each neighbor node.
- The master node, according to the local routing table of each chain node, establishes and broadcasts in the consortium chain network a multicast distribution tree with each chain node as a tree node. The multicast distribution tree includes a root node, one or more layers of trunk nodes, and one layer of leaf nodes, where the root node is the master node; and each chain node, according to the multicast distribution tree, multicasts the data packet containing the transaction set of the consensus period to its child nodes.
- a method of running a blockchain network is provided.
- Each node in the blockchain network is in a fully connected mode.
- the method includes: The first chain node establishes a multicast distribution tree with each chain node in the blockchain network as a tree node according to the communication status of each chain node.
- The multicast distribution tree includes a root node, one or more layers of trunk nodes, and one layer of leaf nodes, where the root node is the first chain node, and the root node and each trunk node have multiple child nodes. In response to the size of a first data packet to be sent to every other chain node being greater than a threshold, the first chain node and each trunk node multicast the first data packet to their child nodes according to the multicast distribution tree; and in response to the size of a second data packet to be sent to every other chain node being less than the threshold, the first chain node broadcasts the second data packet in the blockchain network.
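As a hedged sketch of the size-based dispatch described in this aspect (the threshold value and function name below are illustrative, not fixed by the disclosure):

```python
THRESHOLD = 64 * 1024  # assumed threshold, e.g. 64 KB; the disclosure does not fix a value

def dispatch(packet: bytes, tree_children: list, all_peers: list) -> tuple:
    """Large packets (e.g. Pre-Prepare messages) travel down the multicast
    distribution tree; small ones (e.g. Prepare/Commit) are broadcast directly."""
    if len(packet) > THRESHOLD:
        return ("multicast", tree_children)  # forward only to this node's children
    return ("broadcast", all_peers)          # full-mesh broadcast to every peer
```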
- A node for a blockchain network includes circuitry configured to: establish, periodically update, and broadcast its local routing table, which indicates the uplink bandwidth of each of the node's neighbor nodes and the network delay between the node and each neighbor node; in response to the node being elected as the master node, establish and broadcast in the consortium chain network, according to the local routing tables of the chain nodes, a multicast distribution tree with each chain node as a tree node, where the multicast distribution tree includes a root node, one or more layers of trunk nodes, and one layer of leaf nodes, and multicast the data packet containing the transaction set of the consensus period to the first-layer trunk nodes; and in response to the node not being elected as the master node, receive the multicast distribution tree broadcast by the master node and, according to the multicast distribution tree, multicast the received data packet containing the transaction set of the consensus period to the node's child nodes.
- Figure 1 is a schematic diagram of an example consortium chain network;
- Figure 2 is a schematic diagram of the topology of the example consortium chain network of Figure 1;
- Figure 3 is a schematic diagram of a master node sending a Pre-Prepare message in a consortium chain network;
- Figure 4 is a schematic diagram of a master node sending a Pre-Prepare message in a method of operating a consortium chain network according to an embodiment of the present disclosure;
- Figure 5 is a schematic diagram of the multicast distribution tree by which the master node sends the Pre-Prepare message in Figure 4;
- Figure 6 is a schematic diagram of the message routing table used by the master node to send the Pre-Prepare message in Figure 4;
- Figure 7 is a schematic structural diagram of at least part of a node device for a blockchain network according to an embodiment of the present disclosure;
- Figure 8 is an exemplary block diagram of a general hardware system applicable to embodiments of the present disclosure.
- FIG. 1 is a schematic diagram of an example alliance chain network.
- This example alliance chain network has multiple participants (participants A, B, C), and each participant can have one or more alliance chain nodes (node 0 to node 5).
- Each alliance chain node can receive transactions from the customers served by the participant, and each node in the alliance chain network performs distributed execution and certificate storage.
- For convenience, the on-chain nodes in the consortium chain network are collectively referred to as "chain nodes", and the root node, trunk nodes, and leaf nodes in the multicast distribution tree are collectively referred to as "tree nodes".
- the network topology of the alliance chain can be a full-mesh network structure, as shown in Figure 2.
- each chain node communicates directly with other chain nodes.
- the consensus algorithm used by the alliance chain can be the Practical Byzantine Fault Tolerance (PBFT) algorithm.
- A chain node in the consortium chain, such as node 0, can be determined as the master node through election.
- The master node is responsible for initiating consensus proposals and broadcasting the transaction set within a consensus cycle (the Pre-Prepare message) to the other chain nodes participating in the consensus phase, as shown in Figure 3.
- The master node generates a state tree, a transaction tree, and a receipt tree based on the content and execution result of each transaction stored on the node, and records the root hashes of these three trees into the block header; then, after the master node packages this transaction set into a new block, it broadcasts the block (or block header) to the other chain nodes.
- Other chain nodes, such as node 1 to node 15, after receiving the Pre-Prepare message, verify the root hash in the block header by executing the transaction set it contains. After the consensus proposal passes verification, each node sends a Prepare message to the other nodes.
- the transaction described in this article refers to a piece of data that is created by the user through the client of the blockchain and needs to be finally published to the distributed database of the blockchain.
- a transaction is a data structure agreed in the blockchain protocol. To store a piece of data in the blockchain, it needs to be encapsulated into a transaction.
- Transactions in the blockchain can be divided into narrow transactions and broad transactions.
- a transaction in a narrow sense refers to a value transfer issued by a user to the blockchain; for example, in the traditional Bitcoin blockchain network, a transaction can be a transfer initiated by the user in the blockchain.
- A transaction in a broad sense refers to a piece of business data with business intent published by a user to the blockchain. For example, an operator can build a consortium chain based on actual business needs and rely on it to deploy online businesses unrelated to value transfer, such as house rental, vehicle dispatching, insurance claims, credit services, and medical services.
- In this case, a transaction can be a business message or business request with business intent published by a user in the consortium chain.
- the master node needs to broadcast the Pre-Prepare message including all transactions within a consensus cycle to all other consensus nodes in the alliance chain network, as shown in Figure 3.
- the Pre-Prepare message contains all transaction data in a block, generally ranging from a few hundred KB to 10MB.
- the master node uses the PBFT algorithm to send Pre-Prepare messages to all other consensus nodes.
- As the number of nodes grows, the bandwidth required by the master node grows in proportion to the number of nodes it must serve.
- the upstream bandwidth of the master node often becomes the bottleneck of the throughput of the entire alliance chain.
- For example, at 2000 TPS with 1 KB transactions in a 100-node network, the bandwidth each node needs to provide is: 2000 TPS × 1 KB × 8 bit × (100−1) nodes ≈ 1600 Mbps. If the number of chain nodes in the consortium chain network increases, the bandwidth each node must provide also increases.
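The 1600 Mbps figure can be reproduced directly (a quick check, taking 1 Mb = 1000 Kb):

```python
tps = 2000         # transactions per second
tx_kb = 1          # average transaction size, KB
nodes = 100
peers = nodes - 1  # in full-connection mode the master sends to every other node

mbps = tps * tx_kb * 8 * peers / 1000  # Kb/s -> Mb/s
print(round(mbps))  # 1584, i.e. roughly 1600 Mbps
```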
- The method for operating a consortium chain network provided here constructs, for the process of the master node distributing the Pre-Prepare message, a multicast distribution tree over the chain nodes of the consortium chain network: the master node serves as the data source, that is, the root node of the multicast distribution tree; some nodes serve as one or more intermediate layers for message distribution (that is, one or more layers of trunk nodes) and distribute the message onward to the edge nodes (that is, the leaf nodes). In this way, the Pre-Prepare message is distributed from the master node to every chain node.
- With this arrangement, the master node does not need to send the Pre-Prepare message to all other chain nodes, but only to some of them (the first-layer trunk nodes), which significantly reduces the bandwidth requirement on the master node. Even when the number of chain nodes is huge, the master node's bandwidth can still meet the requirement, which helps the consortium chain network scale out.
- Furthermore, the method for operating a consortium chain network provided by one or more embodiments of the present disclosure can establish a delay-optimized, adjustable single-source multicast distribution tree based on the communication delay between chain nodes, the delay requirement of the consensus stage, and the uplink bandwidth of each chain node.
- the method of operating a consortium chain network according to these embodiments can dynamically optimize the total delay of message distribution caused by the multicast distribution tree, which is helpful to improve the performance of the consortium chain network.
- The master node (i.e., the accounting node) used to initiate consensus proposals is elected and determined before a round of consensus starts. Elections can be held periodically or triggered by events; for example, a failed consensus triggers re-election of the master node.
- After receiving a transaction submitted by a client, a non-master node forwards the transaction to the determined master node.
- the master node packages the transactions in this consensus cycle and sends them to other chain nodes (i.e. verification nodes).
- Each chain node in the alliance chain network establishes its own local routing table.
- the local routing table of the link node may indicate the uplink bandwidth of each neighbor node of the link node and the network delay between the link node and each neighbor node.
- Neighbor nodes (also called adjacent nodes) are directly connected nodes. When the chain nodes in the consortium chain network are in full-connection mode, each chain node establishes a direct communication connection with every other chain node; therefore, the neighbor nodes of each chain node include all other chain nodes.
- Heartbeat messages without business semantics are sent regularly between adjacent chain nodes to detect the network connection status of neighbor nodes and the communication status between a node and its neighbors, allowing the node to collect data such as the uplink bandwidth of each neighbor node and the network delay between itself and each neighbor.
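The smoothed round-trip time (SRTT) that these heartbeats feed can be maintained with the usual exponentially weighted average; the smoothing factor below is the conventional TCP value, assumed here rather than taken from the disclosure:

```python
from typing import Optional

ALPHA = 0.125  # assumed smoothing factor (the value TCP's SRTT estimator uses)

def update_srtt(srtt: Optional[float], rtt_ms: float) -> float:
    """Fold one heartbeat RTT sample (ms) into the smoothed round-trip time."""
    if srtt is None:  # first sample initialises the SRTT
        return rtt_ms
    return (1 - ALPHA) * srtt + ALPHA * rtt_ms
```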
- The local routing table local_route_table established by a chain node may include a list, where each item in the list is a node description of structure route_item, one per neighbor node.
- the local route table local_route_table can have the following data structure:
- nid is the ID of the corresponding neighbor node, i.e., the identification information of the chain node, for example the hash of the chain node's public key; srtt is the smoothed round-trip time of communication between this chain node and the neighbor node with ID nid; bandwidth is the uplink bandwidth of the neighbor node with ID nid; state is the status of the neighbor node with ID nid, for example indicating whether the node is online or offline, and whether it is overloaded, lightly loaded, or at normal CPU usage.
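In Python terms (a sketch only; the disclosure does not prescribe a language or concrete types), route_item and local_route_table might look like:

```python
from dataclasses import dataclass

@dataclass
class RouteItem:
    nid: str        # neighbor node ID, e.g. the hash of the neighbor's public key
    srtt: float     # smoothed round-trip time to this neighbor, ms
    bandwidth: int  # uplink bandwidth of this neighbor, Mbps
    state: str      # network/load status, e.g. "online", "offline", "overloaded"

# local_route_table is a list with one RouteItem per neighbor node
local_route_table = [
    RouteItem(nid="nid1", srtt=12.5, bandwidth=1000, state="online"),
    RouteItem(nid="nid2", srtt=30.1, bandwidth=200, state="online"),
]
```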
- Each chain node can periodically update its local routing table and broadcast its local routing table to all other chain nodes in the entire consortium chain network upon initial establishment and after each update.
- each chain node may not broadcast its local routing table, but only send its routing table to the previously determined master node in the alliance chain network.
- If a chain node is not directly connected to the master node, it can forward its local routing table to the master node through its neighbor nodes; if it is directly connected to the master node, it can send its local routing table to the master node directly.
- the master node receives the local routing tables of other chain nodes in the alliance chain network, and can establish a global routing table based on the local routing tables of each chain node.
- the global routing table global_route_table can have the following data structure:
- The map collection stores key-value pairs, in which a string (the chain node ID) is the key and a RouteList (the local routing table of the corresponding chain node) is the value.
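A minimal sketch of this map, with the chain-node ID string as key and that node's reported local routing table as value (the function name is illustrative):

```python
# global_route_table: chain-node ID -> that node's local routing table
global_route_table: dict = {}

def on_local_table(nid: str, route_list: list) -> None:
    """Install or refresh the local routing table reported by one chain node."""
    global_route_table[nid] = route_list

on_local_table("nid1", [("nid2", 12.5), ("nid3", 30.1)])  # (neighbor, srtt) pairs
print(len(global_route_table))  # 1
```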
- the master node can establish a multicast distribution tree for message distribution based on the global routing table.
- Each tree node in the multicast distribution tree corresponds to a link node in the alliance chain.
- a multicast distribution tree includes a root node, one or more layers of trunk nodes, and one layer of leaf nodes, with the root node being the master node.
- Figure 5 shows an example of a multicast distribution tree. Establishing a multicast distribution tree involves two processes.
- The first process determines the structure of the multicast distribution tree, that is, its order n and number of layers L, where the order n is the maximum number of child nodes of each tree node (other than the leaf nodes), and the number of layers L is the total number of trunk-node layers plus the leaf-node layer. The second process maps each chain node to (this disclosure also calls this configuring each chain node as) a tree node of the multicast distribution tree. Note that "child nodes" in this document, unless otherwise stated, means direct child nodes, excluding grandchild nodes.
- The master node determines the order n and number of layers L of the multicast distribution tree based on the total number of chain nodes N, the minimum requirement bw on the uplink bandwidth of a chain node (for example, nodes deployed in a consortium chain network are usually required to meet a minimum bandwidth requirement), the communication delay between chain nodes, and the consensus timeout T.
- the master node determines the order n and layer number L of the multicast distribution tree according to the following formulas 1 to 3:
- b is the size of the data packet including the transaction set within the consensus cycle
- SRTTk is the maximum smoothed round-trip time over the tree nodes in the k-th of the L layers;
- SRTT1 is the maximum smoothed round-trip time over the tree nodes in the first layer;
- SRTT2 is the maximum smoothed round-trip time over the tree nodes in the second layer, and so on.
- The size of a Pre-Prepare message is equivalent to the sum of the sizes of all transactions in a block-generation cycle. Therefore, the size b of the Pre-Prepare message (that is, of the data packet containing the transaction set within the consensus cycle) can be estimated from the average transaction size and the maximum TPS. Then, based on the maximum number of child nodes (that is, first-layer trunk nodes) that the master node can support, the upper limit of the order n can be calculated according to Formula 2.
- the left side of Formula 3 reflects the delay time caused by multi-layer message distribution according to the multicast distribution tree.
- The distribution of the Pre-Prepare message needs to complete within the consensus timeout T.
- Based on the consensus timeout T and the SRTT of each chain node, the upper limit of the number of layers L can be calculated according to Formula 3. Using the relationship in Formula 1 between the order n, the number of layers L, and the total number of chain nodes N, the values of n and L can then be adjusted and finally determined.
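Formula 1 itself is not reproduced in this text; one plausible reading is the capacity constraint N ≤ 1 + n + n² + … + n^L (a tree of order n with L layers below the root holds at most that many nodes). Under that assumption, the smallest workable L for given N and n can be found as:

```python
def min_layers(total_nodes: int, order: int) -> int:
    """Smallest L such that a tree of the given order, with L layers of
    trunk/leaf nodes below the root, can hold total_nodes tree nodes
    (assumed reading of Formula 1: 1 + n + n^2 + ... + n^L >= N)."""
    capacity, layer_width, layers = 1, 1, 0
    while capacity < total_nodes:
        layer_width *= order   # each layer can be `order` times wider
        capacity += layer_width
        layers += 1
    return layers

print(min_layers(1000, 10))  # 3
```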
- In some embodiments, the order n of the multicast distribution tree is greater than or equal to 3, so that the master node can arrange three or more next-layer nodes; that is, the number of first-layer trunk nodes is greater than or equal to 3.
- the master node configures each link node to each tree node of the multicast distribution tree.
- When configuring child nodes, the number of child nodes of a tree node can be made equal to the order n (that is, filled up); for example, each trunk node in Figure 5 has n child nodes. It can also be less than n; for example, the root node in Figure 5 has fewer than n child nodes.
- the master node configures each link node of the alliance chain network as the corresponding tree node of the multicast distribution tree based on the communication delay between each link node and the uplink bandwidth of each link node.
- For example, the master node can configure chain nodes whose uplink bandwidth is below a threshold as leaf nodes, configure chain nodes whose uplink bandwidth is above the threshold as trunk nodes, and place nodes with smaller communication delay to their parent node at layers closer to the parent.
- The uplink bandwidth of a trunk node is also an important consideration. If a chain node's uplink bandwidth is small (for example, below the threshold), it is configured as a leaf node; if its uplink bandwidth is large (for example, above the threshold), it can be configured as a trunk node.
- When arranging trunk nodes, the master node, in selecting the multiple child nodes of a distribution node, prioritizes the transmission links with the lowest delay based on the SRTT data in the adjacency list of the corresponding node in the global routing table (that is, that node's local routing table): nodes with small SRTT are selected first, provided their bandwidth can support the transmission requirement of their own number of child nodes.
- Specifically, the master node takes itself as the root node of the multicast distribution tree and selects, among the chain nodes directly connected to it, the n chain nodes with the smallest SRTT as the first-layer trunk nodes; then the child nodes of each first-layer trunk node are selected in turn, and so on, until the number of layers reaches L, thereby constructing the following message routing table list_pp_msg_route_table:
- nid is the ID of the corresponding chain node, i.e., the identification information of the chain node, for example the hash of the chain node's public key; parent_node_idx is the index value, in the multicast distribution tree, of the parent node of the chain node with ID nid.
- the index value of the tree node in the multicast distribution tree is the number of each tree node starting from the root node and going down layer by layer.
- the number at the position of each tree node in Figure 5 is the index value of the corresponding tree node.
- the message routing table list_pp_msg_route_table corresponding to the multicast distribution tree shown in Figure 5 is shown in Figure 6.
- the message routing table list_pp_msg_route_table can be an instance of the multicast distribution tree created by the master node and stored by all chain nodes.
- In Figure 6, the parent node index value of node 0 (the chain node with ID nid0) is -1, that is, node 0 is the root node; the parent node index value of node 1 (ID nid1) to node 3 (ID nid3) is 0, that is, the parent of nodes 1 to 3 is node 0; the parent node index value of node 4 (ID nid4) to node 7 (ID nid7) is 1, that is, the parent of nodes 4 to 7 is node 1; the parent node index value of node 8 (ID nid8) to node 11 (ID nid11) is 2, that is, the parent of nodes 8 to 11 is node 2; and the parent node index value of node 12 (ID nid12) to node 15 (ID nid15) is 3, that is, the parent of nodes 12 to 15 is node 3.
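The Figure 6 table can be written out as follows (a sketch; node IDs abbreviated to nid0…nid15):

```python
# (nid, parent_node_idx) pairs for the Figure 5 tree; -1 marks the root
parent_idx = [-1, 0, 0, 0,   # node 0 is the root; nodes 1-3 hang off node 0
              1, 1, 1, 1,    # nodes 4-7 hang off node 1
              2, 2, 2, 2,    # nodes 8-11 hang off node 2
              3, 3, 3, 3]    # nodes 12-15 hang off node 3
list_pp_msg_route_table = [(f"nid{i}", p) for i, p in enumerate(parent_idx)]

def children_of(idx: int) -> list:
    """Index values of the direct children of tree node idx."""
    return [i for i, (_, p) in enumerate(list_pp_msg_route_table) if p == idx]

print(children_of(0))  # [1, 2, 3]
print(children_of(1))  # [4, 5, 6, 7]
```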
- the master node broadcasts the multicast distribution tree it establishes in the alliance chain network, for example, to other chain nodes in the form of a message routing table. If it is the first time that the master node builds a multicast distribution tree, the master node will broadcast immediately; if it is not the first time that the master node builds a multicast distribution tree, the master node will broadcast when the update conditions described below are met. Other chain nodes receive and save the multicast distribution tree.
- When the master node, such as node 0, starts consensus, it multicasts the data packet containing the transaction set of the consensus cycle, that is, the Pre-Prepare message, to all its child nodes based on the multicast distribution tree, namely the first-layer trunk nodes such as node 1 to node 3. After a trunk node receives the Pre-Prepare message, it forwards the message to all its child nodes (which may be leaf nodes or next-layer trunk nodes) based on the multicast distribution tree.
- node 1 forwards the Pre-Prepare message to nodes 4 to 7
- node 2 forwards the Pre-Prepare message to nodes 8 to 11
- node 3 forwards the Pre-Prepare message to nodes 12 to 15.
- The message is forwarded layer by layer along the multicast distribution tree until the Pre-Prepare message reaches every leaf node, completing the full-chain distribution of the Pre-Prepare message in the consensus proposal phase.
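The layer-by-layer forwarding can be simulated against the Figure 5 tree to confirm that every node receives the message within the number of layers (a self-contained sketch):

```python
# Parent index of each tree node in the Figure 5 tree; -1 marks the root
parents = [-1, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]

def distribute(parents: list) -> dict:
    """Return the hop count at which each node receives the Pre-Prepare packet."""
    hops = {0: 0}        # the master (root) holds the packet at hop 0
    frontier = [0]
    while frontier:      # each round is one layer of forwarding
        nxt = [i for i, p in enumerate(parents) if p in frontier]
        for i in nxt:
            hops[i] = hops[parents[i]] + 1
        frontier = nxt
    return hops

hops = distribute(parents)
print(len(hops), max(hops.values()))  # 16 2  -> all 16 nodes reached in 2 hops
```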
- nodes 1 to 15 verify the transaction set in the message, and after passing the verification, send Prepare messages and Commit messages to other nodes to complete this consensus process. Prepare messages and Commit messages are usually much smaller than Pre-Prepare messages, so they can be sent in the existing way.
- For example, the full-connection mode between chain nodes can be used to send the Prepare and Commit messages, that is, each chain node broadcasts its Prepare and Commit messages to all other chain nodes.
- Alternatively, each chain node can send its Prepare and Commit messages to its neighbor nodes, which forward them onward, so that the messages reach the other chain nodes in the consortium chain network.
- the multicast distribution tree of the entire alliance chain network is established and maintained by the master node.
- The other chain nodes receive and save the multicast distribution tree from the master node, which ensures that the multicast distribution tree is consistent across chain nodes, avoiding message routing loops and reducing message storms caused by repeated forwarding.
- For example, to support 1000 nodes, the order n of the multicast distribution tree can be set to 10 and the number of layers L to 3. As described above, nodes with smaller communication delay to their parent node are configured at layers closer to the parent.
- Specifically, all chain nodes except the master node can be sorted by SRTT from small to large. The SRTT of the 10th chain node in this ordering is SRTT1, the maximum delay among the first-layer trunk nodes; the SRTT of the 110th chain node is SRTT2, the maximum delay among the second-layer trunk nodes; and the SRTT of the 999th chain node is SRTT3, the maximum delay among the leaf nodes.
- This ordering keeps the delay contributed by each layer as small as possible.
- in the operating mode in which the master node broadcasts the Pre-Prepare message to all other chain nodes, with 100 chain nodes in the network, the uplink bandwidth each node needs to provide is 1600 Mbps (as computed above), and the maximum delay that allows consensus to pass is Ro.
- with the method according to embodiments of the present disclosure, with 1000 chain nodes in the network, the uplink bandwidth the nodes need to provide is 160 Mbps, and the maximum delay that allows consensus to pass is SRTT1+SRTT2+Ro. It can be seen that the method of operating an alliance chain network according to embodiments of the present disclosure can reduce node bandwidth demand by 90% and increase the chain scale tenfold without significantly increasing consensus latency.
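The sorted-SRTT bookkeeping in this example can be sketched as follows; clamping the last layer to the number of available nodes is our reading of the text, not a procedure mandated by the disclosure:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Given the SRTTs of all chain nodes except the master, sorted ascending,
// the maximum delay of layer k of an order-n tree is the SRTT at the last
// slot of that layer: positions 1..n are layer 1, n+1..n+n^2 layer 2, etc.
uint32_t layer_max_srtt(const std::vector<uint32_t>& sorted_srtt,
                        uint64_t n, uint32_t layer) {
    uint64_t last = 0, width = 1;
    for (uint32_t k = 0; k < layer; ++k) { width *= n; last += width; }
    if (last > sorted_srtt.size()) last = sorted_srtt.size();
    return sorted_srtt[last - 1];  // 1-based position -> 0-based index
}
```

With n = 10 and 999 non-master nodes this reproduces the boundaries named above: position 10 for SRTT1, position 110 for SRTT2, and position 999 (the clamped last layer) for SRTT3.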
- the master node establishes a multicast distribution tree according to the communication status of each link node in the alliance chain network.
- the communication status of the chain nodes, such as the total number of chain nodes N, the minimum uplink bandwidth requirement bw for chain nodes, the communication delays between chain nodes, and the uplink bandwidth of each chain node, all change over time.
- each chain node regularly sends its local routing table to the master node. From the local routing tables periodically reported by the chain nodes in the alliance chain network, the master node can periodically calculate the total communication delay of the multicast distribution tree currently in use (hereinafter referred to as the first total delay); and, based on the changed communication status of the nodes and using the method described above, establish a candidate updated multicast distribution tree and calculate its total communication delay (hereinafter referred to as the second total delay).
- the method for estimating the total communication delay of the multicast distribution tree has been described above.
- the master node compares the two total delays. If the reduction of the second total delay relative to the first total delay meets a condition, for example a 10% reduction in delay, the master node replaces the multicast distribution tree currently in use with the candidate updated multicast distribution tree and broadcasts the updated tree to each chain node in the alliance chain network, so that it is used for Pre-Prepare message distribution in subsequent consensus processes, completing the optimization of the multicast distribution tree.
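The update rule above amounts to a small comparison; the 10% figure is the example threshold from the text, while the function name and delay units below are ours:

```cpp
#include <cassert>
#include <cstdint>

// Decide whether the candidate multicast distribution tree should replace
// the one in use: adopt it only if the second (candidate) total delay is
// at least `threshold` (e.g. 0.10 for 10%) lower than the first.
bool should_update_tree(uint32_t first_total_delay,
                        uint32_t second_total_delay,
                        double threshold) {
    if (second_total_delay >= first_total_delay) return false;
    double reduction =
        static_cast<double>(first_total_delay - second_total_delay) /
        static_cast<double>(first_total_delay);
    return reduction >= threshold;
}
```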
- the following three situations in the alliance chain network may trigger changes in the multicast distribution tree.
- the first situation: a chain node in the alliance chain network goes offline.
- if the offline chain node is a leaf node of the multicast distribution tree, there is no need to update the multicast distribution tree, because a leaf node going offline does not affect the multicast distribution process.
- if the offline chain node is a trunk node, different situations need to be handled. 1. If the offline trunk node is not a first-layer trunk node (the trunk nodes closest to the root node), overall consensus is not affected, because the number of chain nodes the message can still be distributed to is greater than 2/3 of the total number of chain nodes. From the local routing tables reported by the other chain nodes, the master node can detect that some nodes (that is, the child nodes of the offline chain node) cannot receive the distributed Pre-Prepare messages. The master node then waits until it has received updated local routing tables from all child nodes of the offline chain node, and re-establishes (i.e., updates) and broadcasts the multicast distribution tree.
- 2. If the offline trunk node is a first-layer trunk node and the order n of the multicast distribution tree is greater than 3, it is handled in the same way as item 1 above; if the order n is less than or equal to 3, the master node immediately updates and broadcasts the multicast distribution tree to ensure that the tree is updated before the Pre-Prepare message is distributed.
- in the second situation, a new chain node joins the alliance chain network. The master node first configures the new chain node as a leaf node of the multicast distribution tree, that is, places it in layer L, updates the multicast distribution tree, and notifies at least the new chain node and its parent node of the updated tree, for example by broadcasting the updated multicast distribution tree or by sending it in a directed manner.
- the master node regularly performs optimization of the multicast distribution tree, so the new link node may later be reconfigured as a trunk node of a certain layer or still configured as a leaf node.
- the third situation: the master node goes offline or is changed.
- each chain node in the alliance chain network re-elects the master node, after which each chain node periodically sends its local routing table to the new master node.
- the new master node establishes and broadcasts a multicast distribution tree based on the local routing table of each link node in the alliance chain network, and the new master node initiates a consensus proposal and sends the Pre-Prepare message to the first-layer trunk node.
- Each chain node in the alliance chain network forwards the received Pre-Prepare message packet to its child nodes according to the multicast distribution tree, so that the Pre-Prepare message is distributed throughout the alliance chain network.
- One or more embodiments of the present disclosure also provide a method for running a blockchain network, in which each node in the blockchain network is in a fully connected mode.
- the method includes: establishing, by the first chain node in the blockchain network according to the communication status of each chain node, a multicast distribution tree with each chain node in the blockchain network as a tree node.
- the multicast distribution tree includes a root node, one or more layers of trunk nodes, and one layer of leaf nodes, where the root node is the first chain node, and the root node and each trunk node have multiple child nodes; in response to the size of a first data packet to be sent to the other chain nodes being greater than a threshold, the first chain node and each trunk node multicast the first data packet to their child nodes according to the multicast distribution tree; and in response to the size of a second data packet to be sent to the other chain nodes being less than the threshold, the first chain node broadcasts the second data packet in the blockchain network.
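The size-threshold dispatch described in this aspect reduces to a single branch per outgoing packet. A minimal sketch follows; the type names and the example threshold are illustrative, not from the disclosure:

```cpp
#include <cassert>
#include <cstddef>

enum class SendMode { Multicast, Broadcast };

// Packets larger than the threshold travel down the multicast distribution
// tree; smaller ones are broadcast directly by the first chain node.
// The threshold value itself is deployment-specific.
SendMode pick_send_mode(std::size_t packet_bytes, std::size_t threshold) {
    return packet_bytes > threshold ? SendMode::Multicast
                                    : SendMode::Broadcast;
}
```

Under this rule a multi-megabyte Pre-Prepare packet would be multicast along the tree, while the much smaller Prepare and Commit packets would be broadcast directly.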
- larger data packets that need to be distributed across the entire blockchain network are distributed according to the multicast distribution tree, while smaller data packets that need network-wide distribution are broadcast directly by the corresponding chain nodes. In this way, while ensuring communication efficiency between chain nodes in the blockchain network, the demand on the uplink bandwidth of the chain nodes is reduced, which helps improve the performance and scale of the blockchain network.
- FIG. 7 is a schematic structural diagram of at least part of a node device 700 for a blockchain network according to an embodiment of the present disclosure.
- Node device 700 includes one or more processors 710, one or more memories 720, and other components typically found in a computer or the like (not shown).
- Each of the one or more memories 720 may store content that may be accessed by the one or more processors 710, including instructions 721 that may be executed by the one or more processors 710, and data 722 that may be retrieved, manipulated, or stored by the one or more processors 710.
- Instructions 721 may be any set of instructions to be executed directly by one or more processors 710, such as machine code, or indirectly, such as a script.
- the terms “instructions,” “applications,” “processes,” “steps,” and “programs” may be used interchangeably in this disclosure.
- Instructions 721 may be stored in object code format for direct processing by one or more processors 710, or in any other computer language, including a script or collection of independent source code modules that are interpreted on demand or compiled ahead of time. The functionality, methods, and routines of instructions 721 are explained in greater detail elsewhere in this disclosure.
- One or more memories 720 may be any transitory or non-transitory computer-readable storage medium capable of storing content accessible by one or more processors 710, such as a hard drive, memory card, ROM, RAM, DVD, CD, USB memory, writable memory, read-only memory, etc.
- The one or more memories 720 may include a distributed storage system, in which instructions 721 and/or data 722 may be stored on multiple different storage devices that may be physically located at the same or different geographic locations.
- One or more of the one or more memories 720 may be connected to the one or more processors 710 via a network, and/or may be directly connected to or incorporated into any of the one or more processors 710.
- One or more processors 710 may retrieve, store, or modify data 722 in accordance with instructions 721 .
- data 722 may also be stored in computer registers (not shown), in a relational database as a table with many different fields and records, or as an XML document.
- Data 722 may be formatted in any format readable by a computing device, such as, but not limited to, binary values, ASCII, or Unicode. Additionally, data 722 may include any information sufficient to identify the relevant information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories (such as at other network locations), or information used by a function to calculate the relevant data.
- the one or more processors 710 may be any conventional processor, such as a commercially available central processing unit (CPU), graphics processing unit (GPU), or the like. Alternatively, one or more processors 710 may also be dedicated components, such as application specific integrated circuits (ASICs) or other hardware-based processors. Although not required, one or more processors 710 may include specialized hardware components to perform certain computing processes faster or more efficiently.
- although the processors 710 and the one or more memories 720 are schematically shown in the same box in Figure 7, the node device 700 may actually include multiple processors or memories that may exist within the same physical enclosure or within different physical enclosures.
- reference to a processor, computer, computing device, or memory should be understood to include reference to a collection of processors, computers, computing devices, or memories that may or may not operate in parallel.
- FIG. 8 is an exemplary block diagram of a general hardware system 800 applicable in accordance with one or more exemplary embodiments of the present disclosure.
- System 800 will now be described with reference to Figure 8, which is an example of a hardware device that may be applied to aspects of the present disclosure.
- the node device 700 in the above embodiments may include all or part of the system 800.
- System 800 may be any machine configured to perform processing and/or computation, which may be, but is not limited to, a workstation, a server, a desktop computer, a laptop computer, a tablet computer, a personal data assistant, a smartphone, a vehicle-mounted computer, or any combination thereof.
- System 800 may include elements that may be coupled to or in communication with bus 802 via one or more interfaces.
- system 800 may include bus 802, as well as one or more processors 804, one or more input devices 806, and one or more output devices 808.
- the one or more processors 804 may be any type of processor and may include, but are not limited to, one or more general purpose processors and/or one or more special purpose processors (eg, special processing chips). Each operation and/or step in the method described above can be implemented by one or more processors 804 executing instructions.
- Input device 806 may be any type of device that can input information to a computing device and may include, but is not limited to, a mouse, a keyboard, a touch screen, a microphone, and/or a remote control.
- Output device 808 may be any type of device that can present information and may include, but is not limited to, a display, speakers, video/audio output terminal, vibrator, and/or printer.
- System 800 may also include or be connected to a non-transitory storage device 810.
- the non-transitory storage device 810 may be any storage device that is non-transitory and can implement data storage, and may include, but is not limited to, a disk drive, an optical storage device, a solid-state memory, a floppy disk, a hard disk, a magnetic tape or any other magnetic medium, an optical disc or any other optical medium, ROM (read-only memory), RAM (random access memory), cache memory and/or any other memory chip/chipset, and/or any other medium from which a computer can read data, instructions, and/or code.
- Non-transitory storage device 810 may be detached from the interface.
- the non-transitory storage device 810 may have data/instructions/code for implementing the methods, operations, steps and processes described above.
- System 800 may also include a communications device 812.
- Communication device 812 may be any type of device or system capable of communicating with external devices and/or with a network, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication device, and/or a chipset, such as a Bluetooth device, 802.11 devices, WiFi devices, WiMax devices, cellular communications devices, satellite communications devices, and/or the like.
- Bus 802 may include, but is not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
- bus 802 may also include a controller area network (CAN) bus or other architecture designed for use on a vehicle.
- System 800 may also include working memory 814, which may be any type of working memory that may store instructions and/or data useful for the operation of processor 804, and may include, but is not limited to, random access memory and/or read-only storage devices.
- Software elements may be located in working memory 814, including, but not limited to, operating system 816, one or more applications 818, drivers, and/or other data and code. Instructions for performing the methods, operations, and steps described above may be included in one or more applications 818.
- the executable code or source code of the instructions of the software elements may be stored in a non-transitory computer-readable storage medium, such as the storage device 810 described above, and may be read into the working memory 814 through compilation and/or installation.
- the executable code or source code of the instructions for the software element may also be downloaded from a remote location.
- system 800 can be distributed over a network. For example, some processing may be performed using one processor, while other processing may be performed by another processor remote from the one processor. Other components of system 800 may be similarly distributed. As such, system 800 may be interpreted as a distributed computing system that performs processing at multiple locations.
- the controller may be implemented in any suitable manner; for example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller.
- examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicone Labs C8051F320; a memory controller may also be implemented as part of the memory's control logic.
- in addition to implementing the controller purely in computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the devices included within it for implementing various functions can also be regarded as structures within the hardware component. Or even, the means for implementing various functions can be regarded both as structures within the hardware component and as software modules implementing the method.
- the systems, devices, modules or units described in the above embodiments may be implemented by computer chips or entities, or by products with certain functions.
- a typical implementation device is a server system.
- the computer that implements the functions of the above embodiments may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet, a wearable device, or a combination of any of these devices.
- the functions of each module may be implemented in one or more pieces of software and/or hardware, or the modules implementing the same function may be implemented by a combination of multiple sub-modules or sub-units.
- the device embodiments described above are only illustrative.
- the division of the units is only a logical function division. In actual implementation, there may be other division methods.
- multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
- These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means, which instruction means implement the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
- These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operating steps to be performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
- a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
- Memory may include non-permanent storage in computer-readable media, random access memory (RAM) and/or non-volatile memory in the form of read-only memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
- Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology.
- Information may be computer-readable instructions, data structures, modules of programs, or other data.
- Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape storage, magnetic disk storage, graphene storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
- computer-readable media does not include transitory media, such as modulated data signals and carrier waves.
- one or more embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
- One or more embodiments of the present disclosure may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer.
- program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
- One or more embodiments of the present disclosure may also be practiced in distributed computing environments where tasks are performed by remote processing devices connected through a communications network.
- program modules may be located in both local and remote computer storage media including storage devices.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Theoretical Computer Science (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Small-Scale Networks (AREA)
Abstract
The present disclosure relates to a method for running an alliance chain network, including: establishing, by a master node in the alliance chain network that is about to initiate a consensus proposal, a multicast distribution tree with each chain node in the alliance chain network as a tree node, the multicast distribution tree including one root node, one or more layers of trunk nodes, and one layer of leaf nodes, where the root node is the master node; broadcasting, by the master node, the multicast distribution tree in the alliance chain network; and, according to the multicast distribution tree, multicasting, by the master node, a data packet including the transaction set of a consensus cycle to the first-layer trunk nodes among the one or more layers of trunk nodes, and multicasting, by each trunk node, the received data packet to its child nodes.
Description
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on June 1, 2022, with application number 202210616509.3 and invention title "A Method for Running an Alliance Chain Network", the entire contents of which are incorporated herein by reference.
The embodiments of the present disclosure belong to the field of computer technology, and in particular relate to a method for running an alliance chain network, a method for running a blockchain network, and a node for a blockchain network.
With the development of blockchain technology, business processing modes based on alliance chain networks have become common. An alliance chain may be open only to members of a specific group and to limited third parties, and multiple service institutions may form an alliance. Authorized nodes are allowed to join the alliance chain network; the server (or server cluster) of each service institution can be a node in the alliance chain network, and nodes on the chain can view information according to their permissions. As the scale of business supported by the alliance chain grows, the number of consensus (or non-consensus) nodes that need to be connected to the alliance chain also increases.
Summary
The purpose of the present disclosure is to provide a method for running an alliance chain network, a method for running a blockchain network, and a node for a blockchain network.
According to a first aspect of the present disclosure, a method for running an alliance chain network is provided, including: establishing, by a master node in the alliance chain network that is about to initiate a consensus proposal, a multicast distribution tree with each chain node in the alliance chain network as a tree node, the multicast distribution tree including one root node, one or more layers of trunk nodes, and one layer of leaf nodes, where the root node is the master node; broadcasting, by the master node, the multicast distribution tree in the alliance chain network; and, according to the multicast distribution tree, multicasting, by the master node, a data packet including the transaction set of a consensus cycle to the first-layer trunk nodes among the one or more layers of trunk nodes, and multicasting, by each trunk node, the received data packet to its child nodes.
According to a second aspect of the present disclosure, a method for running an alliance chain network is provided, including: electing a master node by the chain nodes in the alliance chain network, the master node being able to initiate a consensus proposal; establishing, and periodically updating and broadcasting, by each chain node in the alliance chain network, its local routing table, the local routing table indicating the uplink bandwidth of each neighbor node of the corresponding chain node and the network delay between the corresponding chain node and each neighbor node; establishing, by the master node according to the local routing tables of the chain nodes in the alliance chain network, and broadcasting in the alliance chain network, a multicast distribution tree with each chain node in the alliance chain network as a tree node, the multicast distribution tree including one root node, one or more layers of trunk nodes, and one layer of leaf nodes, where the root node is the master node; and multicasting, by each chain node in the alliance chain network according to the multicast distribution tree, a data packet including the transaction set of a consensus cycle to its child nodes.
According to a third aspect of the present disclosure, a method for running a blockchain network is provided, where the nodes in the blockchain network are in a fully connected mode, and the method includes: establishing, by a first chain node in the blockchain network according to the communication status of each chain node, a multicast distribution tree with each chain node in the blockchain network as a tree node, the multicast distribution tree including one root node, one or more layers of trunk nodes, and one layer of leaf nodes, where the root node is the first chain node, and the root node and each trunk node each have multiple child nodes; in response to the size of a first data packet to be sent to the other chain nodes being greater than a threshold, multicasting, by the first chain node and each trunk node according to the multicast distribution tree, the first data packet to their child nodes; and in response to the size of a second data packet to be sent to the other chain nodes being less than the threshold, broadcasting, by the first chain node, the second data packet in the blockchain network.
According to a fourth aspect of the present disclosure, a node for a blockchain network is provided, including circuitry configured to: establish, and periodically update and broadcast, its local routing table, the local routing table indicating the uplink bandwidth of each neighbor node of the corresponding chain node and the network delay between the corresponding chain node and each neighbor node; in response to the node being elected as the master node: establish, according to the local routing tables of the chain nodes in the alliance chain network, and broadcast in the alliance chain network, a multicast distribution tree with each chain node in the alliance chain network as a tree node, the multicast distribution tree including one root node, one or more layers of trunk nodes, and one layer of leaf nodes, where the root node is the master node; and, according to the multicast distribution tree, multicast a data packet including the transaction set of a consensus cycle to the first-layer trunk nodes among the one or more layers of trunk nodes; in response to the node not being elected as the master node: receive the multicast distribution tree broadcast by the master node; and, according to the multicast distribution tree, multicast the received data packet including the transaction set of the consensus cycle to the child nodes of the node.
In order to explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments recorded in the present disclosure; for those of ordinary skill in the art, other drawings can be obtained based on these drawings without creative effort.
FIG. 1 is a schematic diagram of an example alliance chain network;
FIG. 2 is a schematic diagram of the topology of the example alliance chain network of FIG. 1;
FIG. 3 is a schematic diagram of the master node in an alliance chain network sending a Pre-Prepare message;
FIG. 4 is a schematic diagram of the master node sending a Pre-Prepare message in a method for running an alliance chain network according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of the multicast distribution tree along which the master node in FIG. 4 sends the Pre-Prepare message;
FIG. 6 is a schematic diagram of the message routing table with which the master node in FIG. 4 sends the Pre-Prepare message;
FIG. 7 is a schematic structural diagram of at least part of a node device for a blockchain network according to an embodiment of the present disclosure;
FIG. 8 is an exemplary block diagram of a general hardware system applicable according to an embodiment of the present disclosure.
In order to enable those skilled in the art to better understand the technical solutions in the present disclosure, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some of the embodiments of the present disclosure, rather than all of them. Based on the embodiments in the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the scope of protection of the present disclosure.
FIG. 1 is a schematic diagram of an example alliance chain network. The example alliance chain network has multiple participants (participants A, B, and C), and each participant may have one or more alliance chain nodes (node 0 to node 5). Each alliance chain node can receive transactions from the customers served by the participant, and the nodes in the alliance chain network perform distributed execution and evidence storage. To distinguish them from the nodes of the multicast distribution tree described later, the on-chain nodes in the alliance chain network are referred to herein as "chain nodes", and the root node, trunk nodes, and leaf nodes of the multicast distribution tree are collectively referred to as "tree nodes".
The network topology of the alliance chain may be a fully connected (full-mesh) network structure, as shown in FIG. 2. In the fully connected mode, each chain node is directly connected in communication with every other chain node. The consensus algorithm adopted by the alliance chain may be the Practical Byzantine Fault Tolerance (PBFT) algorithm. One chain node in the alliance chain, for example node 0, can be determined as the master node through election. In the PBFT consensus algorithm, the master node is responsible for initiating consensus proposals and broadcasting the transaction set of a consensus cycle (the Pre-Prepare message) to the other chain nodes (in the consensus phase, the chain nodes participating in consensus), as shown in FIG. 3. Specifically, the master node generates a state tree, a transaction tree, and a receipt tree according to the transaction contents and execution results of the transactions stored by this node, and records the root hashes corresponding to the root nodes of these three trees in the block header; then, after packaging this group of transactions and generating a new block, the master node broadcasts the block (or block header) to the other chain nodes. After receiving the Pre-Prepare message, the other chain nodes, for example node 1 to node 15, verify the root hashes in the block header by executing the group of transactions in the Pre-Prepare message. After the consensus proposal passes verification, each node sends a Prepare message to the other nodes. Within a predetermined time range, if Prepare messages are received from more than 2F different chain nodes (where F is the number of fault-tolerant nodes in the PBFT consensus algorithm), the Prepare phase is complete and the Commit phase begins. Each chain node broadcasts a Commit message to the other nodes; when 2F+1 Commit messages have been received (including its own), consensus in the current consensus cycle has been reached, and each chain node can append the block containing this group of transactions to the end of the original blockchain (also called chained storage, or putting on chain) and update the world state according to the execution results of this group of transactions.
It should be noted that a transaction described herein refers to a piece of data that is created by a user through a client of the blockchain and needs to be finally published to the distributed database of the blockchain. A transaction is a data structure agreed upon in the blockchain protocol; for a piece of data to be stored in the blockchain, it needs to be encapsulated as a transaction. Transactions in a blockchain can be divided into transactions in a narrow sense and transactions in a broad sense. A transaction in the narrow sense refers to a value transfer published by a user to the blockchain; for example, in a traditional Bitcoin blockchain network, a transaction can be a transfer initiated by a user in the blockchain. A transaction in the broad sense refers to business data with business intent published by a user to the blockchain; for example, an operator can build an alliance chain based on actual business needs and, relying on the alliance chain, deploy other types of online business unrelated to value transfer (such as house rental, vehicle dispatching, insurance claims, credit services, medical services, and the like). In such an alliance chain, a transaction can be a business message or business request with business intent published by a user in the alliance chain.
It can be seen that in the consensus process described above, the master node needs to broadcast the Pre-Prepare message, which includes all the transactions of a consensus cycle, to all other consensus nodes in the alliance chain network, as shown in FIG. 3. The Pre-Prepare message contains all the transaction data of a block, generally several hundred KB to 10 MB. With the PBFT algorithm, the master node has to send the Pre-Prepare message to all other consensus nodes, and as the number of consensus nodes increases, the bandwidth required by the master node multiplies. The uplink bandwidth of the master node often becomes the bottleneck of the throughput of the entire alliance chain. For example, for an alliance chain network with 100 chain nodes, if the average transaction size is 1 KB and 2000 TPS (Transactions Per Second) is to be supported, the bandwidth each node needs to provide is: 2000 TPS * 1 KB * 8 bit * (100-1) nodes ≈ 1600 Mbps. If the number of chain nodes in the alliance chain network increases, the bandwidth requirement on each node increases accordingly.
The method for running an alliance chain network provided by one or more embodiments of the present disclosure builds, for the process in which the master node distributes the Pre-Prepare message, a multicast distribution tree over the chain nodes of the alliance chain network, with the master node as the data source, i.e., the root node of the multicast distribution tree, and some nodes as one or more intermediate layers for message distribution (i.e., one or more layers of trunk nodes), so as to distribute the message to more edge nodes (i.e., leaf nodes), so that the Pre-Prepare message is distributed from the master node to each chain node. According to the method of these embodiments, the master node does not need to send the Pre-Prepare message to all other chain nodes, but only to some of the chain nodes (i.e., the first-layer trunk nodes), which can greatly reduce the bandwidth requirement on the master node. In this way, even with a huge number of chain nodes, the bandwidth of the master node can be guaranteed to meet the requirement, which helps the alliance chain network to scale. The method provided by one or more embodiments of the present disclosure can establish a delay-optimal, variable single-source multicast distribution tree based on the communication delays between chain nodes, the latency requirement of the consensus phase, and the uplink bandwidth of each chain node. According to the method of these embodiments, the total message distribution delay brought by the multicast distribution tree can be dynamically optimized, which helps improve the performance of the alliance chain network.
A method for running an alliance chain network according to embodiments of the present disclosure is described below with specific examples in conjunction with FIGS. 4 to 6. In a blockchain network adopting a mechanism such as PBFT, the master node (i.e., the bookkeeping node) for initiating consensus proposals has been elected and determined before a round of consensus begins. The election may be held periodically or triggered by an event; for example, a failed consensus triggers re-election of the master node. After receiving a transaction submitted by a client, a non-master node sends the transaction to the determined master node. In the consensus phase, the master node packages the transactions of the current consensus cycle and sends them to the other chain nodes (i.e., the verification nodes).
Each chain node in the alliance chain network establishes its own local routing table. The local routing table of a chain node can indicate the uplink bandwidth of each neighbor node of the chain node and the network delay between the chain node and each neighbor node. A neighbor node, also called an adjacent node, is a directly connected node. When the chain nodes in the alliance chain network are in a fully connected mode, each chain node establishes a direct communication connection with every other chain node, so the neighbor nodes of each chain node include all other chain nodes. Adjacent chain nodes periodically send heartbeat packets without business semantics to each other to probe the network connection state of the neighbor nodes and the communication status between this node and its neighbor nodes, so that this node can collect statistics such as the uplink bandwidth of each neighbor node and the network delay between this node and each neighbor node.
In a specific example, the local routing table local_route_table established by a first chain node may include a list, each item of which is a node description with the structure route_item, established for each neighbor node. For example, the local routing table local_route_table may have the following data structure:
struct route_item{
    string nid;         // ID of the neighbor node, e.g., the hash of its public key
    uint32_t srtt;      // smoothed round-trip time to the neighbor node
    uint32_t bandwidth; // uplink bandwidth of the neighbor node
    uint16_t state;     // state of the neighbor node (online/offline, load, etc.)
};
std::list<route_item> local_route_table;
Here, nid is the ID of the corresponding neighbor node, which is the identification information of that chain node and may be, for example, the hash value of the chain node's public key; srtt is the Smoothed Round Trip Time of communication between the first chain node and the neighbor node with ID nid; bandwidth is the uplink bandwidth of the neighbor node with ID nid; and state is the state of the neighbor node with ID nid, for example a network connection state indicating that the node is online or offline, a CPU usage indicating that the node is overloaded, lightly loaded, or normal, and so on.
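As an illustration of how this table might be consulted, the sketch below (the numeric encoding of state is assumed, not specified by the disclosure) picks the online neighbor with the smallest SRTT, which is the same preference applied later when arranging trunk nodes:

```cpp
#include <cassert>
#include <cstdint>
#include <list>
#include <string>
using std::string;

struct route_item {
    string nid;         // neighbor node ID
    uint32_t srtt;      // smoothed round-trip time (ms)
    uint32_t bandwidth; // neighbor uplink bandwidth (Mbps)
    uint16_t state;     // assumed encoding: 0 = online, nonzero = offline
};

// Return the online neighbor with the smallest SRTT, or nullptr if none.
const route_item* best_neighbor(const std::list<route_item>& table) {
    const route_item* best = nullptr;
    for (const auto& item : table) {
        if (item.state != 0) continue;  // skip offline neighbors
        if (!best || item.srtt < best->srtt) best = &item;
    }
    return best;
}
```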
Each chain node may periodically update its own local routing table and broadcast its local routing table to all other chain nodes in the entire alliance chain network after the initial establishment and after each update. In some embodiments, instead of broadcasting its local routing table, each chain node may only send its routing table to the previously determined master node of the alliance chain network. Under some network architectures, if a chain node is not directly connected to the master node, the chain node can forward its local routing table to the master node through its neighbor nodes; if a direct connection exists between the chain node and the master node, the chain node can send its local routing table directly to the master node.
The master node receives the local routing tables of the other chain nodes in the alliance chain network and can establish a global routing table according to the local routing tables of the chain nodes. In a specific example, the global routing table global_route_table may have the following data structure:
std::map<string, RouteList> global_route_table;
The map collection stores key-value pairs, where the string is used as the key and is the chain node ID, and the RouteList is used as the value and is the local routing table of the corresponding chain node.
After establishing the global routing table, the master node can establish a multicast distribution tree for message distribution according to the global routing table. Each tree node in the multicast distribution tree corresponds to a chain node in the alliance chain. The multicast distribution tree includes one root node, one or more layers of trunk nodes, and one layer of leaf nodes, where the root node is the master node. FIG. 5 shows an example of a multicast distribution tree. Establishing the multicast distribution tree involves two processes. The first process is determining the structure of the multicast distribution tree, that is, determining the order n and the number of layers L of the multicast distribution tree, where the order n indicates the maximum number of child nodes of each tree node (except leaf nodes), and the number of layers L indicates the total number of layers of trunk nodes and leaf nodes. The second process is mapping each chain node to (also referred to herein as configuring it as) a tree node of the multicast distribution tree. It should be noted that "child node" herein, unless otherwise specified, refers to a direct child node and does not include grandchild nodes.
In the first process, the master node determines the order n and the number of layers L of the multicast distribution tree based on the total number of chain nodes N, the minimum uplink bandwidth requirement bw for chain nodes (for example, an alliance chain network usually requires deployed nodes to meet a minimum bandwidth requirement), the communication delays between chain nodes, and the consensus timeout T. In a specific example, the master node determines the order n and the number of layers L according to the following Formulas 1 to 3:
1 + n + n^2 + n^3 + ... + n^(L-1) <= N <= 1 + n + n^2 + n^3 + ... + n^L    (Formula 1)
n * b <= bw    (Formula 2)
SRTT1 + SRTT2 + ... + SRTT(L-1) + SRTT(L) < T    (Formula 3)
Here, b is the size of the data packet including the transaction set of a consensus cycle, and SRTTk is the maximum smoothed round-trip time among the tree nodes of the k-th of the L layers; for example, SRTT1 is the maximum smoothed round-trip time among the tree nodes of layer 1, SRTT2 is the maximum among the tree nodes of layer 2, and so on.
As mentioned above, the size of a Pre-Prepare message is roughly the total size of all transactions in one block-producing cycle. Therefore, the size b of the Pre-Prepare message (i.e., the data packet including the transaction set of a consensus cycle) can be estimated from the average transaction size and the maximum TPS, and, based on the maximum number of child nodes (i.e., first-layer trunk nodes) the master node can support, the upper bound of the order n can be calculated from Formula 2. The left side of Formula 3 reflects the delay introduced by multi-layer message distribution along the multicast distribution tree; to ensure that consensus completes within the expected time, distribution of the Pre-Prepare message from the master node to the leaf nodes must finish within the consensus timeout T. Therefore, the upper bound of the number of layers L can be calculated from Formula 3 using the consensus timeout T and the SRTT of each chain node. According to the relationship among the order n, the number of layers L, and the total number of chain nodes N in Formula 1, the values of n and L can be adjusted and finally determined.
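The selection of n and L can be expressed in code; the search strategy below (take the largest order allowed by Formula 2, then the smallest depth satisfying Formula 1, leaving the Formula 3 delay check to the caller) is one plausible reading of the text, not the patent's mandated procedure:

```cpp
#include <cassert>
#include <cstdint>

// Capacity of a tree of order n with L layers below the root:
// 1 + n + n^2 + ... + n^L (the right-hand side of Formula 1).
uint64_t tree_capacity(uint64_t n, uint32_t L) {
    uint64_t total = 1, layer = 1;
    for (uint32_t k = 0; k < L; ++k) {
        layer *= n;
        total += layer;
    }
    return total;
}

// Pick the order n and layer count L for N chain nodes, a Pre-Prepare size
// of b (Mb), and a minimum node uplink bandwidth bw (Mbps), the units used
// in the document's own example. Formula 2 (n * b <= bw) caps n; Formula 1
// then gives the smallest L. Formula 3 (delay budget vs. T) must still be
// checked separately by the caller.
bool choose_tree_shape(uint64_t N, double b, double bw,
                       uint64_t* n_out, uint32_t* L_out) {
    uint64_t n = static_cast<uint64_t>(bw / b);  // Formula 2 upper bound
    if (n < 3) return false;                     // stability: order >= 3
    for (uint32_t L = 1; L <= 16; ++L) {
        if (tree_capacity(n, L) >= N) { *n_out = n; *L_out = L; return true; }
    }
    return false;
}
```

Plugging in the document's later example (N = 1000, b = 16 Mb, bw = 160 Mbps) yields n = 10 and L = 3, matching the values stated there.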
To ensure the stability of consensus, the order n of the multicast distribution tree is greater than or equal to 3, so that the master node can arrange three or more next-level nodes, that is, the number of first-layer trunk nodes among the one or more layers of trunk nodes is greater than or equal to 3. In this way, even if one of the master node's next-level nodes goes down, 2/3 of all chain nodes other than the master node can still receive the Pre-Prepare message. In other words, the failure of one first-layer trunk node does not immediately prevent consensus from passing, which leaves the master node ample time to adjust the multicast distribution tree, for example by establishing a new multicast distribution tree before the next consensus round.
After the order n and the number of layers L are determined, the structure of the entire multicast distribution tree is determined: it includes one root node (for example, node 0 in FIG. 5), with L layers of nodes below the root (in the example shown in FIG. 5, L = 2), and each node has at most n (the order) child nodes (in the example shown in FIG. 5, n = 4). In the second process described below, the master node configures the chain nodes onto the tree nodes of the multicast distribution tree. When configuring child nodes for the root node or a trunk node, n child nodes are usually configured (i.e., the order n is filled up; for example, each trunk node in FIG. 5 has n child nodes), but fewer than n may also be configured (for example, the root node in FIG. 5 has fewer than n child nodes).
In the second process, the master node configures the chain nodes of the alliance chain network as the corresponding tree nodes of the multicast distribution tree according to the communication delays between chain nodes and the uplink bandwidth of each chain node. The master node may configure chain nodes whose uplink bandwidth is below a threshold as leaf nodes, configure chain nodes whose uplink bandwidth is above the threshold as trunk nodes, and place nodes with smaller communication delays to their parent node at layers closer to the parent node.
Since the intermediate distribution nodes serving as trunk nodes need to multicast data packets to multiple child nodes, the uplink bandwidth of a trunk node is also an important consideration. If a chain node's uplink bandwidth is small (for example, below the threshold), it is configured as a leaf node; if its uplink bandwidth is large (for example, above the threshold), it can be configured as a trunk node.
When arranging trunk nodes, the master node, in selecting its own child nodes, first gives priority to the lowest-delay transmission links based on the SRTT data in its local routing table; when selecting child nodes for a distribution node, the master node gives priority to nodes with small SRTT based on the SRTT data in the adjacency list of the corresponding node in the global routing table (i.e., that node's local routing table), provided the node's bandwidth can support the transmission bandwidth demand of its number of child nodes.
In one example, the master node takes itself as the root node of the multicast distribution tree and selects, among the chain nodes directly connected to it, the n chain nodes with the smallest SRTT as the first-layer trunk nodes; it then selects child nodes for each of the first-layer trunk nodes in turn, and so on, until the number of layers reaches L, thereby constructing the following message routing table list_pp_msg_route_table:
struct msg_route_item{
    string nid;          // ID of the chain node, e.g., the hash of its public key
    int parent_node_idx; // index of this node's parent in the multicast distribution tree (-1 for the root)
};
std::list<msg_route_item> list_pp_msg_route_table;
Here, nid is the ID of the corresponding chain node, which is the identification information of the chain node and may be, for example, the hash value of the chain node's public key; parent_node_idx is the index of the parent node, in the multicast distribution tree, of the chain node with ID nid. The index of a tree node in the multicast distribution tree is the number assigned to each tree node layer by layer downward starting from the root node; the number at each tree node position in FIG. 5 is the index of the corresponding tree node.
The message routing table list_pp_msg_route_table corresponding to the multicast distribution tree shown in FIG. 5 is shown in FIG. 6. The message routing table list_pp_msg_route_table may be an instance of the multicast distribution tree created by the master node and stored by all chain nodes. In the instance of the message routing table shown in FIG. 6, the parent index of node 0 (the chain node with ID nid0) is -1, i.e., node 0 is the root node; the parent index of node 1 (ID nid1) to node 3 (ID nid3) is 0, i.e., their parent is node 0; the parent index of node 4 (ID nid4) to node 7 (ID nid7) is 1, i.e., their parent is node 1; the parent index of node 8 (ID nid8) to node 11 (ID nid11) is 2, i.e., their parent is node 2; and the parent index of node 12 (ID nid12) to node 15 (ID nid15) is 3, i.e., their parent is node 3.
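Given the stored routing table, a chain node can derive its forwarding set by scanning for entries whose parent index equals its own index. A minimal sketch, assuming the flat index numbering of FIG. 5 and FIG. 6 (names are illustrative):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Flattened routing-table entry, mirroring msg_route_item above;
// entry i describes the tree node with index i.
struct MsgRouteItem {
    std::string nid;
    int parent_node_idx;  // -1 for the root
};

// Indices of the direct children of the node at my_idx: a chain node
// forwards the Pre-Prepare message to exactly these entries.
std::vector<int> children_of(const std::vector<MsgRouteItem>& table,
                             int my_idx) {
    std::vector<int> kids;
    for (int i = 0; i < static_cast<int>(table.size()); ++i) {
        if (table[i].parent_node_idx == my_idx) kids.push_back(i);
    }
    return kids;
}
```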
The primary node broadcasts the multicast distribution tree it has built in the consortium chain network, for example in the form of the message routing table, to the other chain nodes. If this is the first tree the primary node has built, it broadcasts immediately; otherwise, it broadcasts when the update conditions described below are met. The other chain nodes receive and store the multicast distribution tree.
Referring to Fig. 4, when the primary node, e.g., node 0, starts consensus, it multicasts, according to the multicast distribution tree, the data packet containing the transaction set of the consensus cycle, i.e., the Pre-Prepare message packet, to all of its child nodes, i.e., the first-layer trunk nodes, e.g., nodes 1 through 3. Upon receiving the Pre-Prepare message, a trunk node forwards it, according to the tree, to all of its own child nodes (which may be leaf nodes or trunk nodes of the next layer). For example, node 1 forwards the Pre-Prepare message to nodes 4 through 7, node 2 to nodes 8 through 11, and node 3 to nodes 12 through 15. The message is forwarded layer by layer along the multicast distribution tree until it reaches every leaf node, completing the whole-chain distribution of the Pre-Prepare message in the consensus proposal phase. After receiving the Pre-Prepare message, nodes 1 through 15 verify the transaction set in the message and, once verification passes, send Prepare and Commit messages to the other nodes to complete the current consensus round. Prepare and Commit messages are usually much smaller than the Pre-Prepare message, so they can be sent in the existing manner. For example, if the consortium chain network has a fully connected topology, the fully connected mode between chain nodes can be used to send the Prepare and Commit messages, i.e., each chain node broadcasts its Prepare and Commit messages to all other chain nodes. In a non-fully-connected topology, each chain node can send its Prepare and Commit messages to its neighbor nodes, which forward them onward so that the Prepare and Commit messages reach the other chain nodes in the consortium chain network.
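The layer-by-layer forwarding step can be sketched as below. This is only an illustration of the lookup each node performs on the shared routing table; a real chain node would send the Pre-Prepare packet to the returned nodes over the network rather than collect their indices.

```cpp
#include <string>
#include <vector>

// Same layout as the msg_route_item struct in the description.
struct msg_route_item {
    std::string nid;
    int parent_node_idx;
};

// A node's children are exactly the routing-table entries whose parent
// index equals the node's own index; leaves simply have no children.
std::vector<int> children_of(const std::vector<msg_route_item>& table, int idx) {
    std::vector<int> kids;
    for (int i = 0; i < static_cast<int>(table.size()); ++i)
        if (table[i].parent_node_idx == idx) kids.push_back(i);
    return kids;
}
```

Because every chain node holds the same table, each node can derive its child set locally and forward the packet without any extra coordination.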
In the above process, the multicast distribution tree of the whole consortium chain network is built and maintained solely by the primary node, and the other chain nodes receive and store it from the primary node. This guarantees that every chain node holds an identical multicast distribution tree, which avoids message routing loops and reduces the message storms caused by duplicate forwarding.
In a concrete example, the consortium chain network has 1000 chain nodes, i.e., N = 1000. If the average transaction size is 1 KB and one consensus cycle contains 2000 transactions, each Pre-Prepare message is 2000 * 1KB * 8bit = 16 Mb. If the minimum bandwidth bw required of nodes deployed in the consortium chain is 160 Mbps, then by the method above the order n of the multicast distribution tree can be set to 10 and the layer count L to 3, supporting 1000 nodes. Following the method described above, nodes with smaller communication delay to their parent are placed in layers closer to the parent, so when estimating the total delay of the tree, the SRTTs of all chain nodes other than the primary node can be sorted in ascending order: the SRTT of the 10th chain node in this ordering is SRTT1, the maximum delay among the first-layer trunk nodes; the SRTT of the 110th chain node is SRTT2, the maximum delay among the second-layer trunk nodes; and the SRTT of the 999th chain node is SRTT3, the maximum delay among the leaf nodes. However, since consensus passes once a total of (2N+1)/3 nodes have verified the Pre-Prepare message, the shortest possible delay of the third layer, i.e., the leaf layer, is the SRTT of the (2N+1)/3-th chain node in the overall ordering (denoted Ro below for brevity). Therefore, the shortest possible total delay of the multicast distribution tree that still allows consensus to pass is Rn = SRTT1 + SRTT2 + Ro.
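The delay estimate of this worked example can be sketched as below. This is a simplified sketch under assumptions stated here, not the patent's algorithm: it assumes full layers of exactly 10 and 100 trunk nodes (order = 10), takes the worst SRTT of each trunk layer, and adds the SRTT of the node at the (2N+1)/3 quorum boundary.

```cpp
#include <algorithm>
#include <vector>

// Estimate Rn = SRTT1 + SRTT2 + Ro for a 3-layer tree of the given order:
// srtt holds the SRTTs of all chain nodes except the primary (ms).
double min_total_delay(std::vector<double> srtt, int order) {
    std::sort(srtt.begin(), srtt.end());
    long long N = static_cast<long long>(srtt.size()) + 1;  // +1 for the primary
    double srtt1 = srtt[order - 1];                 // worst of the 1st trunk layer
    double srtt2 = srtt[order + order * order - 1]; // worst of the 2nd trunk layer
    double ro = srtt[(2 * N + 1) / 3 - 1];          // quorum-boundary node
    return srtt1 + srtt2 + ro;
}
```

With 999 nodes whose SRTTs are 1, 2, ..., 999 ms and order 10, this gives 10 + 110 + 667 = 787 ms.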
In the scheme where the primary node broadcasts the Pre-Prepare message directly to all other chain nodes, with 100 chain nodes in the whole consortium chain network the node must provide 1600 Mbps of uplink bandwidth (as described above), and the maximum delay that still allows consensus to pass is Ro. In contrast, with the method of operating a consortium chain network according to embodiments of this disclosure, with 1000 chain nodes in the whole network a node needs to provide only 160 Mbps of uplink bandwidth, and the maximum delay that still allows consensus to pass is SRTT1 + SRTT2 + Ro. Thus, without a substantial increase in consensus latency, the method can reduce node bandwidth demand by 90% while growing the chain scale tenfold.
In the embodiments of this disclosure, the primary node builds the multicast distribution tree according to the communication conditions of the chain nodes in the consortium chain network. As the network runs, however, those conditions, e.g., the total number N of chain nodes, the minimum uplink-bandwidth requirement bw, the communication delays between chain nodes, and each chain node's uplink bandwidth, change over time. As described above, each chain node periodically sends its local routing table to the primary node. The primary node can periodically use the local routing tables reported by the chain nodes to compute the total communication delay of the currently used multicast distribution tree (hereafter the first total delay); build a candidate updated multicast distribution tree from the changed communication conditions using the method described above; and compute the total communication delay of that candidate tree (hereafter the second total delay). The estimation method for the total communication delay of a multicast distribution tree has been described above. The primary node compares the two total delays; if the reduction of the second total delay relative to the first meets a condition, e.g., a 10% reduction in delay, the primary node replaces the currently used tree with the candidate updated tree, broadcasts the updated tree to the chain nodes of the consortium chain network, and uses it for Pre-Prepare message distribution in subsequent consensus processes, thereby completing the optimization of the multicast distribution tree.
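The update decision above reduces to a one-line comparison. This is a minimal sketch; the parameter name min_reduction and the "at least X% lower" shape are our own reading of the "e.g., a 10% reduction" condition.

```cpp
// Keep the current tree unless the candidate tree's total delay is lower
// by at least min_reduction (0.10 for the 10% figure in the text above).
bool should_update_tree(double current_delay, double candidate_delay,
                        double min_reduction) {
    return candidate_delay <= current_delay * (1.0 - min_reduction);
}
```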
In addition, the following three situations in the consortium chain network may trigger a change of the multicast distribution tree.
Case 1: a chain node in the consortium chain network goes offline.
If the offline chain node is a leaf node of the multicast distribution tree, no update is needed, because a leaf node going offline does not affect the multicast distribution process.
If the offline chain node is a trunk node, the handling depends on the situation. 1. If the offline trunk node is not a first-layer trunk node (first-layer meaning closest to the root), overall consensus is unaffected, because the message can still reach more than 2/3 of all chain nodes. From the local routing tables reported by the other chain nodes, the primary node can detect that some nodes (the child nodes of the offline chain node) can no longer receive the distributed Pre-Prepare message. The primary node then waits until it has received updated local routing tables from all child nodes of the offline chain node, rebuilds (i.e., updates) the multicast distribution tree using the method described above, and broadcasts the updated tree within the consortium chain network. This update can be completed after the current consensus process ends and before the next one begins, so the tree is updated without disrupting consensus. 2. If the offline trunk node is a first-layer trunk node and the order n of the multicast distribution tree is greater than 3, it is handled in the same way as in item 1; if the order n is less than or equal to 3, the primary node updates and broadcasts the multicast distribution tree immediately, ensuring the tree is updated before the Pre-Prepare message is distributed.
Case 2: a new chain node joins the consortium chain network.
When a new chain node joins the consortium chain network, the primary node first configures it as a leaf node of the multicast distribution tree, i.e., places it in layer L, updates the tree, and notifies at least the new chain node and its parent node of the updated tree, e.g., by broadcasting the updated tree or sending it to them directly. Moreover, as described above, the primary node periodically optimizes the multicast distribution tree, so the new chain node may later be reconfigured as a trunk node of some layer or remain a leaf node.
Case 3: the primary node goes offline or is changed.
If the primary node goes offline or a primary-node change condition is met (e.g., consensus fails), the chain nodes of the consortium chain network re-elect a primary node, after which each chain node periodically sends its local routing table to the new primary node. The new primary node builds and broadcasts a multicast distribution tree from the local routing tables of the chain nodes of the consortium chain network and initiates the consensus proposal, sending the Pre-Prepare message to the first-layer trunk nodes. Each chain node of the consortium chain network forwards the received Pre-Prepare message packet to its child nodes according to the multicast distribution tree, so that the Pre-Prepare message is distributed throughout the consortium chain network.
One or more embodiments of this disclosure further provide a method of operating a blockchain network in which the nodes are fully connected to one another. The method includes: a first chain node of the blockchain network builds, according to the communication conditions of the chain nodes, a multicast distribution tree whose tree nodes are the chain nodes of the blockchain network, the tree comprising one root node, one or more layers of trunk nodes, and one layer of leaf nodes, where the root node is the first chain node and the root node and every trunk node each have multiple child nodes; in response to a first data packet to be sent to the other chain nodes being larger than a threshold, the first chain node and every trunk node multicast the first data packet to their child nodes according to the multicast distribution tree; and in response to a second data packet to be sent to the other chain nodes being smaller than the threshold, the first chain node broadcasts the second data packet in the blockchain network. With this method, in a fully connected blockchain network, large data packets that must be distributed network-wide are distributed along the multicast distribution tree, while small network-wide data packets are broadcast directly by the originating chain node. This lowers the uplink bandwidth required of chain nodes while preserving communication efficiency among them, helping the blockchain network improve performance and scale up.
Fig. 7 is a schematic diagram of at least part of the structure of a node device 700 for a blockchain network according to an embodiment of this disclosure. The node device 700 includes one or more processors 710, one or more memories 720, and other components (not shown) typically present in a computer or similar apparatus. Each of the one or more memories 720 can store content accessible by the one or more processors 710, including instructions 721 executable by the one or more processors 710 and data 722 that the one or more processors 710 can retrieve, manipulate, or store.
The instructions 721 can be any instruction set executed directly, such as machine code, or indirectly, such as a script, by the one or more processors 710. The terms "instructions", "application", "process", "step", and "program" are used interchangeably in this disclosure. The instructions 721 can be stored in object-code format for direct processing by the one or more processors 710, or in any other computer language, including scripts or collections of independent source-code modules interpreted on demand or compiled in advance. The functions, methods, and routines of the instructions 721 are explained in more detail elsewhere in this disclosure.
The one or more memories 720 can be any temporary or non-temporary computer-readable storage medium capable of storing content accessible by the one or more processors 710, such as a hard drive, memory card, ROM, RAM, DVD, CD, USB memory, writable memory, read-only memory, and so on. One or more of the memories 720 may comprise a distributed storage system in which the instructions 721 and/or data 722 are stored on multiple different storage devices that may be physically located at the same or different geographic locations. One or more of the memories 720 may be connected to the one or more processors 710 via a network, and/or may be directly connected to or incorporated into any of the one or more processors 710.
The one or more processors 710 can retrieve, store, or modify the data 722 according to the instructions 721. Although the subject matter described in this disclosure is not limited to any particular data structure, the data 722 may also be stored in computer registers (not shown), or in a relational database as a table or XML document with many different fields and records. The data 722 can be formatted in any computing-device-readable format, such as, without limitation, binary values, ASCII, or Unicode. The data 722 may further comprise any information sufficient to identify related information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories such as at other network locations, or information used by a function to compute related data.
The one or more processors 710 can be any conventional processor, such as a commercially available central processing unit (CPU) or graphics processing unit (GPU). Alternatively, the one or more processors 710 can be dedicated components, such as application-specific integrated circuits (ASICs) or other hardware-based processors. Although not required, the one or more processors 710 may include specialized hardware components to perform particular computing processes faster or more efficiently.
Although Fig. 7 schematically shows the one or more processors 710 and the one or more memories 720 within the same box, the node device 700 may in fact comprise multiple processors or memories that may reside within the same physical housing or within different physical housings. Accordingly, references to a processor, computer, computing device, or memory should be understood to include references to a collection of processors, computers, computing devices, or memories that may or may not operate in parallel.
Fig. 8 is an exemplary block diagram of a general-purpose hardware system 800 applicable to one or more exemplary embodiments of this disclosure. The system 800, an example of a hardware device to which aspects of this disclosure can be applied, will now be described with reference to Fig. 8. The node device 700 of the embodiments above may include all or part of the system 800. The system 800 can be any machine configured to perform processing and/or computation, including but not limited to a workstation, server, desktop computer, laptop computer, tablet computer, personal data assistant, smartphone, in-vehicle computer, or any combination thereof.
The system 800 may include elements connected to or communicating with a bus 802, possibly via one or more interfaces. For example, the system 800 may include the bus 802, one or more processors 804, one or more input devices 806, and one or more output devices 808. The one or more processors 804 can be processors of any type, including but not limited to one or more general-purpose processors and/or one or more dedicated processors (e.g., special processing chips). The operations and/or steps of the methods described above can each be implemented by the one or more processors 804 executing instructions.
The input devices 806 can be any type of device through which information can be entered into a computing device, including but not limited to a mouse, keyboard, touchscreen, microphone, and/or remote control. The output devices 808 can be any type of device capable of presenting information, including but not limited to a display, speaker, video/audio output terminal, vibrator, and/or printer.
The system 800 may also include, or be connected to, a non-transitory storage device 810. The non-transitory storage device 810 can be any storage device that is non-transitory and enables data storage, including but not limited to a disk drive, optical storage device, solid-state memory, floppy disk, hard disk, magnetic tape or any other magnetic medium, optical disc or any other optical medium, ROM (read-only memory), RAM (random-access memory), cache memory, and/or any other memory chip/chipset, and/or any other medium from which a computer can read data, instructions, and/or code. The non-transitory storage device 810 can be detachable from an interface and may hold the data/instructions/code for implementing the methods, operations, steps, and processes described above.
The system 800 may also include a communication device 812, which can be any type of device or system capable of communicating with external devices and/or with a network, including but not limited to a modem, network card, infrared communication device, wireless communication device, and/or chipset, such as a Bluetooth device, 802.11 device, WiFi device, WiMax device, cellular communication device, satellite communication device, and/or the like.
The bus 802 may include, without limitation, an Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus. In particular, for an in-vehicle device, the bus 802 may also include a Controller Area Network (CAN) bus or another architecture designed for application in vehicles.
The system 800 may also include a working memory 814, which can be any type of working memory capable of storing instructions and/or data useful to the work of the processors 804, including but not limited to random-access memory and/or read-only storage devices.
Software elements may reside in the working memory 814, including but not limited to an operating system 816, one or more application programs 818, drivers, and/or other data and code. Instructions for performing the methods, operations, and steps described above may be contained in the one or more application programs 818. Executable code or source code of the instructions of the software elements can be stored in a non-transitory computer-readable storage medium, such as the storage device 810 described above, and can be read into the working memory 814 through compilation and/or installation. Executable code or source code of the instructions of the software elements can also be downloaded from a remote location.
It should also be understood that variations can be made according to specific requirements. For example, custom hardware may be used, and/or particular elements may be implemented in hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. In addition, connections to other computing devices, such as network input/output devices, may be employed. For example, some or all of the methods or devices according to embodiments of this disclosure can be implemented by programming hardware (e.g., programmable logic circuits including field-programmable gate arrays (FPGAs) and/or programmable logic arrays (PLAs)) in an assembly language or hardware programming language (such as VERILOG, VHDL, or C++), using the logic and algorithms according to this disclosure.
It should also be understood that the components of the system 800 can be distributed across a network. For example, some processing may be performed using one processor while other processing is performed by another processor remote from that one processor. Other components of the system 800 may be similarly distributed. The system 800 can thus be interpreted as a distributed computing system that performs processing at multiple locations.
Although aspects of this disclosure have so far been described with reference to the drawings, the above methods, systems, and devices are merely illustrative examples, and the scope of this disclosure is not limited by these aspects but is defined only by the appended claims and their equivalents. Various elements may be omitted or replaced by equivalent elements. In addition, the steps may be performed in an order different from that described in this disclosure. Furthermore, the various elements may be combined in various ways. It is also important that, as technology develops, many of the described elements may be replaced by equivalent elements that appear after this disclosure.
In the 1990s, an improvement to a technology could be clearly distinguished as a hardware improvement (e.g., an improvement to circuit structures such as diodes, transistors, or switches) or a software improvement (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Hence it cannot be said that an improvement to a method flow cannot be implemented with a hardware entity module. For example, a programmable logic device (PLD), such as a field-programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate a dedicated integrated-circuit chip. Moreover, instead of hand-crafting integrated-circuit chips, this programming is nowadays mostly carried out with "logic compiler" software, which is similar to the software compilers used when developing and writing programs; the original code to be compiled must be written in a particular programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); the most commonly used at present are VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog. Those skilled in the art will also appreciate that a hardware circuit implementing a logical method flow can easily be obtained simply by slightly logic-programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller can be implemented in any suitable manner. For example, the controller can take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by that (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, without limitation, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicone Labs C8051F320; a memory controller can also be implemented as part of a memory's control logic. Those skilled in the art also know that, besides implementing a controller purely as computer-readable program code, it is entirely possible to logic-program the method steps so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means included within it for implementing various functions can also be regarded as structures within the hardware component. Indeed, the means for implementing various functions can even be regarded both as software modules implementing a method and as structures within a hardware component.
The systems, apparatuses, modules, or units set forth in the above embodiments can be implemented by a computer chip or entity, or by a product having certain functions. A typical implementation device is a server system. Of course, this application does not rule out that, with the future development of computer technology, the computer implementing the functions of the above embodiments may be, for example, a personal computer, laptop computer, in-vehicle human-machine interaction device, cellular phone, camera phone, smartphone, personal digital assistant, media player, navigation device, e-mail device, game console, tablet computer, wearable device, or a combination of any of these devices.
Although one or more embodiments of this disclosure provide method operation steps as described in the embodiments or flowcharts, more or fewer operation steps may be included based on conventional or non-inventive means. The order of steps listed in an embodiment is only one of many possible execution orders and does not represent the only one. When an actual apparatus or terminal product executes, the steps can be executed in the order of the embodiments or drawings, or in parallel (e.g., in an environment of parallel processors or multithreaded processing, or even a distributed data-processing environment). The terms "comprise", "include", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, product, or device comprising a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, product, or device. Absent further limitation, the presence of additional identical or equivalent elements in the process, method, product, or device comprising the stated elements is not excluded. For example, words such as "first" and "second", if used, denote names and do not indicate any particular order.
For convenience of description, the above apparatuses are described with the functions divided into various modules. Of course, when implementing one or more embodiments of this disclosure, the functions of the modules can be implemented in one or more pieces of software and/or hardware, and modules implementing the same function can be implemented by a combination of multiple sub-modules or sub-units, among other options. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed can be indirect couplings or communication connections through interfaces, apparatuses, or units, and can be electrical, mechanical, or in other forms.
This disclosure is described with reference to flowcharts and/or block diagrams of methods, apparatuses (systems), and computer program products according to embodiments of this disclosure. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data-processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data-processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data-processing device to work in a particular manner, so that the instructions stored in that computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data-processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
Memory may include non-permanent storage in computer-readable media, random-access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology. The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact-disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage, graphene storage, or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media exclude transitory media, such as modulated data signals and carrier waves.
Those skilled in the art should understand that one or more embodiments of this disclosure can be provided as a method, system, or computer program product. Accordingly, one or more embodiments of this disclosure can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of this disclosure can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
One or more embodiments of this disclosure can be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and so on that perform particular tasks or implement particular abstract data types. One or more embodiments of this disclosure can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules can be located in both local and remote computer storage media, including storage devices.
The embodiments of this disclosure are described in a progressive manner; for identical or similar parts, the embodiments can refer to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant points, refer to the description of the method embodiments. In the description of this disclosure, reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples", and the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of this disclosure. In this disclosure, schematic statements of these terms do not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials, or characteristics described can be combined in a suitable manner in any one or more embodiments or examples. Furthermore, provided they do not contradict one another, those skilled in the art can combine the different embodiments or examples described in this disclosure and the features of those embodiments or examples.
The above are merely embodiments of one or more embodiments of this disclosure and are not intended to limit them. For those skilled in the art, one or more embodiments of this disclosure may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this disclosure shall fall within the scope of the claims.
Claims (34)
- A method of operating a consortium chain network, comprising: building, by a primary node of the consortium chain network that is to initiate a consensus proposal, according to communication conditions of the chain nodes in the consortium chain network, a multicast distribution tree whose tree nodes are the chain nodes of the consortium chain network, the multicast distribution tree comprising one root node, one or more layers of trunk nodes, and one layer of leaf nodes, wherein the root node is the primary node; broadcasting, by the primary node, the multicast distribution tree in the consortium chain network; and, according to the multicast distribution tree, multicasting, by the primary node, a data packet comprising a transaction set of a consensus cycle to first-layer trunk nodes of the one or more layers of trunk nodes, and multicasting, by each trunk node, the received data packet to its child nodes.
- The method of claim 1, further comprising, by the primary node: determining an order n and a layer count L of the multicast distribution tree according to a total number N of chain nodes, a minimum uplink-bandwidth requirement bw of the chain nodes, communication delays between the chain nodes, and a consensus timeout T, wherein the order n indicates the maximum number of child nodes of each tree node and the layer count L indicates the total number of layers of trunk nodes and leaf nodes; and configuring the chain nodes of the consortium chain network as corresponding tree nodes of the multicast distribution tree according to the communication delays between the chain nodes and the uplink bandwidth of each chain node.
- The method of claim 2, further comprising determining, by the primary node, the order n and the layer count L of the multicast distribution tree according to: 1+n+n^2+n^3+......+n^(L-1)<=N<=1+n+n^2+n^3+......+n^(L); n*b<=bw; and SRTT1+SRTT2+......+SRTT(L-1)+SRTT(L)<T, wherein b is the size of the data packet comprising the transaction set of the consensus cycle, and SRTTk is the maximum of the smoothed round-trip times of the tree nodes of the k-th of the L layers.
- The method of claim 2, wherein the order n is greater than or equal to 3.
- The method of claim 1, wherein the number of first-layer trunk nodes is greater than or equal to 3.
- The method of claim 2, further comprising, by the primary node: configuring chain nodes whose uplink bandwidth is below a threshold as leaf nodes; and configuring chain nodes whose uplink bandwidth is above the threshold as trunk nodes, and placing nodes with smaller communication delay to their parent node in layers closer to the parent node.
- The method of claim 2, wherein the primary node obtains the communication delays between the chain nodes and the uplink bandwidth of each chain node by receiving broadcasts from the other chain nodes.
- The method of claim 1, further comprising: periodically reporting, by each chain node of the consortium chain network, its local routing table to the primary node, the local routing table of a chain node indicating the uplink bandwidth of each neighbor node of that chain node and the network delay between that chain node and each neighbor node; and building, by the primary node, the multicast distribution tree according to the local routing tables of the chain nodes of the consortium chain network.
- The method of claim 8, further comprising, in response to a first chain node of the consortium chain network going offline: in response to the number of first-layer trunk nodes of the one or more layers of trunk nodes being less than or equal to 3 and the first chain node being a first-layer trunk node, updating and broadcasting, by the primary node, the multicast distribution tree; and in response to the number of first-layer trunk nodes of the one or more layers of trunk nodes being greater than 3 and the first chain node being a first-layer trunk node, or the first chain node being a trunk node of a layer other than the first of the one or more layers of trunk nodes, updating and broadcasting, by the primary node, the multicast distribution tree after receiving updated local routing tables from all child nodes of the first chain node.
- The method of claim 8, further comprising: in response to a new chain node joining the consortium chain network, configuring, by the primary node, the new chain node as a leaf node of the multicast distribution tree, updating the multicast distribution tree, and notifying at least the new chain node and its parent node of the updated multicast distribution tree.
- The method of claim 8, further comprising: periodically computing, by the primary node, according to the local routing tables periodically reported by the chain nodes of the consortium chain network, a first total delay of the currently used multicast distribution tree and a second total delay of a candidate updated multicast distribution tree; and, in response to the reduction of the second total delay relative to the first total delay satisfying a condition, updating and broadcasting, by the primary node, the multicast distribution tree.
- A method of operating a consortium chain network, comprising: electing, by the chain nodes of the consortium chain network, a primary node capable of initiating a consensus proposal; building, and periodically updating and broadcasting, by each chain node of the consortium chain network, its local routing table, the local routing table indicating the uplink bandwidth of each neighbor node of the respective chain node and the network delay between the respective chain node and each neighbor node; building, by the primary node, according to the local routing tables of the chain nodes of the consortium chain network, and broadcasting in the consortium chain network, a multicast distribution tree whose tree nodes are the chain nodes of the consortium chain network, the multicast distribution tree comprising one root node, one or more layers of trunk nodes, and one layer of leaf nodes, wherein the root node is the primary node; and multicasting, by each chain node of the consortium chain network according to the multicast distribution tree, a data packet comprising a transaction set of a consensus cycle to its child nodes.
- The method of claim 12, further comprising: building, by the primary node, a global routing table from the local routing tables of the chain nodes of the consortium chain network, and building the multicast distribution tree according to the global routing table, wherein the global routing table comprises the local routing tables of the chain nodes of the consortium chain network.
- The method of claim 12, further comprising, by the primary node: determining an order n and a layer count L of the multicast distribution tree according to a total number N of chain nodes, a minimum uplink-bandwidth requirement bw of the chain nodes, communication delays between the chain nodes, and a consensus timeout T, wherein the order n indicates the maximum number of child nodes of each tree node and the layer count L indicates the total number of layers of trunk nodes and leaf nodes; and configuring the chain nodes of the consortium chain network as corresponding tree nodes of the multicast distribution tree according to the communication delays between the chain nodes and the uplink bandwidth of each chain node.
- The method of claim 14, further comprising determining, by the primary node, the order n and the layer count L of the multicast distribution tree according to: 1+n+n^2+n^3+......+n^(L-1)<=N<=1+n+n^2+n^3+......+n^(L); n*b<=bw; and SRTT1+SRTT2+......+SRTT(L-1)+SRTT(L)<T, wherein b is the size of the data packet comprising the transaction set of the consensus cycle, and SRTTk is the maximum of the smoothed round-trip times of the tree nodes of the k-th of the L layers.
- The method of claim 14, wherein the order n is greater than or equal to 3.
- The method of claim 12, wherein the number of first-layer trunk nodes of the one or more layers of trunk nodes is greater than or equal to 3.
- The method of claim 14, further comprising, by the primary node: configuring chain nodes whose uplink bandwidth is below a threshold as leaf nodes; and configuring chain nodes whose uplink bandwidth is above the threshold as trunk nodes, and placing nodes with smaller communication delay to their parent node in layers closer to the parent node.
- The method of claim 12, wherein the data packet comprising the transaction set of the consensus cycle is a Pre-Prepare message data packet.
- The method of claim 19, wherein the chain nodes of the consortium chain network are in a fully connected mode, the method further comprising: sending Prepare messages and Commit messages using the fully connected mode between the chain nodes to complete the current consensus process.
- The method of claim 12, further comprising, in response to a first chain node of the consortium chain network going offline: in response to the number of first-layer trunk nodes of the one or more layers of trunk nodes being less than or equal to 3 and the first chain node being a first-layer trunk node, updating and broadcasting, by the primary node, the multicast distribution tree; and in response to the number of first-layer trunk nodes of the one or more layers of trunk nodes being greater than 3 and the first chain node being a first-layer trunk node, or the first chain node being a trunk node of a layer other than the first of the one or more layers of trunk nodes, updating and broadcasting, by the primary node, the multicast distribution tree after receiving updated local routing tables from all child nodes of the first chain node.
- The method of claim 12, further comprising: in response to a new chain node joining the consortium chain network, configuring, by the primary node, the new chain node as a leaf node of the multicast distribution tree, updating the multicast distribution tree, and notifying at least the new chain node and its parent node of the updated multicast distribution tree.
- The method of claim 12, further comprising: periodically computing, by the primary node, according to the local routing tables periodically reported by the chain nodes of the consortium chain network, a first total delay of the currently used multicast distribution tree and a second total delay of a candidate updated multicast distribution tree; and, in response to the reduction of the second total delay relative to the first total delay satisfying a condition, updating and broadcasting, by the primary node, the multicast distribution tree.
- The method of claim 12, further comprising: in response to the primary node going offline or a primary-node change condition being met, re-electing, by the chain nodes of the consortium chain network, a primary node; building and broadcasting, by the re-elected primary node, a multicast distribution tree according to the local routing tables of the chain nodes of the consortium chain network; and multicasting, by each chain node of the consortium chain network according to the multicast distribution tree, the data packet comprising the transaction set of the consensus cycle to its child nodes.
- A method of operating a blockchain network, the nodes of the blockchain network being in a fully connected mode, the method comprising: building, by a first chain node of the blockchain network according to communication conditions of the chain nodes, a multicast distribution tree whose tree nodes are the chain nodes of the blockchain network, the multicast distribution tree comprising one root node, one or more layers of trunk nodes, and one layer of leaf nodes, wherein the root node is the first chain node, and the root node and every trunk node each have multiple child nodes; in response to a first data packet to be sent to the other chain nodes being larger than a threshold, multicasting, by the first chain node and every trunk node according to the multicast distribution tree, the first data packet to their child nodes; and, in response to a second data packet to be sent to the other chain nodes being smaller than the threshold, broadcasting, by the first chain node, the second data packet in the blockchain network.
- The method of claim 25, wherein the first chain node obtains the communication conditions of the chain nodes by receiving broadcasts from the other chain nodes.
- The method of claim 25, wherein the communication conditions of the chain nodes comprise the communication delays between the chain nodes and the uplink bandwidth of each chain node.
- The method of claim 27, further comprising, by the first chain node: configuring chain nodes whose uplink bandwidth is below a threshold as leaf nodes; and configuring chain nodes whose uplink bandwidth is above the threshold as trunk nodes, and placing nodes with smaller communication delay to their parent node in layers closer to the parent node.
- The method of claim 25, wherein the root node has three or more child nodes.
- The method of claim 25, wherein the number of first-layer trunk nodes of the one or more layers of trunk nodes is greater than or equal to 3.
- The method of claim 25, wherein the blockchain network is a consortium chain network.
- The method of claim 25, wherein the first data packet comprises a Pre-Prepare message data packet, and the second data packet comprises a Prepare message data packet or a Commit message data packet.
- A node device for a blockchain network, comprising circuitry configured to perform: building, and periodically updating and broadcasting, a local routing table of the node device, the local routing table indicating the uplink bandwidth of each neighbor node of the node device and the network delay between the node device and each neighbor node; in response to the node device being elected as a primary node: building, according to the local routing tables of the chain nodes of the consortium chain network, and broadcasting in the consortium chain network, a multicast distribution tree whose tree nodes are the chain nodes, the multicast distribution tree comprising one root node, one or more layers of trunk nodes, and one layer of leaf nodes, wherein the root node is the primary node; and, according to the multicast distribution tree, multicasting a data packet comprising a transaction set of a consensus cycle to first-layer trunk nodes of the one or more layers of trunk nodes; and, in response to the node device not being elected as the primary node: receiving the multicast distribution tree broadcast by the primary node; and, according to the multicast distribution tree, multicasting the received data packet comprising the transaction set of the consensus cycle to the child nodes of the node device.
- A node device for a blockchain network, comprising one or more processors and one or more memories, the one or more memories being configured to store a series of computer-executable instructions, wherein the series of computer-executable instructions, when executed by the one or more processors, cause the one or more processors to perform: building, and periodically updating and broadcasting, a local routing table of the node device, the local routing table indicating the uplink bandwidth of each neighbor node of the node device and the network delay between the node device and each neighbor node; in response to the node device being elected as a primary node: building, according to the local routing tables of the chain nodes of the consortium chain network, and broadcasting in the consortium chain network, a multicast distribution tree whose tree nodes are the chain nodes, the multicast distribution tree comprising one root node, one or more layers of trunk nodes, and one layer of leaf nodes, wherein the root node is the primary node; and, according to the multicast distribution tree, multicasting a data packet comprising a transaction set of a consensus cycle to first-layer trunk nodes of the one or more layers of trunk nodes; and, in response to the node device not being elected as the primary node: receiving the multicast distribution tree broadcast by the primary node; and, according to the multicast distribution tree, multicasting the received data packet comprising the transaction set of the consensus cycle to the child nodes of the node device.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210616509.3A CN115022340B (zh) | 2022-06-01 | 2022-06-01 | 一种运行联盟链网络的方法和用于区块链网络的节点设备 |
CN202210616509.3 | 2022-06-01 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023231344A1 true WO2023231344A1 (zh) | 2023-12-07 |
Family
ID=83070891
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/135443 WO2023231344A1 (zh) | 2022-06-01 | 2022-11-30 | 一种运行联盟链网络的方法 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115022340B (zh) |
WO (1) | WO2023231344A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115022340B (zh) * | 2022-06-01 | 2024-05-28 | 蚂蚁区块链科技(上海)有限公司 | 一种运行联盟链网络的方法和用于区块链网络的节点设备 |
GB2627297A (en) * | 2023-02-20 | 2024-08-21 | Nchain Licensing Ag | Propagating blockchain messages |
GB2627298A (en) * | 2023-02-20 | 2024-08-21 | Nchain Licensing Ag | Propagating blockchain messages |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140226491A1 (en) * | 2013-02-14 | 2014-08-14 | Cisco Technology, Inc. | Mechanism and Framework for Finding Optimal Multicast Tree Roots Without the Knowledge of Traffic Sources and Receivers for Fabricpath and TRILL |
CN108010298A (zh) * | 2017-12-19 | 2018-05-08 | 青岛海信移动通信技术股份有限公司 | 设备控制方法及装置 |
CN112565389A (zh) * | 2020-11-30 | 2021-03-26 | 网易(杭州)网络有限公司 | 基于区块链的消息广播方法、装置、电子设备及存储介质 |
CN113365229A (zh) * | 2021-05-28 | 2021-09-07 | 电子科技大学 | 一种多联盟链共识算法的网络时延优化方法 |
CN115022340A (zh) * | 2022-06-01 | 2022-09-06 | 蚂蚁区块链科技(上海)有限公司 | 一种运行联盟链网络的方法 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10616324B1 (en) * | 2017-07-20 | 2020-04-07 | Architecture Technology Corporation | Decentralized ledger system and method for enterprises |
CN112104558B (zh) * | 2020-10-30 | 2021-09-07 | 上海交通大学 | 区块链分发网络的实现方法、系统、终端及介质 |
CN112769580B (zh) * | 2020-12-31 | 2023-08-01 | 广东海洋大学 | 一种区块链分层激励共识算法 |
Also Published As
Publication number | Publication date |
---|---|
CN115022340B (zh) | 2024-05-28 |
CN115022340A (zh) | 2022-09-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22944645; Country of ref document: EP; Kind code of ref document: A1 |