CN111666343A - Data uplink method, device and readable storage medium based on consensus mechanism


Info

Publication number
CN111666343A
Authority
CN
China
Prior art keywords
node
performance information
calculation
consensus
block
Prior art date
Legal status
Granted
Application number
CN202010537030.1A
Other languages
Chinese (zh)
Other versions
CN111666343B (en)
Inventor
周志刚
Current Assignee
Wuhan Douyu Network Technology Co Ltd
Original Assignee
Wuhan Douyu Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Douyu Network Technology Co Ltd
Priority to CN202010537030.1A
Publication of CN111666343A
Application granted
Publication of CN111666343B
Current legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 - Querying
    • G06F 16/245 - Query processing
    • G06F 16/2458 - Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/02 - Details
    • H04L 12/16 - Arrangements for providing special services to substations
    • H04L 12/18 - Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Computing Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiment of the invention provides a data uplink method and device based on a consensus mechanism, and a readable storage medium. When a new block is generated, a central node broadcasts broadcast information corresponding to the new block to all consensus nodes. After receiving the broadcast information, each consensus node feeds back read-write performance information for the block data corresponding to a specified historical block number, together with calculation performance information obtained by calculating that block data using the supplied random data and calculation round number. The central node determines the network performance information of each consensus node based on the read-write performance information and/or the calculation performance information it feeds back, and, based on the consensus mechanism, comprehensively considers the read-write performance information, the calculation performance information and the network performance information of each consensus node to determine more reasonably the candidate nodes for packing the new block onto the chain. Finally, the number of history blocks generated by each candidate node is weighed so that the target node is determined from the candidate nodes more fairly.

Description

Data uplink method, device and readable storage medium based on consensus mechanism
Technical Field
The present invention relates to the field of electronic technologies, and in particular, to a data uplink method and apparatus based on a consensus mechanism, and a readable storage medium.
Background
Block chains are divided into three types according to how open they are to participants: public chains, alliance chains and private chains. Unlike public chains, which mainly demand computing capacity, private chains and alliance chains place more demands on the effectiveness and fairness of packing data onto the chain. If a node deliberately forges the data it reports, other clients cannot verify it, so the distribution of the packing right becomes unfair; and distributing the packing right based only on a node's computing capacity cannot effectively guarantee the efficiency of packing and uplinking.
Disclosure of Invention
The embodiment of the invention provides a data uplink method and device based on a consensus mechanism, and a readable storage medium, which are used to distribute the block data packing right fairly and effectively.
In a first aspect, the present invention provides a data uplink method based on a consensus mechanism, which is applied to a central node in a block chain network, and includes:
under the condition of generating a new block, broadcasting broadcast information corresponding to the new block to all consensus nodes in the block chain network, wherein the broadcast information comprises a specified historical block number, random data and a calculation round number;
receiving read-write performance information, fed back by each consensus node based on the broadcast information, of block data corresponding to the historical block number and calculation performance information obtained by calculating the block data corresponding to the historical block number by using the random data and the calculation turns;
determining network performance information of the consensus node based on the read-write performance information and/or the calculation performance information fed back by each consensus node;
determining candidate nodes meeting preset conditions from all the consensus nodes based on the read-write performance information, the calculation performance information and the network performance information of each consensus node;
and determining a target node from the candidate nodes based on the number of history blocks generated by each candidate node, wherein the target node is used for packing and uplinking the data of the new block.
Optionally, the read-write performance information includes the reading duration taken by the corresponding consensus node to read the block data corresponding to the historical block number, and the calculation performance information includes the calculation result and the calculation duration taken by the corresponding consensus node to perform the specified number of calculation rounds on the block data corresponding to the historical block number using a preset hash algorithm, where the data calculated in each round is the sum of the previous round's calculation result and the random data, and the initial data is the block data corresponding to the historical block number.
Optionally, the determining the network performance information of the consensus node based on the read-write performance information and/or the calculation performance information fed back by each consensus node includes:
and determining the receiving time for receiving the read-write performance information and/or the calculation performance information sent by each consensus node, wherein the network performance information of the consensus node comprises the receiving time.
Optionally, the determining, based on the read-write performance information, the calculation performance information, and the network performance information of each consensus node, a candidate node that meets a preset condition from all consensus nodes includes:
determining a first candidate consensus node with a correct calculation result from all consensus nodes based on the calculation result of each consensus node;
performing weighted calculation on the reading duration, the calculation duration and the receiving duration corresponding to each first candidate node according to a preset weighted mode to obtain a comprehensive performance value of the first candidate node, wherein a calculation formula of the weighted calculation is Sum = CPU × K1 + IO × K2 + Network × K3, K1 + K2 + K3 = 1, Sum represents the comprehensive performance value, CPU represents the calculation duration, IO represents the reading duration, Network represents the receiving duration, and K1> K3> K2;
and sequencing all the first candidate common identification nodes from small to large according to the comprehensive performance value, and taking the first N first candidate nodes as the candidate nodes meeting the preset condition, wherein N is an integer larger than 0.
Optionally, the determining a target node from the candidate nodes based on the number of history blocks generated by each candidate node includes:
determining a target value based on the number of history blocks generated by each of the candidate nodes, the target value being greater than a maximum of the number of history blocks generated,
determining the number of node labels generating each candidate node as the difference value of the target value and the number of history blocks generated by the candidate node;
generating node labels corresponding to the candidate nodes according to the number of the node labels;
randomly determining a target node label from all the generated node labels, and taking a candidate node corresponding to the target node label as the target node.
In a second aspect, an embodiment of the present invention provides a data uplink method based on a consensus mechanism, applied to a consensus node in a block chain network, including:
receiving broadcast information corresponding to a new block, which is broadcast by a central node in the block chain network, wherein the broadcast information comprises a designated historical block number, random data and a calculation round number;
based on the broadcast information, feeding back read-write performance information aiming at the block data corresponding to the historical block number and calculation performance information obtained by calculating the block data corresponding to the historical block number by adopting the random data and the calculation turns;
and if the confirmation information sent by the central node is received, performing packaging uplink on the data of the new block.
In a third aspect, an embodiment of the present invention provides a data uplink apparatus based on a consensus mechanism, which is applied to a central node in a block chain network, and includes:
the broadcast unit is used for broadcasting broadcast information corresponding to a new block to all the consensus nodes in the block chain network under the condition that the new block is generated, wherein the broadcast information comprises a specified historical block number, random data and a calculation round number;
a receiving unit, configured to receive the read-write performance information for the block data corresponding to the historical block number and the calculation performance information obtained by calculating the block data corresponding to the historical block number using the random data and the calculation round number, which are fed back by each consensus node based on the broadcast information;
the first determining unit is used for determining the network performance information of each consensus node based on the read-write performance information and/or the calculation performance information fed back by each consensus node;
a second determining unit, configured to determine, based on the read-write performance information, the calculation performance information, and the network performance information of each consensus node, a candidate node that meets a preset condition from all the consensus nodes;
and a third determining unit, configured to determine a target node from the candidate nodes based on the number of history blocks generated by each candidate node, where the target node is configured to pack and uplink the data of the new block.
In a fourth aspect, an embodiment of the present invention provides a data uplink apparatus based on a consensus mechanism, which is applied to a consensus node in a block chain network, and includes:
a receiving unit, configured to receive broadcast information corresponding to a new block, where the broadcast information includes a specified historical block number, random data, and a calculation round number, and is broadcast by a central node in the block chain network;
a feedback unit, configured to feedback, based on the broadcast information, read-write performance information for the block data corresponding to the historical block number and calculation performance information obtained by calculating the block data corresponding to the historical block number using the random data and the calculation turns;
and the packing unit is used for packing and uplinking the data of the new block if the confirmation information sent by the central node is received.
In a fifth aspect, an embodiment of the present invention provides a data uplink apparatus based on a consensus mechanism, the apparatus including a processor configured to implement, when executing a computer program stored in a memory, the steps of the data uplink method based on the consensus mechanism as described in the first aspect and the second aspect.
In a sixth aspect, an embodiment of the present invention provides a readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method for data uplink based on a consensus mechanism as described in the embodiments of the first aspect and the second aspect.
One or more technical solutions in the embodiments of the present application have at least one or more of the following technical effects:
In the technical solution of the embodiments of the invention, for block chain networks that are private chains or alliance chains, the central node broadcasts, when a new block is generated, broadcast information corresponding to the new block to all consensus nodes in the block chain network, where the broadcast information comprises a specified historical block number, random data and a calculation round number. After each consensus node in the whole network receives the broadcast information, it feeds back read-write performance information for the block data corresponding to the historical block number and calculation performance information obtained by calculating the block data corresponding to the historical block number using the random data and the calculation round number. The central node determines the network performance information of each consensus node based on the read-write performance information and/or the calculation performance information it feeds back, and, based on the consensus mechanism, comprehensively considers the read-write performance information, the calculation performance information and the network performance information of each consensus node to determine the candidate nodes more reasonably; finally, the number of history blocks generated by each candidate node is weighed so that the target node is determined from the candidate nodes. Therefore, determining the target node that generates the new block in this way can greatly improve the efficiency of fast data chaining, while the distribution of the packing right also takes fairness into account, guaranteeing that the distribution of packing rights is open, transparent, fair and reasonable.
Drawings
Fig. 1 is a flowchart of a data uplink method based on a consensus mechanism according to a first embodiment of the present invention;
Fig. 2 is a flowchart of a data uplink method based on a consensus mechanism according to a second embodiment of the present invention;
Fig. 3 is a schematic view of an apparatus according to a third embodiment of the present invention;
Fig. 4 is a schematic view of an apparatus according to a fourth embodiment of the present invention;
Fig. 5 is a schematic view of an apparatus according to a fifth embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a data uplink method and device based on a consensus mechanism, and a readable storage medium, which are used to distribute the block data packing right fairly and effectively. The data uplink method based on the consensus mechanism applied to the central node comprises the following steps: under the condition of generating a new block, broadcasting broadcast information corresponding to the new block to all consensus nodes in the block chain network, wherein the broadcast information comprises a specified historical block number, random data and a calculation round number; receiving read-write performance information, fed back by each consensus node based on the broadcast information, for the block data corresponding to the historical block number, and calculation performance information obtained by calculating the block data corresponding to the historical block number using the random data and the calculation round number; determining network performance information of each consensus node based on the read-write performance information and/or the calculation performance information it feeds back; determining candidate nodes meeting preset conditions from all the consensus nodes based on the read-write performance information, the calculation performance information and the network performance information of each consensus node; and determining a target node from the candidate nodes based on the number of history blocks generated by each candidate node, wherein the target node is used for packing and uplinking the data of the new block.
The technical solutions of the present invention are described in detail below with reference to the drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples of the present invention serve to explain the technical solutions of the present application rather than to limit them, and the technical features in the embodiments and examples of the present application may be combined with each other as long as they do not conflict.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Examples
Referring to fig. 1, a first embodiment of the present invention provides a data uplink method based on a consensus mechanism, applied to a central node in a block chain network; the data uplink method based on the consensus mechanism includes the following steps:
s101: under the condition of generating a new block, broadcasting broadcast information corresponding to the new block to all consensus nodes in the block chain network, wherein the broadcast information comprises a specified historical block number, random data and a calculation round number;
s102: receiving read-write performance information, fed back by each consensus node based on the broadcast information, of block data corresponding to the historical block number and calculation performance information obtained by calculating the block data corresponding to the historical block number by using the random data and the calculation turns;
s103: determining network performance information of the consensus node based on the read-write performance information and/or the calculation performance information fed back by each consensus node;
s104: determining candidate nodes meeting preset conditions from all the consensus nodes based on the read-write performance information, the calculation performance information and the network performance information of each consensus node;
s105: and determining a target node from the candidate nodes based on the number of history blocks generated by each candidate node, wherein the target node is used for packing and uplinking the data of the new block.
Specifically, the data uplink method based on the consensus mechanism in this embodiment is applied to the central node in a block chain network. Of course, the present invention can also be applied to other types of block chain networks, which is not limited in this embodiment.
In this embodiment, the block chain network contains a central node and a plurality of consensus nodes connected to it, where the consensus nodes are the clients in the block chain network. Because each consensus node in the block chain is a terminal device, its current computing capability and current network condition change in real time, and the IO (input/output) capability of its disk also changes. When a new block is to be generated, the consensus nodes compete for the packing right, and that competition is determined by several factors: the node's current computing capacity, which reflects its ability to pack transactions and compute the next block; its current network condition; and its current disk read-write capability. For example, if a consensus node is currently performing some other data computation and has consumed a large amount of computing resources, then the computing resources available for computing the next block are limited. After a block chain node computes a block, the block needs to be broadcast to all nodes in the entire chain, so network conditions are critical for block chain data. Disk IO matters because computing a block requires reading a large number of disk files, so if the disk is busy cleaning data or performing many file read-write operations, computing the block will be slow. Therefore, in this embodiment, each of these indicators is collected at the consensus nodes and reported to the central node, so that the central node can select a more suitable block-computing node based on the data reported by each node, i.e., the target node that packs the new block onto the chain in this embodiment.
Based on the above analysis, in the method in this embodiment, first, when the central node determines that a new block needs to be generated, broadcast information is broadcast to consensus nodes in the entire network through step S101, and each consensus node needs to follow a consensus mechanism and perform related information feedback by using the broadcast information. The broadcast information comprises a designated historical block number, random data and a calculation round number, and all the parameters are information required by the consensus node to compete for the packing right of the new block.
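By way of illustration only, the broadcast step can be sketched as follows in Python; the field names, the 32-byte random value and the JSON encoding are assumptions made for this sketch and are not required by the embodiment:

import json
import os

def build_broadcast_message(history_block_number, rounds):
    # The central node designates a historical block number, fresh random data and
    # a calculation round number; every consensus node must use all three fields.
    return {
        "history_block_number": history_block_number,
        "random_data": os.urandom(32).hex(),
        "rounds": rounds,
    }

payload = json.dumps(build_broadcast_message(history_block_number=1024, rounds=10000))
# `payload` would then be broadcast to every consensus node (the transport is not specified here).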
Furthermore, after receiving the broadcast information, the consensus node feeds back two types of performance information:
The first type is read-write performance (IO performance) information, which comprises the reading duration taken by the corresponding consensus node to read the block data corresponding to the historical block number.
Specifically, in this embodiment, after receiving the broadcast information, each consensus node parses it and reads the block data corresponding to the historical block number specified in the broadcast information from its local disk. Once the read is complete, the node records the reading duration consumed in reading the whole block. This reading duration indicates the IO performance of the consensus node: a smaller reading duration means the node currently reads block chain data faster, i.e., its IO read performance is higher. The read-write performance in this embodiment is the capability of reading historical block data, not the capability of reading other files on the disk, because reading other disk files cannot truly reflect the IO capability for reading block chain data. The test is therefore designed to read the block data of one block of the chain; block data is exactly what must be read when the next block is subsequently computed, so the measured value truly reflects the IO performance of reading a block.
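A minimal sketch of how a consensus node might measure this reading duration is given below; the file path argument and the function name are assumptions for illustration:

import time

def measure_block_read(block_path):
    # Read the full block file for the designated historical block number and time it;
    # a shorter duration indicates better current disk IO performance.
    start = time.monotonic()
    with open(block_path, "rb") as f:
        block_data = f.read()
    read_duration = time.monotonic() - start
    return block_data, read_duration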
The second type is calculation performance information, which comprises the calculation result and the calculation duration taken by the corresponding consensus node to perform the specified number of calculation rounds on the block data corresponding to the historical block number using a preset hash algorithm, where the data calculated in each round is the sum of the previous round's result and the random data, and the initial data is the block data corresponding to the historical block number.
Specifically, in this embodiment, the calculation performance information of a consensus node is obtained by computing over the data of the entire historical block read in the previous step: the block data is hashed together with the random data in the broadcast information using the preset hash algorithm SHA-256, the computation is iterated for the number of rounds issued by the central node, and the calculation duration of the whole process is recorded. Because each consensus node must compute over the block data of the historical block number designated by the central node together with the random data, a forger cannot compute the result in advance: the correct result depends on the random data, so as soon as the random data changes the whole result changes. Meanwhile, the calculation round number and the specific block to compute over are designated by the central node, so no consensus node can be appointed in advance or precompute the result, which avoids unfair competition through cheating. The calculation algorithm is as follows:
result = block;                       // block is the historical block data read from disk; result is the initial input
for (int i = 0; i < Times; i++) {     // Times is the calculation round number issued by the central node
    result = SHA-256(result + rand);  // rand is the random data sent by the central node
}
Here the SHA-256 algorithm is used to compute the hash iteratively, and rand is the random data sent by the central node. The result of each round is the initial data of the next round, and the random data must be added in every iteration so that it plays a role in every round, which makes forgery difficult. After the computation is completed, the calculation duration consumed by the whole process is recorded; the shorter the calculation duration, the better the calculation performance of the consensus node. In a specific implementation, the preset hash algorithm may be chosen according to actual needs, as long as the number of bits of the hash result exceeds a preset number of bits (e.g., 50, 100, etc.), which is not limited in this embodiment.
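For concreteness, the iteration can be written as the following runnable sketch; treating the block data and the random data as byte strings and concatenating them to form the "sum" is an assumption of the sketch:

import hashlib
import time

def compute_performance(block_data: bytes, rand: bytes, rounds: int):
    # Iterate SHA-256 over the historical block data, mixing the random data
    # into every round so the result cannot be precomputed.
    start = time.monotonic()
    result = block_data
    for _ in range(rounds):
        result = hashlib.sha256(result + rand).digest()
    calc_duration = time.monotonic() - start
    return result.hex(), calc_duration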
Thus, through step S102, the central node receives the read-write performance information and the calculation performance information that each consensus node feeds back, according to the broadcast information, in the manner specified by the consensus mechanism. The central node then determines the network performance information of each consensus node through step S103. Specifically, this step may determine the receiving duration for receiving the read-write performance information and/or the calculation performance information sent by each consensus node, and the network performance information of the consensus node comprises this receiving duration.
Specifically, in this embodiment, the target node that obtains the packing right must broadcast the block to the core nodes in the block chain network after it finishes computing the block, so the speed of broadcasting to the core nodes reflects the state of its network and determines its actual ability to pack and uplink the new block data. Broadcasting block data uses the upstream direction of the network, so the moment at which a consensus node receives the broadcast information from the central node cannot reflect its network performance; its upstream bandwidth can only be observed when the node reports data to the central node. Therefore, after each consensus node has computed the SHA-256 result in the manner described above, it uploads the result data to the central node, and the central node records every item of performance information of that consensus node, including the disk IO performance information and the calculation performance information. The central node measures the total receiving duration of the uploaded data, from the start of the upload to its end, i.e., the time required to upload the read-write performance information and/or the calculation performance information, and thereby obtains the network performance information.
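A minimal sketch of how the central node might record this receiving duration per consensus node follows; the report structure and the two timing hooks are assumptions for illustration:

import time

node_reports = {}

def on_upload_begin(node_id):
    # Mark the moment a consensus node starts uploading its feedback.
    node_reports[node_id] = {"upload_start": time.monotonic()}

def on_upload_complete(node_id, read_duration, calc_result, calc_duration):
    # The elapsed time from the start of the upload to its end is taken as
    # the node's receiving duration, i.e., its network performance information.
    report = node_reports[node_id]
    report["receive_duration"] = time.monotonic() - report["upload_start"]
    report["read_duration"] = read_duration
    report["calc_result"] = calc_result
    report["calc_duration"] = calc_duration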
After determining the read-write performance information, the calculation performance information, and the network performance information of each consensus node, the central node may determine, through step S104, the candidate nodes for packing the new block onto the chain from the consensus nodes of the entire network based on these three kinds of information, which may specifically be implemented by the following steps:
determining a first candidate consensus node with a correct calculation result from all consensus nodes based on the calculation result of each consensus node;
performing weighted calculation on the reading duration, the calculation duration and the receiving duration corresponding to each first candidate node according to a preset weighted mode to obtain a comprehensive performance value of the first candidate node, wherein a calculation formula of the weighted calculation is Sum = CPU × K1 + IO × K2 + Network × K3, K1 + K2 + K3 = 1, Sum represents the comprehensive performance value, CPU represents the calculation duration, IO represents the reading duration, Network represents the receiving duration, and K1> K3> K2;
and sequencing all the first candidate common identification nodes from small to large according to the comprehensive performance value, and taking the first N first candidate nodes as the candidate nodes meeting the preset condition, wherein N is an integer larger than 0.
Specifically, in this embodiment, the central node stores and verifies the performance information reported by each consensus node. Before receiving the calculation performance information reported by the consensus nodes, the central node itself uses the random data and the calculation round number in the broadcast information, together with the same preset hash algorithm, to compute over the block data of the designated historical block number and obtain its own calculation result.
Therefore, after receiving the performance information fed back by each consensus node, the central node can compare each node's calculation result with its own; if they are consistent, that node's calculation is correct. The first candidate consensus nodes, i.e., those whose calculation results are correct, are selected in this way. Next, the first candidate consensus nodes are sorted by their comprehensive performance values, which are obtained from the three kinds of performance information in the preset weighted manner and sorted from small to large. For the block chain, the calculation performance index is weighted first, then the network index, and then the disk IO index, and the comprehensive performance value is calculated as follows:
Sum = CPU × K1 + IO × K2 + Network × K3, where K1 + K2 + K3 = 1
Here CPU represents the calculation duration in the calculation performance information, IO represents the reading duration in the read-write performance information, and Network represents the receiving duration in the network performance information. Each kind of performance data is given a weight K1, K2 or K3, and the weights sum to 1. Computing power reflects the consensus node's capacity to pack the new block data and has the greatest influence on the efficiency of packing and uplinking, so it is the factor considered most and K1 is the largest. The capability of broadcasting the new block information to the whole network also affects packing efficiency to some extent, so K3 is smaller than K1. The capability of reading block data from the disk is the last factor considered, with a smaller effect on packing efficiency than computing power and network capability, so K2 is smaller than K3, i.e., K1 > K3 > K2.
Traditional packing-right competition is basically decided by computing capacity, whereas the method in this embodiment takes the characteristics of alliance chains and private chains into account: because the number of nodes is relatively small and the amount of data to be recorded is small, the requirement on computing capacity is not high. The method considers not only the node's computing capacity but also its data-reading capability and data-uploading capability, so a node's comprehensive performance is reflected from multiple aspects. When the packing node is determined according to this comprehensive performance, it avoids the situation in which a node chosen purely for its computing capacity packs slowly because its other capabilities are insufficient.
A smaller comprehensive performance value represents better overall performance of the consensus node. Determining each consensus node's comprehensive performance value by weighting reflects all three indicators, so a faster node is ranked higher without producing anomalies. For example, suppose a consensus node has the fastest CPU, consuming almost 0 ms, but very poor disk IO, taking for example 10 seconds to read the disk data, and normal network performance of, say, 1 second; its total is then 11 seconds. By weighting and summing the three items, every aspect of performance contributes to the overall comprehensive performance value, which avoids the case where a consensus node competes unreasonably for the right to chain the new block because one indicator is particularly good while another is particularly bad, which would make packing the new block inefficient. After the central node sorts the comprehensive performance values of the first candidate nodes from small to large, the top N (e.g., 10000, 20000, etc.) first candidate nodes can be selected as the candidate nodes meeting the preset condition. In a specific implementation, the sorting manner and the number of selected first candidate nodes may be set according to actual needs, which is not limited in this embodiment.
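Putting the candidate selection together, the following sketch uses illustrative weights and an illustrative N; the embodiment only requires K1 + K2 + K3 = 1 and K1 > K3 > K2, so the concrete values below are assumptions:

K1, K2, K3 = 0.5, 0.2, 0.3   # illustrative weights satisfying K1 > K3 > K2
N = 10                       # illustrative number of candidate nodes to keep

def select_candidates(reports, expected_result):
    # Keep only the nodes whose hash result matches the central node's own computation,
    # score them with the weighted sum, and take the N nodes with the smallest
    # comprehensive performance value (smaller is better).
    correct = [r for r in reports if r["calc_result"] == expected_result]
    for r in correct:
        r["score"] = (r["calc_duration"] * K1
                      + r["read_duration"] * K2
                      + r["receive_duration"] * K3)
    correct.sort(key=lambda r: r["score"])
    return correct[:N]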
Finally, in step S105 the central node determines a target node from the candidate nodes based on the number of history blocks generated by each candidate node. Specifically, the target node may be determined by the following steps:
determining a target value based on the number of history blocks generated by each of the candidate nodes, the target value being greater than a maximum of the number of history blocks generated,
determining the number of node labels generating each candidate node as the difference value of the target value and the number of history blocks generated by the candidate node;
generating node labels corresponding to the candidate nodes according to the number of the node labels;
randomly determining a target node label from all the generated node labels, and taking a candidate node corresponding to the target node label as the target node.
Specifically, in this embodiment, when selecting the target node that will pack the new block onto the chain, balance should be taken into account, and the algorithm should not be so complex that it requires a large amount of computation and therefore consumes resources and time.
The central node records the label of the target node that generates each block, and it balances the selection according to the number of history blocks generated by each candidate node. Selecting through such an algorithm keeps all consensus nodes balanced and avoids the situation in which one consensus node, simply because it performs best, is selected every time; the central node's selection principle is to equalize the number of times each node is selected. The specific algorithm is as follows:
First, for the N candidate nodes screened out by the sorting result, the central node reads the number of blocks each candidate node has generated in the past, finds the candidate node that has generated the most blocks, and records that count as M. It then determines a target value Max slightly larger than M, for example Max = M + 2. For each candidate node, the number of history blocks it has generated is subtracted from Max, and that many node labels are generated for the node. For example, if Max is 10 and candidate node A has generated 2 history blocks, the difference is 8, so 8 node labels are generated for candidate node A. Node labels are generated for every candidate node in this way and stored in an array; once storage is complete, candidate nodes that have generated more blocks have fewer labels in the array, and candidate nodes that have generated fewer blocks have more labels. A random algorithm is then used to shuffle the label data, which can be implemented by a function Rand(Node), where the interface Rand randomly shuffles the data stored in the Node array. The central node then uses the random data generated earlier to select a node from the array, for example by computing Node_Index = rand % Node.Size(), where Node.Size() is the number of node labels stored in the array. Taking the remainder selects one label at random from the Node array: the computed Node_Index is the node label corresponding to the target node, and the target node is thereby determined.
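The label-based selection can be sketched as follows; the helper name, the choice of Max = M + 2 and the example inputs are illustrative assumptions taken from the example above rather than fixed requirements:

import random

def pick_target_node(candidates, blocks_generated, rand_value):
    # Each candidate gets (Max - blocks already generated) labels, so candidates
    # that have generated fewer blocks are more likely to be selected.
    m = max(blocks_generated[node] for node in candidates)
    target_value = m + 2                       # Max is slightly larger than M, as in the example
    labels = []
    for node in candidates:
        labels.extend([node] * (target_value - blocks_generated[node]))
    random.shuffle(labels)                     # corresponds to Rand(Node) in the text
    return labels[rand_value % len(labels)]    # Node_Index = rand % Node.Size()

# Example usage with hypothetical candidates:
candidates = ["nodeA", "nodeB", "nodeC"]
blocks_generated = {"nodeA": 2, "nodeB": 5, "nodeC": 0}
target = pick_target_node(candidates, blocks_generated, rand_value=123456)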
After the central node determines the target node for packing and chaining the new block, it can broadcast confirmation information to the consensus nodes of the whole network. Each consensus node determines from the confirmation information whether it has been allocated the packing right, and the node that is determined to be the target node packs the new block and uploads it to the chain.
The method in this embodiment considers every aspect of a consensus node's performance while also taking fairness into account. Determining the target node that generates the new block in this way greatly improves the efficiency of chaining data quickly, and at the same time the distribution of the packing right also takes fairness into account, ensuring that the distribution of packing rights is open, transparent, fair and reasonable.
Referring to fig. 2, a second embodiment of the present invention provides a data uplink method based on a consensus mechanism, which is applied to a consensus node in a block chain network, and includes:
s201: receiving broadcast information corresponding to a new block, which is broadcast by a central node in the block chain network, wherein the broadcast information comprises a designated historical block number, random data and a calculation round number;
s202: based on the broadcast information, feeding back read-write performance information aiming at the block data corresponding to the historical block number and calculation performance information obtained by calculating the block data corresponding to the historical block number by adopting the random data and the calculation turns;
s203: and if the confirmation information sent by the central node is received, performing packaging uplink on the data of the new block.
The process by which a consensus node performs data uplink based on the consensus mechanism in this embodiment has been described in detail in the foregoing first embodiment, and is not described herein again.
Referring to fig. 3, a third embodiment of the present invention provides a data uplink apparatus based on a consensus mechanism, applied to a central node in a block chain network, including:
a broadcasting unit 301, configured to broadcast, when a new block is generated, broadcast information corresponding to the new block to all consensus nodes in the block chain network, where the broadcast information includes a specified historical block number, random data, and a calculation round number;
a receiving unit 302, configured to receive the read-write performance information for the block data corresponding to the historical block number and the calculation performance information obtained by calculating the block data corresponding to the historical block number using the random data and the calculation round number, which are fed back by each consensus node based on the broadcast information;
a first determining unit 303, configured to determine network performance information of each consensus node based on the read-write performance information and/or the calculation performance information fed back by each consensus node;
a second determining unit 304, configured to determine, based on the read-write performance information, the calculation performance information, and the network performance information of each consensus node, a candidate node that meets a preset condition from all consensus nodes;
a third determining unit 305, configured to determine a target node from the candidate nodes based on the number of history blocks generated by each candidate node, where the target node is configured to pack and uplink the data of the new block.
Further, in this embodiment, the read-write performance information includes the reading duration taken by the corresponding consensus node to read the block data corresponding to the historical block number, and the calculation performance information includes the calculation result and the calculation duration taken by the corresponding consensus node to perform the specified number of calculation rounds on the block data corresponding to the historical block number using a preset hash algorithm, where the data calculated in each round is the sum of the previous round's calculation result and the random data, and the initial data is the block data corresponding to the historical block number.
Further, in this embodiment, the first determining unit 303 is specifically configured to:
and determining the receiving time for receiving the read-write performance information and/or the calculation performance information sent by each consensus node, wherein the network performance information of the consensus node comprises the receiving time.
Further, in this embodiment, the second determining unit 304 is specifically configured to:
determining a first candidate consensus node with a correct calculation result from all consensus nodes based on the calculation result of each consensus node;
performing weighted calculation on the reading duration, the calculation duration and the receiving duration corresponding to each first candidate node according to a preset weighted mode to obtain a comprehensive performance value of the first candidate node, wherein a calculation formula of the weighted calculation is Sum = CPU × K1 + IO × K2 + Network × K3, K1 + K2 + K3 = 1, Sum represents the comprehensive performance value, CPU represents the calculation duration, IO represents the reading duration, Network represents the receiving duration, and K1> K3> K2;
and sequencing all the first candidate common identification nodes from small to large according to the comprehensive performance value, and taking the first N first candidate nodes as the candidate nodes meeting the preset condition, wherein N is an integer larger than 0.
Further, in this embodiment, the third determining unit 305 is specifically configured to:
determining a target value based on the number of history blocks generated by each of the candidate nodes, the target value being greater than a maximum of the number of history blocks generated,
determining the number of node labels generating each candidate node as the difference value of the target value and the number of history blocks generated by the candidate node;
generating node labels corresponding to the candidate nodes according to the number of the node labels;
randomly determining a target node label from all the generated node labels, and taking a candidate node corresponding to the target node label as the target node.
Referring to fig. 4, a fourth embodiment of the present invention provides a data uplink apparatus based on a consensus mechanism, applied to a consensus node in a block chain network, including:
a receiving unit 401, configured to receive broadcast information corresponding to a new block, where the broadcast information includes a specified historical block number, random data, and a calculation round number, and is broadcast by a central node in the block chain network;
a feedback unit 402, configured to feedback, based on the broadcast information, read/write performance information for the block data corresponding to the historical block number and calculation performance information obtained by calculating the block data corresponding to the historical block number using the random data and the calculation turns;
a packing unit 403, configured to pack and uplink the data of the new block if the confirmation information sent by the central node is received.
Referring to fig. 5, a fifth embodiment of the present invention provides a data uplink apparatus based on a consensus mechanism. The apparatus of this embodiment comprises a processor 501, a memory 502 and a computer program stored in the memory and executable on the processor, such as a program corresponding to the data uplink method based on the consensus mechanism in the first embodiment or the second embodiment. The processor, when executing the computer program, implements the steps of the data uplink method based on the consensus mechanism in the first embodiment or the second embodiment. Alternatively, the processor implements the functions of the modules/units in the apparatus of the third embodiment or the fourth embodiment when executing the computer program.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory and executed by the processor to implement the invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program in the computer apparatus.
The device may include, but is not limited to, a processor, a memory. It will be appreciated by those skilled in the art that the schematic diagram 5 is merely an example of a computer apparatus and is not intended to limit the apparatus, and may include more or less components than those shown, or some components in combination, or different components, for example, the apparatus may also include input and output devices, network access devices, buses, etc.
The Processor 501 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the computer device and connects the various parts of the overall computer device using various interfaces and lines.
The memory 502 may be used to store the computer programs and/or modules, and the processor implements the various functions of the computer device by running or executing the computer programs and/or modules stored in the memory and by invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the cellular phone (such as audio data, video data, etc.), and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
A sixth embodiment of the present invention provides a computer readable storage medium, on which a computer program is stored, and the apparatus in the third embodiment and the functional unit integrated with the data uplink apparatus based on the consensus mechanism in the fourth embodiment can be stored in a computer readable storage medium if they are implemented in the form of software functional units and sold or used as independent products. Based on such understanding, all or part of the processes of the method for uplink data based on the consensus mechanism according to the first embodiment or the second embodiment of the present invention can also be implemented by a computer program, which can be stored in a computer-readable storage medium and can be executed by a processor to implement the steps of the above-mentioned method embodiments. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
In the technical solution of the embodiments of the invention, for block chain networks that are private chains or alliance chains, the central node broadcasts, when a new block is generated, broadcast information corresponding to the new block to all consensus nodes in the block chain network, where the broadcast information comprises a specified historical block number, random data and a calculation round number. After each consensus node in the whole network receives the broadcast information, it feeds back read-write performance information for the block data corresponding to the historical block number and calculation performance information obtained by calculating the block data corresponding to the historical block number using the random data and the calculation round number. The central node determines the network performance information of each consensus node based on the read-write performance information and/or the calculation performance information it feeds back, and, based on the consensus mechanism, comprehensively considers the read-write performance information, the calculation performance information and the network performance information of each consensus node to determine the candidate nodes more reasonably; finally, the number of history blocks generated by each candidate node is weighed so that the target node is determined from the candidate nodes. Therefore, determining the target node that generates the new block in this way can greatly improve the efficiency of fast data chaining, while the distribution of the packing right also takes fairness into account, guaranteeing that the distribution of packing rights is open, transparent, fair and reasonable.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. A data uplink method based on a consensus mechanism, applied to a central node in a blockchain network, characterized by comprising the following steps:
in the case of generating a new block, broadcasting broadcast information corresponding to the new block to all consensus nodes in the blockchain network, wherein the broadcast information comprises a specified historical block number, random data, and a number of calculation rounds;
receiving, as fed back by each consensus node based on the broadcast information, read-write performance information for the block data corresponding to the historical block number and calculation performance information obtained by calculating on the block data corresponding to the historical block number using the random data and the number of calculation rounds;
determining network performance information of each consensus node based on the read-write performance information and/or the calculation performance information fed back by that consensus node;
determining, from all the consensus nodes, candidate nodes satisfying a preset condition based on the read-write performance information, the calculation performance information, and the network performance information of each consensus node;
and determining a target node from the candidate nodes based on the number of historical blocks generated by each candidate node, wherein the target node is configured to pack the data of the new block and uplink it.
2. The method of claim 1, wherein the read-write performance information comprises a reading duration taken by the corresponding consensus node to read the block data corresponding to the historical block number, and the calculation performance information comprises a calculation result and a calculation duration obtained by the corresponding consensus node performing the specified number of calculation rounds on the block data corresponding to the historical block number using a predetermined hash algorithm, wherein the data calculated in each round is the sum of the calculation result of the previous round and the random data, and the initial data is the block data corresponding to the historical block number.
3. The method of claim 2, wherein determining the network performance information of each consensus node based on the read-write performance information and/or the calculation performance information fed back by that consensus node comprises:
determining a receiving duration for receiving the read-write performance information and/or the calculation performance information sent by each consensus node, wherein the network performance information of the consensus node comprises the receiving duration.
4. The method of claim 3, wherein determining, from all the consensus nodes, the candidate nodes satisfying the preset condition based on the read-write performance information, the calculation performance information, and the network performance information of each consensus node comprises:
determining, from all the consensus nodes, first candidate nodes whose calculation results are correct based on the calculation result of each consensus node;
performing, in a preset weighting manner, a weighted calculation on the reading duration, the calculation duration, and the receiving duration corresponding to each first candidate node to obtain a comprehensive performance value of that first candidate node, wherein the formula of the weighted calculation is Sum = CPU × K1 + IO × K2 + Network × K3, with K1 + K2 + K3 = 1 and K1 > K3 > K2, where Sum represents the comprehensive performance value, CPU represents the calculation duration, IO represents the reading duration, and Network represents the receiving duration;
and sorting all the first candidate nodes in ascending order of their comprehensive performance values, and taking the first N first candidate nodes as the candidate nodes satisfying the preset condition, wherein N is an integer greater than 0.
5. The method of claim 1, wherein said determining a target node from the candidate nodes based on the number of historical blocks generated by each of said candidate nodes comprises:
determining a target value based on the number of historical blocks generated by each candidate node, the target value being greater than the maximum of the numbers of historical blocks generated;
determining, for each candidate node, the number of node labels to be generated as the difference between the target value and the number of historical blocks generated by that candidate node;
generating node labels corresponding to the candidate nodes according to the number of the node labels;
randomly determining a target node label from all the generated node labels, and taking a candidate node corresponding to the target node label as the target node.
6. A data uplink apparatus based on a consensus mechanism, applied to a central node in a blockchain network, comprising:
a broadcasting unit, configured to broadcast, in the case that a new block is generated, broadcast information corresponding to the new block to all consensus nodes in the blockchain network, wherein the broadcast information comprises a specified historical block number, random data, and a number of calculation rounds;
a receiving unit, configured to receive, as fed back by each consensus node based on the broadcast information, read-write performance information for the block data corresponding to the historical block number and calculation performance information obtained by calculating on that block data using the random data and the number of calculation rounds;
a first determining unit, configured to determine network performance information of each consensus node based on the read-write performance information and/or the calculation performance information fed back by that consensus node;
a second determining unit, configured to determine, from all the consensus nodes, candidate nodes satisfying a preset condition based on the read-write performance information, the calculation performance information, and the network performance information of each consensus node;
and a third determining unit, configured to determine a target node from the candidate nodes based on the number of historical blocks generated by each candidate node, wherein the target node is configured to pack the data of the new block and uplink it.
7. A data uplink apparatus based on a consensus mechanism, comprising a processor, wherein the processor is configured to implement the steps of the data uplink method based on a consensus mechanism according to any one of claims 1-5 when executing a computer program stored in a memory.
8. A readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the steps of the method for data uplink based on a consensus mechanism as claimed in any one of claims 1-5.
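As an illustration of claims 4 and 5 above, the following Python sketch shows one way the central node could score the first candidate nodes with the weighted formula Sum = CPU × K1 + IO × K2 + Network × K3, keep the N best-scoring nodes, and then run the label-based lottery that gives each candidate (target value - number of historical blocks generated) labels. The concrete weight values, the NodeReport fields, and the use of random.choice are assumptions for illustration; only the formula, the constraints K1 + K2 + K3 = 1 and K1 > K3 > K2, and the label rule come from the claims.

import random
from dataclasses import dataclass


@dataclass
class NodeReport:
    node_id: str
    calc_result: str      # hash result reported by the consensus node
    calc_duration: float  # CPU: time spent on the calculation rounds
    read_duration: float  # IO: time spent reading the block data
    recv_duration: float  # Network: time taken to receive the feedback


def select_target(reports, expected_result, blocks_generated, n,
                  k1=0.5, k2=0.2, k3=0.3):
    # Sketch of candidate selection (claim 4) and the lottery (claim 5).
    # The weights satisfy k1 + k2 + k3 = 1 and k1 > k3 > k2 but are otherwise
    # illustrative; blocks_generated maps node_id -> historical block count.

    # Claim 4: keep only nodes whose calculation result is correct, compute
    # the composite value, and take the N nodes with the smallest values.
    correct = [r for r in reports if r.calc_result == expected_result]
    scored = sorted(
        correct,
        key=lambda r: r.calc_duration * k1 + r.read_duration * k2
                      + r.recv_duration * k3,
    )
    candidates = scored[:n]

    # Claim 5: pick a target value larger than the largest number of
    # historical blocks any candidate has generated, give each candidate
    # (target value - blocks generated) node labels, and draw one label at
    # random, so nodes that have packed fewer blocks hold more labels.
    target_value = max(blocks_generated[c.node_id] for c in candidates) + 1
    labels = []
    for c in candidates:
        labels.extend([c.node_id] * (target_value - blocks_generated[c.node_id]))
    return random.choice(labels)

Because every candidate receives at least one label, the draw never excludes a node outright; it only biases the packing right toward candidates that have generated fewer historical blocks, which matches the fairness goal stated in the description.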
CN202010537030.1A 2020-06-12 2020-06-12 Consensus-mechanism-based data uplink method, device and readable storage medium Active CN111666343B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010537030.1A CN111666343B (en) 2020-06-12 2020-06-12 Consensus-mechanism-based data uplink method, device and readable storage medium

Publications (2)

Publication Number Publication Date
CN111666343A (en) 2020-09-15
CN111666343B CN111666343B (en) 2022-05-10

Family

ID=72387294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010537030.1A Active CN111666343B (en) Consensus-mechanism-based data uplink method, device and readable storage medium

Country Status (1)

Country Link
CN (1) CN111666343B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107124403A (en) * 2017-04-14 2017-09-01 朱清明 The generation method and computing device of common recognition block in block chain
WO2019192062A1 (en) * 2018-04-04 2019-10-10 上海金丘信息科技股份有限公司 Dynamic stake consensus method based on trusted members
CN110995439A (en) * 2019-11-20 2020-04-10 上海链颉科技有限公司 Block chain consensus method, electronic device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI Zhongcheng et al.: "A Scalable Consensus Protocol Based on Stake Representatives", Journal of Applied Sciences *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112600887A (en) * 2020-12-03 2021-04-02 中国联合网络通信集团有限公司 Computing power management method and device
CN112954009A (en) * 2021-01-27 2021-06-11 咪咕音乐有限公司 Block chain consensus method, device and storage medium
CN113486118A (en) * 2021-07-21 2021-10-08 银清科技有限公司 Consensus node selection method and device
CN113486118B (en) * 2021-07-21 2023-09-22 银清科技有限公司 Consensus node selection method and device

Also Published As

Publication number Publication date
CN111666343B (en) 2022-05-10

Similar Documents

Publication Publication Date Title
CN111666343B (en) Consensus-mechanism-based data uplink method, device and readable storage medium
CN108596621B (en) Block chain accounting node generation method and device, computer equipment and storage medium
CN106355391B (en) Service processing method and device
CN107454110A (en) A kind of data verification method and server
CN104965844A (en) Information processing method and apparatus
CN108347483B (en) Decentralized computing system based on double-layer network
CN112926897A (en) Client contribution calculation method and device based on federal learning
CN111061505B (en) Machine learning-based optimized AB packaging method
CN109889397B (en) Lottery method, block generation method, equipment and storage medium
CN107465698A (en) A kind of data verification method and server
CN107682328A (en) A kind of data verification method and client
CN107623865A (en) A kind of data verification method and server
CN107426253A (en) A kind of data verification method and client
CN108241970B (en) Mining method and device based on block chain and computer readable storage medium
CN107528855A (en) A kind of data verification method and server
CN110675183B (en) Marketing object determining method, marketing popularization method and related devices
CN111047348A (en) Novel block chain consensus algorithm and block chain network system based on same
CN112651744A (en) Block chain-based credit mutual evaluation method and system and electronic equipment
CN113988831A (en) Transfer method based on alliance chain
CN116055052A (en) Block chain-based data processing method, device, equipment and readable storage medium
CN107507020B (en) Method for obtaining network propagation influence competitive advantage maximization
CN112995167A (en) Kafka mechanism-based power utilization information acquisition method, block chain network and user side
CN108415686B (en) Account-splitting calculation method and device in random number providing process
CN116185731A (en) Terminal test system and method based on blockchain network and electronic equipment
CN110298593A (en) Human cost calculation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200915

Assignee: Yidu Lehuo Network Technology Co.,Ltd.

Assignor: WUHAN DOUYU YULE NETWORK TECHNOLOGY Co.,Ltd.

Contract record no.: X2023980041383

Denomination of invention: Method, device, and readable storage medium for data chaining based on consensus mechanism

Granted publication date: 20220510

License type: Common License

Record date: 20230908
