Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 is a flowchart of a parallel chain block generation method according to an embodiment of the present invention. As shown in fig. 1, in this embodiment, the present invention provides a parallel chain block generation method suitable for parallel chain nodes, where the method includes the following two steps:
s12: receiving first block data of a first main chain block sent by a main chain node, updating a temporary parallel chain block queue according to the first block data and storing the temporary parallel chain block queue in a local database; wherein the first block data includes first block header information of the first main chain block, the first block header information including a first parent hash of the first main chain block;
s13: reading the temporary parallel chain blocks of the temporary parallel chain block queue from the local database to generate parallel chain blocks, which are then executed.
In the proposed parallel chain mechanism (refer to the parallel chain patent documents filed by the applicant), after a parallel chain receives block data of a main chain block (assuming that the main chain block includes parallel chain transactions of a current parallel chain), the parallel chain transaction of the current parallel chain is screened to generate a parallel chain block, and after the generated parallel chain block is executed, a next main chain block is received.
The applicant originally designed the mechanism this way because parallel chain nodes must serially alternate between receiving block data and executing parallel chain blocks (assuming that each main chain block contains a parallel chain transaction of the parallel chain to which the node belongs); i.e., a parallel chain node must first generate and execute a parallel chain block from the block data of the main chain block (100) with a block height of 100, and only then generate and execute a parallel chain block from the block data of the main chain block (101) with a block height of 101;
if the parallel chain node reads block (101) first and generates and executes a parallel chain block from the block data of block (101), the parallel chain it generates will differ from the parallel chains generated by other parallel chain nodes. To ensure that the parallel chains generated by different parallel chain nodes are identical, even if a parallel chain node reads block (101) first, it must complete generating and executing the parallel chain block for the block data of block (100) before generating and executing the parallel chain block for the block data of block (101). Under this constraint, parallel execution cannot be designed in, so CPU resource utilization cannot be improved, and storage space is wasted holding block (101) in the meantime;
considering the scenario in which parallel chain nodes handle a main chain fork rollback: in the current main chain-parallel chain mechanism, parallel chain nodes need to completely reproduce the rollback sequence of the main chain node; that is, when a main chain node generates block (100), generates block (101), rolls back block (101), and generates block (101)', the parallel chain nodes likewise need to generate a parallel chain block according to block (100), generate a parallel chain block according to block (101), roll back the parallel chain block generated for block (101), and generate a parallel chain block according to block (101)';
the applicant observes that, by the time a parallel chain node generates the parallel chain block for block (100), the main chain node may already have generated block (101), rolled back block (101), and generated block (101)'. In that case the parallel chain node does not need to completely reproduce the rollback sequence of the main chain node and can directly generate the parallel chain block according to block (101)'. Such a mechanism is more flexible and extensible; the applicant therefore provides the parallel chain block generation method of the present application, in which the steps are executed in parallel.
Specifically, thread one and thread two are executed in parallel;
Step S12 is executed in thread one, where the parallel chain node receives the first block data of the first main chain block sent by the main chain node, updates the temporary parallel chain block queue according to the first block data, and stores the temporary parallel chain block queue in the local database; wherein the first block data includes first block header information of the first main chain block, the first block header information including a first parent hash of the first main chain block;
in thread two, step S13 is executed, and the parallel chain node reads the temporary parallel chain blocks of the temporary parallel chain block queue from the local database to generate parallel chain blocks and executes them.
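The two-thread arrangement above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the queue here is in-memory only (the described method also persists it to the local database), and the block dictionaries and the `execute` callback are hypothetical stand-ins.

```python
import queue
import threading

def run_pipeline(main_chain_blocks, execute):
    """Sketch of steps S12/S13 running in parallel: thread one enqueues
    temporary parallel chain blocks built from received main chain block
    data; thread two dequeues them, generates parallel chain blocks, and
    executes them."""
    temp_queue = queue.Queue()          # stands in for the persisted queue
    results = []

    def thread_one():                   # step S12: receive and enqueue
        for block_data in main_chain_blocks:
            temp_queue.put(block_data)  # update the temporary queue
        temp_queue.put(None)            # sentinel: no more blocks

    def thread_two():                   # step S13: read, generate, execute
        while True:
            temp_block = temp_queue.get()
            if temp_block is None:
                break
            results.append(execute(temp_block))

    t1 = threading.Thread(target=thread_one)
    t2 = threading.Thread(target=thread_two)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results
```

Because the queue decouples the two threads, thread one can continue receiving the block data of block (101) while thread two is still executing the parallel chain block for block (100).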
In further embodiments, the temporary parallel chain block queue may also be kept in memory, so that the parallel chain node can read the temporary parallel chain blocks of the queue from memory more efficiently; however, the temporary parallel chain block queue still needs to be stored in the local database to prevent the in-memory data from being lost in the event of downtime.
The above-described embodiments enhance the flexibility, robustness and scalability of the blockchain system.
Fig. 2 is a flowchart of step S12 in a preferred embodiment of the method shown in fig. 1. As shown in fig. 2, in a preferred embodiment, step S12 includes:
s120: receiving first block data of a first main chain block sent by a main chain node, and judging whether the first block data contains parallel chain transaction of a current parallel chain:
if yes, go to step S121: generating a first temporary parallel chain block according to the first block data, updating a temporary parallel chain block queue according to the first temporary parallel chain block and first block header information, and storing the temporary parallel chain block queue in a local database;
otherwise, step S122 is executed: and updating the temporary parallel chain block queue according to the first block head information and storing the temporary parallel chain block queue in a local database.
In the above embodiment, the first block data includes all transactions of the first main chain block, so when the parallel chain node receives the first block data of the first main chain block sent by the main chain node, it needs to screen out the parallel chain transactions of the current parallel chain by determining whether the first block data contains any parallel chain transaction of the current parallel chain;
in further embodiments, the parallel chain transactions of the current parallel chain may be pre-screened by the main chain node and sent to the current parallel chain node (i.e., the first block data includes only the parallel chain transactions of the current parallel chain in the first main chain block, and if the first main chain block contains no parallel chain transaction of the current parallel chain, the first block data is empty). In this case the parallel chain node only needs to determine whether the first block data is empty: if not empty, it generates a first temporary parallel chain block according to the first block data, updates the temporary parallel chain block queue according to the first temporary parallel chain block and the first block header information, and stores the temporary parallel chain block queue in the local database; if empty, it updates the temporary parallel chain block queue according to the first block header information and stores the temporary parallel chain block queue in the local database. The same technical effect can be achieved.
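The screening branch of steps S120-S122 can be sketched as below. The field names (`header`, `txs`, `chain_id`) and the dictionary-backed database are hypothetical stand-ins chosen for illustration; they are not specified by the method itself.

```python
def update_temp_queue(block_data, current_chain_id, temp_queue, db):
    """Sketch of steps S120-S122 (Fig. 2): screen the first block data for
    parallel chain transactions of the current parallel chain; if any are
    found, enqueue a first temporary parallel chain block, otherwise record
    only the block header information."""
    header = block_data["header"]              # carries the parent hash
    chain_txs = [tx for tx in block_data["txs"]
                 if tx["chain_id"] == current_chain_id]
    if chain_txs:                              # step S121
        temp_queue.append({"header": header, "txs": chain_txs})
    else:                                      # step S122: header only
        temp_queue.append({"header": header, "txs": []})
    db["temp_queue"] = list(temp_queue)        # persist to the local database
    return temp_queue
```

Note that even when no transaction of the current parallel chain is present, the header information is still queued, so the chain of parent hashes recorded in the queue stays unbroken.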
FIG. 3 is a flow diagram of a preferred embodiment of the method shown in FIG. 1. As shown in fig. 3, in a preferred embodiment, the method further includes:
s14: acquiring the height of a first parallel chain block of a current parallel chain;
s15: judging whether the remainder of the height of the first parallel chain block and a pre-configured first threshold is 0:
if yes, go to step S16: calculating the difference between the first parallel chain block height and the pre-configured first threshold to obtain a second parallel chain block height; and
s17: and deleting the temporary parallel chain blocks in the temporary parallel chain block queue before the second parallel chain block height.
Specifically, assume that the first threshold is 1000 and the first parallel chain block height of the current parallel chain is 3000;
in step S14, the parallel chain node acquires a first parallel chain block height of the current parallel chain;
in step S15, the parallel chain node determines whether the remainder of the first parallel chain block height divided by the preconfigured first threshold is 0:
since the first parallel chain block height is 3000, the preconfigured first threshold is 1000, and the remainder of 3000 divided by 1000 is 0, step S16 is executed: the parallel chain node calculates the difference between the first parallel chain block height and the pre-configured first threshold to obtain the second parallel chain block height, namely 2000; and
in step S17, the parallel chain node deletes the temporary parallel chain blocks in the temporary parallel chain block queue before block height 2000.
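The pruning rule of steps S14-S17 can be sketched as follows; this is a simplified in-memory model (queue entries reduced to a hypothetical `height` field), not the claimed database operation.

```python
def prune_temp_queue(temp_queue, first_height, threshold=1000):
    """Sketch of steps S14-S17: when the newest parallel chain block height
    is an exact multiple of the threshold, compute the second height as
    (first_height - threshold) and drop every temporary parallel chain
    block below it."""
    if first_height % threshold != 0:          # step S15: remainder check
        return temp_queue                      # nothing to prune this round
    cutoff = first_height - threshold          # step S16: second height
    return [b for b in temp_queue if b["height"] >= cutoff]   # step S17
```

With the worked example above (`first_height = 3000`, `threshold = 1000`), the cutoff is 2000 and every queued temporary block below that height is discarded.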
In further embodiments, the first threshold may be configured to other values according to actual requirements, for example, to 500, and the same technical effect may be achieved.
In further embodiments, the required remainder may be configured to other values according to actual requirements, for example to 1, i.e., it is determined whether the remainder of the first parallel chain block height divided by the preconfigured first threshold is 1; the same technical effect can be achieved.
In more embodiments, a method for deleting older temporary parallel chain blocks in the local database may be configured according to actual needs, so that the same technical effect may be achieved. For example configured to: acquiring the height of a first parallel chain block of a current parallel chain; calculating a second parallel chain block height according to the first parallel chain block height and a pre-configured first threshold value; deleting all temporary parallel chain blocks in the temporary parallel chain block queue before the second parallel chain block height.
Deleting older temporary parallel chain blocks in the local database saves storage space. However, in two scenarios the parallel chain node cannot read a temporary parallel chain block from the temporary parallel chain block queue to generate a parallel chain block: first, when the parallel chain node is switched to a new main chain node through the load balancing server and the difference between the block height of its newest parallel chain block and the block height of the newest temporary parallel chain block in the new main chain node's temporary parallel chain block queue exceeds the first threshold; and second, when the main chain rolls back more than the first threshold number of blocks. In such cases the parallel chain node traverses the main chain blocks backwards until a third main chain block is found whose third parent hash is the same as the block hash of the main chain block corresponding to a third temporary parallel chain block in the temporary parallel chain block queue; the parallel chain node then deletes all temporary parallel chain blocks after the third temporary parallel chain block in the temporary parallel chain block queue, updates the temporary parallel chain block queue according to the block data of the third main chain block, and stores the temporary parallel chain block queue in the local database.
The embodiment deletes part of the temporary parallel chain blocks in the local database, thereby improving the query efficiency of the local database.
Fig. 4 is a flowchart of step S12 in another preferred embodiment of the method shown in fig. 1. As shown in fig. 4, in another preferred embodiment, step S12 includes:
s123: receiving first block data of a first main chain block sent by a main chain node;
s124: verifying whether the block hash of the main chain block corresponding to the latest temporary parallel chain block in the temporary parallel chain block queue is the same as the first parent hash:
the method comprises the following steps: step S125 is executed: generating a temporary parallel chain block according to the first block data to update the temporary parallel chain block queue and store the temporary parallel chain block queue in a local database;
otherwise, step S126 is executed: traversing the main chain blocks backwards until a second main chain block is found; the second parent hash of the second main chain block is the same as the block hash of the main chain block corresponding to a second temporary parallel chain block in the temporary parallel chain block queue; and
s127: deleting all temporary parallel chain blocks after the second temporary parallel chain block in the temporary parallel chain block queue; and
s128: and updating the temporary parallel chain block queue according to the block data of the second main chain block, and storing the temporary parallel chain block queue into a local database.
Assuming a first application scenario, the first application scenario is as follows:
when main chain node A, to which parallel chain node a is connected, goes down, a is connected to a new main chain node B through the load balancing server. At this moment, the block hash BlockHash(100) of the main chain block corresponding to the latest temporary parallel chain block in a's temporary parallel chain block queue may be different from the first parent hash ParentHash(101') of the first main chain block of the new main chain node; if a updates the temporary parallel chain block queue directly according to the first block data of the first main chain block of B, the updated temporary parallel chain block queue will be erroneous;
the problem generated by the first application scenario can be solved through steps S123 to S128;
assume that the blockhash (100) is identical to the parenthosh (101)':
in step S123, a receives the first block data of the first main chain block sent by B;
in step S124, a verifies whether the block hash BlockHash(100) of the main chain block corresponding to the latest temporary parallel chain block in the temporary parallel chain block queue is the same as the first parent hash ParentHash(101'):
since BlockHash(100) is the same as ParentHash(101'), step S125 is executed: a generates a temporary parallel chain block according to the first block data to update the temporary parallel chain block queue, and stores the temporary parallel chain block queue in the local database;
assume that the blockhash (100) is different from the parenthosh (101)':
in step S123, a receives the first block data of the first main chain block sent by B;
in step S124, a verifies whether the block hash BlockHash(100) of the main chain block corresponding to the latest temporary parallel chain block in the temporary parallel chain block queue is the same as the first parent hash ParentHash(101'):
since BlockHash(100) is different from ParentHash(101'), step S126 is executed: a traverses the main chain blocks backwards until the second main chain block is found; the second parent hash of the second main chain block is the same as the block hash of the main chain block corresponding to the second temporary parallel chain block in the temporary parallel chain block queue; and
in step S127, a deletes all temporary parallel chain blocks after the second temporary parallel chain block in the temporary parallel chain block queue; and
in step S128, a updates the temporary parallel chain block queue according to the block data of the second main chain block, and stores the updated temporary parallel chain block queue in the local database.
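The resynchronization of steps S123-S128 can be sketched as follows. The hash-keyed block index, the queue entry shape, and the field names are hypothetical simplifications; in the described method the queue lives in the local database.

```python
def resync_temp_queue(temp_queue, blocks_by_hash, first_block):
    """Sketch of steps S123-S128: if the first main chain block from the new
    main chain node does not extend the newest queued entry, walk its
    ancestors backwards until a main chain block is found whose parent hash
    matches a queued entry; truncate the queue after that entry and update
    the queue from the found block."""
    index = {e["block_hash"]: i for i, e in enumerate(temp_queue)}
    block = first_block
    # step S124/S125: a matching parent extends the queue directly;
    # otherwise step S126: traverse backwards via parent hashes
    while block["parent_hash"] not in index:
        block = blocks_by_hash[block["parent_hash"]]
    i = index[block["parent_hash"]]
    del temp_queue[i + 1:]                     # step S127: drop stale entries
    temp_queue.append({"block_hash": block["block_hash"]})   # step S128
    return temp_queue
```

In the first application scenario, the stale entry for block (100) is removed and the queue is rebuilt from the second main chain block; the later blocks of the new branch are then re-applied as they are received.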
Fig. 5 is a flowchart of step S13 in a preferred embodiment of the method shown in fig. 1. As shown in fig. 5, in a preferred embodiment, step S13 includes:
s132: verifying whether the third parent hash of the read third temporary parallel chain block is the same as the block hash of the latest parallel chain block of the current parallel chain:
if yes, go to step S133: generating a parallel chain block according to the third temporary parallel chain block and executing;
otherwise, step S134 is executed: traversing the temporary parallel chain blocks in the temporary parallel chain block queue backwards until a fourth temporary parallel chain block is found; the parent hash of the fourth temporary parallel chain block is the same as the block hash of the first parallel chain block in the current parallel chain; and
s135: deleting all parallel chain blocks after the first parallel chain block; and
s136: generating a parallel chain block according to the fourth temporary parallel chain block and executing it.
Assuming a second application scenario, the second application scenario is as follows:
parallel chain node a reads a temporary parallel chain block of the temporary parallel chain block queue from the local database; when the third parent hash ParentHash(21_Flat) of the read third temporary parallel chain block is different from the block hash BlockHash(20_Flat) of the latest parallel chain block of the current parallel chain, if a parallel chain block is generated directly according to the third temporary parallel chain block, the generated parallel chain block will be erroneous;
the problem generated by the second application scenario can be solved through steps S132 to S136;
Assume that ParentHash(21_Flat) is the same as BlockHash(20_Flat):
in step S132, a verifies whether the third parent hash ParentHash(21_Flat) of the read third temporary parallel chain block is the same as the block hash BlockHash(20_Flat) of the newest parallel chain block of the current parallel chain:
since ParentHash(21_Flat) is the same as BlockHash(20_Flat), step S133 is performed: generating a parallel chain block according to the third temporary parallel chain block and executing it;
Assume that ParentHash(21_Flat) is different from BlockHash(20_Flat):
in step S132, a verifies whether the third parent hash ParentHash(21_Flat) of the read third temporary parallel chain block is the same as the block hash BlockHash(20_Flat) of the newest parallel chain block of the current parallel chain:
since ParentHash(21_Flat) is different from BlockHash(20_Flat), step S134 is performed: a traverses the temporary parallel chain blocks in the temporary parallel chain block queue backwards until the fourth temporary parallel chain block is found; the parent hash of the fourth temporary parallel chain block is the same as the block hash of the first parallel chain block in the current parallel chain; and
in step S135, a deletes all parallel chain blocks after the first parallel chain block; and
in step S136, a generates a parallel chain block from the fourth temporary parallel chain block and executes it.
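The fork handling of steps S132-S136 can be sketched as follows. The chain and queue are modelled as lists of dictionaries with hypothetical `block_hash`/`parent_hash` fields; execution of the generated block is omitted for brevity.

```python
def generate_from_temp(temp_block, chain, temp_queue):
    """Sketch of steps S132-S136: generate the next parallel chain block only
    if the read temporary block extends the chain tip; otherwise locate a
    temporary block whose parent is still on the chain, roll the chain back
    past the fork, and generate from that temporary block instead."""
    tip = chain[-1]
    if temp_block["parent_hash"] == tip["block_hash"]:        # S132 -> S133
        chain.append({"block_hash": temp_block["block_hash"]})
        return chain
    on_chain = {b["block_hash"]: i for i, b in enumerate(chain)}
    # S134: walk the queue from newest to oldest for a usable ancestor
    fourth = next(tb for tb in reversed(temp_queue)
                  if tb["parent_hash"] in on_chain)
    i = on_chain[fourth["parent_hash"]]
    del chain[i + 1:]                                         # S135: rollback
    chain.append({"block_hash": fourth["block_hash"]})        # S136
    return chain
```

This mirrors the second application scenario: the parallel chain is rolled back only as far as the fork point before the replacement block is generated and executed.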
Fig. 6 is a schematic structural diagram of an apparatus according to an embodiment of the present invention.
As shown in fig. 6, as another aspect, the present application also provides an apparatus 600 including one or more Central Processing Units (CPUs) 601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the apparatus 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read out therefrom is installed in the storage section 608 as necessary.
In particular, according to an embodiment of the present disclosure, the parallel chain block generation method described in any of the above embodiments may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing a parallel chain block generation method. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
As yet another aspect, the present application also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus of the above-described embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the parallel chain block generation method described herein.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software or hardware. The described units or modules may also be provided in a processor, for example, each of the described units may be a software program provided in a computer or a mobile intelligent device, or may be a separately configured hardware device. Wherein the designation of a unit or module does not in some way constitute a limitation of the unit or module itself.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the present application. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.