Disclosure of Invention
In view of the foregoing drawbacks and deficiencies of the prior art, it is desirable to provide a parallel chain block pushing method, device, and storage medium that save the bandwidth a parallel chain node consumes when synchronizing parallel chain blocks and shorten the time the synchronization takes.
In a first aspect, the present invention provides a parallel chain block pushing method suitable for parallel chain nodes, including:
the second parallel chain node sends first registration request information of a parallel chain data push service to the first parallel chain node, so that the first parallel chain node stores a push address of the second parallel chain node after receiving the first registration request information;
the second parallel chain node receives the first block operation information pushed by the first parallel chain node; the first block operation information comprises block information of a plurality of parallel chain blocks;
and the second parallel chain node updates local data according to the first block operation information.
In a second aspect, the present invention also provides an apparatus comprising one or more processors and a memory, wherein the memory contains instructions executable by the one or more processors to cause the one or more processors to perform the parallel chain block pushing method provided according to embodiments of the present invention.
In a third aspect, the present invention further provides a storage medium storing a computer program, where the computer program is configured to cause a computer to execute the parallel chain block pushing method provided in embodiments of the present invention.
In the parallel chain block pushing method, device, and storage medium provided in various embodiments of the present invention, a second parallel chain node sends first registration request information of a parallel chain data push service to a first parallel chain node, so that after receiving the first registration request information the first parallel chain node stores a push address of the second parallel chain node; the second parallel chain node receives first block operation information pushed by the first parallel chain node, the first block operation information comprising block information of a plurality of parallel chain blocks; and the second parallel chain node updates local data according to the first block operation information. In this way, a parallel chain node with fewer blocks receives a plurality of parallel chain blocks from a normally running parallel chain node, namely a parallel chain node with more blocks, and only then generates parallel chain blocks according to the current main chain-parallel chain mechanism, so that the bandwidth parallel chain nodes consume when synchronizing existing parallel chain blocks is saved and the time the synchronization takes is shortened.
In the parallel chain block pushing method, device, and storage medium provided by some embodiments of the present invention, further, by configuring an operation type in the first block operation information, the second parallel chain node updates local data according to the specific operation type, which improves the convenience with which the second parallel chain node updates data, further saves the bandwidth parallel chain nodes consume when synchronizing existing parallel chain blocks, and further reduces the time the synchronization takes.
In the parallel chain block pushing method, device, and storage medium provided by some embodiments of the present invention, further, by configuring a first operation sequence number in the first block operation information, the second parallel chain node operates on received parallel chain blocks according to the first operation sequence number, which improves the efficiency with which the second parallel chain node receives parallel chain blocks, further saves the bandwidth parallel chain nodes consume when synchronizing existing parallel chain blocks, and further reduces the time the synchronization takes.
In the parallel chain block pushing method, device, and storage medium provided by some embodiments of the present invention, the synchronized block height of the second parallel chain node is configured in the first block operation information, and the first parallel chain node is configured to generate and push the first block operation information when that synchronized block height is smaller than its current block height, which improves the efficiency with which the second parallel chain node receives parallel chain blocks, further saves the bandwidth parallel chain nodes consume when synchronizing existing parallel chain blocks, and further reduces the time the synchronization takes.
Some embodiments of the present invention further provide, in the parallel chain block pushing method, apparatus, and storage medium, a mechanism for verifying parallel chain blocks through a first main chain node: in response to the end of pushing, the second parallel chain node sends first verification request information to the first main chain node, so that the first main chain node verifies whether the parallel chain blocks pushed by the first parallel chain node are correct.
In the parallel chain block pushing method, apparatus, and storage medium of some embodiments of the present invention, the second parallel chain node further sends second verification request information to the first main chain node, so that the first main chain node generates and returns verification result information indicating whether a first block at a second block height specified by the first main chain node is the same as a second block at the same block height on the second main chain node corresponding to the first parallel chain node. The second parallel chain node receives the verification result information and judges whether the two blocks are the same: if so, the second parallel chain node sends the first registration request information of the parallel chain data push service to the first parallel chain node; if not, the second parallel chain node reassigns the first parallel chain node and returns to the step of sending the second verification request information to the first main chain node. This ensures that the parallel chain blocks the second parallel chain node receives are the ones it actually needs, which further saves the bandwidth parallel chain nodes consume when synchronizing existing parallel chain blocks and reduces the time the synchronization takes.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 is a flowchart of a parallel chain block pushing method according to an embodiment of the present invention. As shown in fig. 1, in the present embodiment, the present invention provides a parallel chain block pushing method, including:
s12: the second parallel chain node sends first registration request information of a parallel chain data push service to the first parallel chain node, so that the first parallel chain node stores a push address of the second parallel chain node after receiving the first registration request information;
s13: the second parallel chain node receives the first block operation information pushed by the first parallel chain node; the first block operation information comprises block information of a plurality of parallel chain blocks;
s14: and the second parallel chain node updates local data according to the first block operation information.
Specifically, assume that the second parallel chain node is b and the first parallel chain node is a, that the block information of the plurality of parallel chain blocks is configured as the block information of one parallel chain block, and that the first block operation information is block1;
in step S12, b sends the first registration request information of the parallel chain data push service to a, and a stores the push address of b after receiving it;
in step S13, b receives the pushed first block operation information block1;
in step S14, b updates local data according to block1.
After receiving the plurality of parallel chain blocks pushed by a, b generates subsequent parallel chain blocks using the current main chain-parallel chain mechanism; that is, b synchronizes the parallel chain transactions related to its own parallel chain from the main chain blocks of its corresponding main chain node B to generate parallel chain blocks.
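The registration-and-push flow of steps S12 to S14 can be sketched as follows. This is a minimal illustrative model; the class and method names (ParachainNode, register_push, push_blocks) are assumptions, not identifiers from any actual implementation.

```python
# Minimal illustrative sketch of steps S12-S14; all names are assumptions.

class ParachainNode:
    def __init__(self, name):
        self.name = name
        self.push_addresses = []   # push addresses registered with this node
        self.blocks = []           # locally stored parallel chain blocks

    def register_push(self, address):
        """S12: store the push address carried in the registration request."""
        self.push_addresses.append(address)

    def push_blocks(self, peer, block_infos):
        """Push first block operation information (several blocks) to a peer."""
        peer.receive_blocks(block_infos)

    def receive_blocks(self, block_infos):
        """S13/S14: receive the pushed block information and update local data."""
        self.blocks.extend(block_infos)

# Usage mirroring the example: node b registers with node a, a pushes block1.
a = ParachainNode("a")
b = ParachainNode("b")
a.register_push("addr-of-b")   # S12: a stores b's push address
a.push_blocks(b, ["block1"])   # S13: a pushes block1 to b
# S14: b has updated local data; b.blocks is now ["block1"]
```

The point of the sketch is that b obtains block1 directly from a rather than reconstructing it by traversing main chain blocks.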
In further embodiments, the parallel chain block pushing method provided by the present invention is not limited to the above example: according to actual requirements, the block information of the several blocks may be configured with a different number of blocks, for example the block information of 2 blocks or of 5 blocks, or even with a number that changes at intervals, for example the block information of 5 blocks for the first 100 blocks and the block information of 1 block for all blocks after the first 100; the same technical effect can be achieved.
In the current main chain-parallel chain mechanism, the nodes of a parallel chain need to acquire the parallel chain transactions of their own parallel chain by traversing every main chain block; in fact, not every main chain block contains a parallel chain transaction of that parallel chain. With the method of this embodiment, a parallel chain node with fewer blocks receives a plurality of parallel chain blocks from a normally running parallel chain node, namely a parallel chain node with more blocks, and only then generates parallel chain blocks using the current main chain-parallel chain mechanism, so that the bandwidth parallel chain nodes consume when synchronizing existing parallel chain blocks is saved and the time the synchronization takes is reduced.
In a preferred embodiment, the first block operation information further includes an operation type, and the operation type is an add block or a rollback block.
Specifically, the first block operation information comprises block information and operation types of a plurality of blocks, and the first block operation information is assumed to be (block1: add; block2: add; block2: delete; block2': add);
b receives the first block operation information pushed by a;
b judges whether the operation type of each parallel chain block in the first block operation information is add block:
for block1: add: since the operation type is add block, the data of block1 is generated and stored in the local database;
for block2: add: since the operation type is add block, the data of block2 is generated and stored in the local database;
for block2: delete: since the operation type is delete block, the data of block2 is deleted from the local database;
for block2': add: since the operation type is add block, the data of block2' is generated and stored in the local database;
the final blocks on b are block1 and block2'.
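The add/delete handling in the worked example above can be sketched as follows; the (block_id, op_type) tuple format and the function name apply_operations are illustrative assumptions rather than details from the source.

```python
# Hedged sketch of applying the operation types from the worked example;
# names and data shapes are assumptions for illustration only.

def apply_operations(ops):
    """Apply (block_id, op_type) pairs in order; return the surviving blocks."""
    local_db = []
    for block_id, op_type in ops:
        if op_type == "add":
            local_db.append(block_id)   # add block: store its data locally
        elif op_type == "delete":
            local_db.remove(block_id)   # delete block: remove its data locally
    return local_db

# The sequence from the text: (block1:add; block2:add; block2:delete; block2':add)
ops = [("block1", "add"), ("block2", "add"),
       ("block2", "delete"), ("block2'", "add")]
# apply_operations(ops) leaves block1 and block2' in the local database
```

Applying the four operations in order reproduces the final state described in the text: block1 and block2' remain.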
In further embodiments, the first block operation information may not include an operation type: assuming a certain blockchain is configured as a non-rollback blockchain, the pushed first block operation information need not be configured with an operation type (i.e., the only operation type is add block), and the same technical effect can be achieved.
FIG. 2 is a flow diagram of a preferred embodiment of the method shown in FIG. 1. As shown in fig. 2, in a preferred embodiment, the first block operation information further includes a first operation sequence number of the second parallel chain node, and the method further includes, after the second parallel chain node updates the local data according to the first block operation information:
s15: and the second parallel chain node returns first confirmation information of the first block operation information to the first parallel chain node, so that the first parallel chain node updates the first operation sequence number according to the first confirmation information after receiving it.
Specifically, it is assumed that the first block operation information includes the block information, operation types, and first operation sequence numbers of several blocks, namely (block1: add: 1; block2: add: 2; block2: delete: 3; block2': add: 4);
in step S131, b receives the first block operation information pushed by the node a (block1: add: 1; block2: add: 2; block2: delete: 3; block2': add: 4);
in step S14, b updates the data of the local relational database according to (block1: add: 1; block2: add: 2; block2: delete: 3; block2': add: 4); the execution principle of step S14 can be found in the above embodiments, and is not described herein again;
in step S15, b returns the first confirmation information of the first block operation information to a;
for block1: add:1, after a receives the first confirmation information returned by b, the first operation sequence number of b is updated according to the first confirmation information (from 0 to 1);
for block2: add:2, after a receives the first confirmation information returned by b, the first operation sequence number of b is updated according to the first confirmation information (from 1 to 2);
for block2: delete:3, after a receives the first confirmation information returned by b, the first operation sequence number of b is updated according to the first confirmation information (from 2 to 3);
for block2': add:4, after a receives the first confirmation information returned by b, the first operation sequence number of b is updated according to the first confirmation information (from 3 to 4).
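The acknowledgement flow of step S15 can be sketched as follows: b confirms each operation, and a advances the first operation sequence number it stores for b only on the expected next confirmation. The class and field names here are assumptions for illustration.

```python
# Illustrative sketch (names assumed) of step S15's acknowledgement flow.

class Pusher:
    def __init__(self):
        self.peer_seq = 0   # first operation sequence number a stores for b

    def on_ack(self, acked_seq):
        """Update the stored sequence number from b's first confirmation,
        advancing only when it is the expected next sequence number."""
        if acked_seq == self.peer_seq + 1:
            self.peer_seq = acked_seq

a = Pusher()
for seq in (1, 2, 3, 4):   # acks for block1:add:1 ... block2':add:4
    a.on_ack(seq)
# a.peer_seq has advanced 0 -> 1 -> 2 -> 3 -> 4
```

Tracking the sequence number on a's side lets a know exactly which operations b has already applied, matching the four updates described above.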
In further embodiments, the first block operation information may not include an operation type: assuming a certain blockchain is configured as a non-rollback blockchain, the pushed first block operation information need not be configured with an operation type (the only operation type being add block), and the first block operation information then includes the block information of several blocks and the first operation sequence numbers, for example (block5: 5; block6: 6); the same technical effect can be achieved.
According to the embodiment, the second parallel chain node operates the received parallel chain block according to the first operation sequence number, so that the efficiency of the second parallel chain node in receiving the parallel chain block is improved, the bandwidth of the parallel chain node in synchronizing the existing parallel chain block is further saved, and the time consumption for synchronizing the existing parallel chain block by the parallel chain node is reduced.
FIG. 3 is a flow diagram of another preferred embodiment of the method shown in FIG. 1. As shown in fig. 3, in a preferred embodiment, the first block operation information further includes a synchronized block height of the second parallel chain node, and the method further includes, after the second parallel chain node updates the local data according to the first block operation information:
s16: and the second parallel chain node returns second confirmation information of the first block operation information to the first parallel chain node, so that the first parallel chain node updates the synchronized block height according to the second confirmation information after receiving it.
Specifically, it is assumed that the first block operation information includes the block information of several blocks and the synchronized block height of the second parallel chain node, that the current block height of the first parallel chain node a is 50, and that the synchronized block height of the second parallel chain node b is 49;
a judges whether the synchronized block height of the second parallel chain node is smaller than its current block height; since the synchronized block height 49 is smaller than the current block height 50, a generates and pushes the first block operation information (block50:49).
In step S132, b receives the pushed first block operation information of a (block50: 49);
in step S14, b updates local data according to (block50: 49);
in step S16, b returns (block50:49) the second confirmation information to a.
after a receives the second acknowledgement, the synchronized block height of b is updated according to the second acknowledgement (from 49 to 50).
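The synchronized-block-height condition described above can be sketched as follows; the function name and the (block height, synchronized height) pair format are assumptions for illustration only.

```python
# Sketch of the height condition: a generates first block operation
# information only while b's synchronized block height lags behind a's
# current block height. Names and pair format are assumptions.

def blocks_to_push(current_height, synced_height):
    """Return (block height, synchronized height) pairs for every block
    b has not yet been pushed; empty when b has caught up."""
    if synced_height >= current_height:
        return []   # nothing to push
    return [(h, h - 1) for h in range(synced_height + 1, current_height + 1)]

# With current block height 50 and synchronized block height 49, a pushes
# a single item corresponding to (block50:49):
# blocks_to_push(50, 49) -> [(50, 49)]
```

Because a tracks the synchronized height itself, b never needs to query locally whether a pushed block was already received.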
In further embodiments, in addition to the block information of several blocks, the first block operation information may be configured to further include any one or more of the operation type, the first operation sequence number, and the synchronized block height; the same technical effect can be achieved.
In the foregoing embodiment, when the synchronized block height of the second parallel chain node is smaller than the current block height of the first parallel chain node, the first parallel chain node pushes to the second parallel chain node the parallel chain blocks that have not yet been pushed to it; after the second parallel chain node receives a pushed parallel chain block, it does not need to query locally whether the block has already been received. This improves the efficiency with which the second parallel chain node receives parallel chain blocks, further saves the bandwidth parallel chain nodes consume when synchronizing existing parallel chain blocks, and reduces the time the synchronization takes.
FIG. 4 is a flow chart of a preferred embodiment of the method shown in FIG. 3. In a preferred embodiment, as shown in fig. 4, the first parallel chain node stores a first block height when receiving the first registration request information, and the method further comprises:
s17: the second parallel chain node receives push completion information returned by the first parallel chain node; the push completion information is generated by the first parallel chain node when the synchronized block height is equal to the first block height.
Specifically, assume that the first parallel chain node a stores a first block height of 500 when receiving the first registration request information; after a receives the second confirmation information sent by the second parallel chain node b, the synchronized block height of b is updated according to it (from 499 to 500);
in step S17, since the synchronized block height of b as updated by a is 500 and the first block height stored by a is 500, the synchronized block height equals the first block height, so a generates the push completion information and returns it to b; b receives the push completion information returned by a.
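The completion check of step S17 reduces to a comparison between the first block height recorded at registration time and the synchronized block height; the function name below is an assumption for illustration.

```python
# Sketch of step S17's completion condition: a records the first block
# height when b registers and reports completion once b's synchronized
# block height catches up. The name push_complete is assumed.

def push_complete(first_block_height, synced_height):
    """Return True when the synchronized block height equals the first
    block height stored at registration time."""
    return synced_height == first_block_height

# After b acknowledges block500, its synchronized height becomes 500,
# equal to the stored first block height 500, so pushing is complete:
# push_complete(500, 500) -> True; push_complete(500, 499) -> False
```

Once this condition holds, a stops pushing and b switches to generating further blocks through the main chain-parallel chain mechanism.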
FIG. 5 is a flow chart of a preferred embodiment of the method shown in FIG. 4. As shown in fig. 5, in a preferred embodiment, the method further includes:
s18: in response to the end of pushing, the second parallel chain node sends first verification request information to the first main chain node, so that the first main chain node verifies whether the parallel chain blocks pushed by the first parallel chain node are correct; the first verification request information includes the block hash of the latest parallel chain block pushed by the first parallel chain node.
Specifically, the first main chain node corresponding to the second parallel chain node b is B; assume the latest parallel chain block is block500, so the first verification request information includes its block hash, denoted hash(500);
in step S18, in response to the end of pushing, b sends the first verification request information, which includes hash(500), to B, and B verifies whether the parallel chain blocks pushed by a are correct.
The consensus transaction sent by a parallel chain authorization node to a main chain node includes the block hash of a parallel chain block, and the main chain node records that block hash into a main chain block after executing the consensus transaction; the hash so recorded on the main chain node is denoted hash(500)'.
After receiving hash(500), B searches locally for a recorded hash(500)' identical to hash(500); if one exists, the verification passes; if not, the verification fails; B returns the verification result to b;
when the verification passes, b adopts the current main chain-parallel chain mechanism and, starting from the main chain block after the one in which hash(500)' is recorded on B, generates parallel chain blocks by synchronizing the parallel chain transactions in the main chain blocks;
when the verification fails, b sends to B first verification request information including hash(499), and B again verifies whether the parallel chain blocks pushed by a are correct; if the verification still fails, b sends first verification request information including hash(498), and so on in a loop until the verification passes;
assuming the verification passes when b sends to B first verification request information including hash(490), b adopts the current main chain-parallel chain mechanism and, starting from the main chain block after the one in which hash(490)' is recorded on B, generates parallel chain blocks by synchronizing the parallel chain transactions in the main chain blocks.
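The verification loop of step S18 can be sketched as follows: b walks backwards from the newest pushed block hash until B recognizes one. The main chain node's recorded hashes are modeled here as a simple set, and the helper name verify_latest is an assumption for illustration.

```python
# Minimal sketch of the step S18 verification loop; names assumed.

def verify_latest(mainchain_hashes, pushed_hashes):
    """Walk backwards from the newest pushed block hash until the main
    chain node recognizes one; return that hash, or None if none match."""
    for h in reversed(pushed_hashes):
        if h in mainchain_hashes:   # B recorded this hash: verification passes
            return h
    return None

# B recorded hashes up to block490; a pushed blocks up to block500, so the
# loop fails for hash(500)...hash(491) and passes at hash(490):
recorded = {f"hash({i})" for i in range(1, 491)}
pushed = [f"hash({i})" for i in range(488, 501)]
# verify_latest(recorded, pushed) -> "hash(490)"
```

The returned hash marks the last pushed block the main chain recognizes; b then resumes block generation from the main chain block following the one that recorded it.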
In further embodiments, the verification request information may also be configured to include another parameter of the latest parallel chain block pushed by the first parallel chain node, such as its Merkle root; as long as the parameter can identify the latest pushed parallel chain block, the same technical effect can be achieved.
In a further embodiment, the verification request information may also be configured to include the block hash of any one parallel chain block pushed by the first parallel chain node, and the same technical effect can be achieved.
The above embodiments provide a verification mechanism for parallel chain blocks.
Fig. 6 is a flowchart of step S12 in a preferred embodiment of the method shown in fig. 1.
As shown in fig. 6, in a preferred embodiment, step S12 includes:
s122: the second parallel chain node sends second verification request information to the first main chain node, so that the first main chain node generates and returns verification result information indicating whether a first block at a second block height specified by the first main chain node is the same as a second block at the same block height on the second main chain node corresponding to the first parallel chain node;
s123: the second parallel chain node receives the verification result information and judges whether the two blocks are the same:
if yes, step S124 is executed: the second parallel chain node sends first registration request information of the parallel chain data push service to the first parallel chain node;
otherwise, step S125 is executed: the second parallel chain node reassigns the first parallel chain node and returns to the step in which the second parallel chain node sends the second verification request information to the first main chain node.
Assume the first main chain node corresponding to the second parallel chain node b is B, and the second main chain node corresponding to the first parallel chain node a is A;
assume the following application scenario: B and A are main chain nodes on two different branches of a fork. Under the existing mechanism, when b synchronizes the parallel chain transactions related to its own parallel chain from the main chain blocks of its corresponding main chain node B to generate parallel chain blocks, it must first roll back a number of parallel chain blocks until the block hash of some rolled-back parallel chain block matches a block hash recorded in B, and only then synchronize the parallel chain transactions from B's main chain blocks to generate parallel chain blocks; that mechanism wastes the bandwidth b spends synchronizing existing parallel chain blocks. The method provided by this embodiment overcomes this problem;
specifically, assume the second block height is 450, the first block at the second block height specified by the first main chain node is block450_B, and the second block at the same block height on the second main chain node corresponding to the first parallel chain node is block450_A;
in step S122, b sends the second verification request information to B, and B generates and returns verification result information indicating whether the first block block450_B is the same as the second block block450_A at block height 450 on A;
in step S123, b receives the verification result information and judges whether the two blocks are the same:
if yes, step S124 is executed: b sends the first registration request information of the parallel chain data push service to a;
otherwise, step S125 is executed: b reassigns the first parallel chain node (e.g., assigns parallel chain node c as the first parallel chain node) and returns to step S122.
After b receives the parallel chain blocks pushed by the reassigned first parallel chain node, whose corresponding main chain node has not forked from B, b can synchronize the parallel chain transactions related to its own parallel chain from the main chain blocks of its corresponding main chain node B to generate parallel chain blocks.
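The fork check of steps S122 to S125 can be sketched as follows: before registering, b compares a block at a chosen height on its own main chain node B with the block at the same height on each candidate's corresponding main chain node, reassigning until one matches. All names and data shapes here are assumptions for illustration.

```python
# Hedged sketch of the steps S122-S125 fork check; names assumed.

def choose_pusher(candidates, my_block_at_height, height):
    """Return the first candidate whose main chain block at `height`
    matches ours (step S124); otherwise keep reassigning (step S125)."""
    for name, peer_blocks in candidates:
        if peer_blocks.get(height) == my_block_at_height:
            return name
    return None   # no candidate on the same branch was found

# b's main chain node B holds block450_B at height 450; candidate a sits on
# a forked main chain node A, while candidate c's main chain node matches:
candidates = [("a", {450: "block450_A"}), ("c", {450: "block450_B"})]
# choose_pusher(candidates, "block450_B", 450) -> "c"
```

Selecting a pusher on the same branch up front is what spares b the rollback-and-resynchronize cost described in the fork scenario above.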
In further embodiments, the first block at the specified second block height may, according to actual requirements, be configured as the latest parallel chain block received, and the same technical effect can be achieved.
The embodiment enables the parallel chain block received by the second parallel chain node to be the parallel chain block required to be received, further saves the bandwidth of the parallel chain node for synchronizing the existing parallel chain block, and reduces the time consumption for the parallel chain node to synchronize the existing parallel chain block.
FIG. 7 is a flow diagram of another preferred embodiment of the method shown in FIG. 1. As shown in fig. 7, in a preferred embodiment, the method further includes, after the second parallel chain node updates the local data according to the first block operation information:
s19: and the second parallel chain node sends second registration request information for stopping the parallel chain data push service to the first parallel chain node, so that the first parallel chain node deletes the push address of the second parallel chain node after receiving the second registration request information.
Fig. 8 is a schematic structural diagram of an apparatus according to an embodiment of the present invention.
As shown in fig. 8, as another aspect, the present application also provides an apparatus 800 including one or more central processing units (CPUs) 801 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the apparatus 800 are also stored. The CPU 801, the ROM 802, and the RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as necessary, so that a computer program read out therefrom is installed into the storage section 808 as necessary.
In particular, according to an embodiment of the present disclosure, the parallel chain block pushing method described in any of the above embodiments may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing a parallel chain block pushing method. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811.
As yet another aspect, the present application also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus of the above-described embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the parallel-link block pushing method described herein.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software or hardware. The described units or modules may also be provided in a processor, for example, each of the described units may be a software program provided in a computer or a mobile intelligent device, or may be a separately configured hardware device. Wherein the designation of a unit or module does not in some way constitute a limitation of the unit or module itself.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the present application. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.