CN110795227B - Data processing method of blockchain and related equipment - Google Patents

Data processing method of blockchain and related equipment

Info

Publication number
CN110795227B
CN110795227B
Authority
CN
China
Prior art keywords
shared
calculation
consensus
input parameters
block
Prior art date
Legal status
Active
Application number
CN201810878178.4A
Other languages
Chinese (zh)
Other versions
CN110795227A (en
Inventor
刘陆陆
李肃刚
Current Assignee
Beijing Tiannengbo Information Technology Co ltd
Original Assignee
Beijing Tiannengbo Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Tiannengbo Information Technology Co ltd filed Critical Beijing Tiannengbo Information Technology Co ltd
Priority to CN201810878178.4A priority Critical patent/CN110795227B/en
Publication of CN110795227A publication Critical patent/CN110795227A/en
Application granted granted Critical
Publication of CN110795227B publication Critical patent/CN110795227B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/76Architectures of general purpose stored program computers
    • G06F15/80Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiment of the invention provides a task processing method for a blockchain and related equipment. The method is applied to a task processing device of the blockchain, which comprises a consensus server and a consensus calculator; the consensus calculator comprises a shared computation kernel and n other computation kernels. The consensus server acquires input parameters from a task issuing device of the blockchain and sends them to the consensus calculator. The consensus calculator determines a group of shared input parameters and a group of non-shared input parameters according to a specific algorithm, determines a shared result of the shared input parameters through the shared computation kernel, determines n groups of computation data from the shared result and the non-shared input parameters, and correspondingly assigns the n groups of computation data to the n other computation kernels. Each other computation kernel of the consensus calculator then performs its data computation on the basis of the shared result. Because no computation kernel in the consensus calculator repeats the shared computation, the design cost and power consumption of the whole machine are reduced.

Description

Data processing method of blockchain and related equipment
Technical Field
The present invention relates to the field of blockchains, and in particular, to a data processing method and related device for a blockchain.
Background
In recent years, blockchains have seen increasingly wide use. A task issuing device that issues blockchain tasks is called a "mining pool", and a task processing device that processes those tasks is called a "mining machine". With the continued development of blockchain technology, the computation tasks in a blockchain have grown exponentially, and a consensus calculator is responsible for these computation tasks.
In the prior art, a consensus calculator has x computing chips (x a natural number), each chip has y computing cores (y a natural number), and every core performs the same computing algorithm, blake256_R14; the cores differ only in the value of one input parameter. That is, the whole consensus calculator has n = x·y computation kernels simultaneously performing the same algorithm computation, and the only difference between the n simultaneous computations is a single input parameter: of a total of 20 input parameters, the 12th differs from core to core, while the other 19 parameters are completely identical.
The inventor finds that the whole consensus calculator has n computation cores running identical algorithms, and among the 20 input parameters only the 12th differs: each core uses a pairwise-distinct value for the 12th parameter, while the other 19 parameters are identical across cores. Much of the n cores' workload is therefore repeated, since part of the work computes on the same values, which increases the design cost and power consumption of the whole machine.
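To illustrate the redundancy described above, here is a hypothetical sketch (not the patent's algorithm) in which n cores each run the same computation on 20 parameters, of which only the 12th differs per core:

```python
# Hypothetical illustration: n cores run the same computation on 20 input
# parameters, of which only the 12th differs per core, so 19/20 of each
# core's input is identical to every other core's.
def core_compute(params):
    # stand-in for the real blake256_R14 computation
    return sum(p * (i + 1) for i, p in enumerate(params)) & 0xFFFFFFFF

shared_19 = list(range(100, 119))   # the 19 parameters identical on every core
n = 4
results = []
for core_id in range(n):
    params = shared_19[:11] + [core_id] + shared_19[11:]  # 12th parameter differs
    results.append(core_compute(params))

# every core repeated the work on the 19 shared parameters
assert len(results) == n and len(set(results)) == n
```

Each core's result differs, but the bulk of every core's input is shared, which is the redundancy the invention removes.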
Disclosure of Invention
The embodiment of the invention provides a data processing method of a blockchain and related equipment, aiming to solve the problems of high design cost and high overall power consumption caused by repeated computation in a consensus calculator.
According to an aspect of the present invention, there is provided a task processing method for a blockchain, applied to a task processing device of a blockchain, where the task processing device includes a consensus server and a consensus calculator, the consensus calculator includes a shared computation kernel and n other computation kernels, n is a natural number, and the method includes:
the consensus server acquires input parameters from the task issuing equipment of the block chain;
the consensus server sends the input parameters to a consensus calculator;
the consensus calculator determines shared input parameters and non-shared input parameters;
the consensus calculator sets the shared input parameters to the shared computation kernel;
the consensus calculator determines a shared result of the shared input parameters through the shared computation kernel;
the consensus calculator shares the shared result with the other computation kernels of the consensus calculator;
the consensus calculator determines n groups of computation data according to the non-shared input parameters and the shared result;
and the n other computation kernels of the consensus calculator respectively perform data computation according to the n groups of computation data.
According to another aspect of the present invention, there is provided a data processing device of a blockchain, applied in a task processing device of a blockchain, where the task processing device includes a consensus server and a consensus calculator, the consensus calculator includes a shared computation kernel and n other computation kernels, and n is a natural number;
the consensus server is configured to:
acquiring input parameters from a task issuing device of the blockchain;
sending the input parameters to a consensus calculator;
the consensus calculator is to:
determining shared input parameters and non-shared input parameters;
setting the shared input parameters to the shared computation kernel;
determining a shared result of the shared input parameters through the shared computation kernel;
sharing the shared result with the other computation kernels of the consensus calculator;
determining n groups of computation data according to the non-shared input parameters and the shared result;
and performing data computation by the n other computation kernels according to the n groups of computation data respectively.
According to another aspect of the present invention, an electronic device is provided, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the data processing method of the blockchain.
According to another aspect of the present invention, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for data processing of a blockchain.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, the task processing device comprises a consensus server and a consensus calculator, where the consensus calculator comprises a shared computation kernel and n other computation kernels. The consensus server acquires input parameters from the task issuing device of the blockchain and sends them to the consensus calculator; the consensus calculator determines shared input parameters and non-shared input parameters, sets the shared input parameters to the shared computation kernel, determines a shared result of the shared input parameters through the shared computation kernel, shares the shared result with the other computation kernels, determines n groups of computation data according to the non-shared input parameters and the shared result, and has the n other computation kernels perform data computation according to the n groups of computation data respectively. By providing a shared computation kernel that computes the shared result of the shared input parameters, the other computation kernels need not compute the shared input parameters and only compute their respective non-shared input parameters on the basis of the shared result, so no computation kernel in the consensus calculator performs repeated computation, which reduces the design cost and power consumption of the whole machine.
Drawings
Fig. 1 is a flowchart illustrating steps of a method for processing data in a blockchain according to an embodiment of the present invention;
FIG. 2 is a flow chart of steps of another method of data processing for a blockchain in accordance with one embodiment of the present invention;
FIG. 3 is a block diagram of a data processing apparatus for a blockchain according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device in an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to fig. 1, a flowchart of the steps of a data processing method of a blockchain according to an embodiment of the present invention is shown. The method is applied to a task processing device of a blockchain, where the task processing device includes a consensus server and a consensus calculator, the consensus calculator includes a shared computation kernel and n other computation kernels, and n is a natural number. The method may specifically include the following steps:
step 101, the consensus server obtains input parameters from the task issuing equipment of the blockchain.
In the embodiment of the invention, the consensus server of the task processing device is responsible for communicating with the task issuing device.
The consensus server of the task processing device registers as a legal identity in the task issuing device, so that the input parameters can be obtained from the task issuing device.
It should be noted that there are many protocols between the task processing device and the task issuing device, for example, stratum, GBT (getblocktemplate), and getwork with the rolltime extension, which are not limited in this embodiment of the present invention.
And 102, the consensus server sends the input parameters to a consensus calculator.
In the embodiment of the invention, the consensus calculator is responsible for processing the calculation task.
The consensus server can send the consensus task to the consensus calculator through the serial port.
In step 103, the consensus calculator determines the shared input parameter and the non-shared input parameter.
In an embodiment of the invention, the consensus calculator comprises a shared computation kernel and n other computation kernels. The shared computation kernel computes a shared result from the shared input parameters and shares it with the n other computation kernels, so that those kernels need not repeat the computation of the shared input parameters; the n other kernels then compute using the shared result and the non-shared input parameters.
In a specific application, among the input parameters, the consensus calculator may take the parameters whose values are consistent across the n computation kernels as shared input parameters, and the parameters whose values are inconsistent as non-shared input parameters.
In the embodiment of the invention, the values of some input parameters are identical in every computation kernel, so the results computed from them are identical as well; if every kernel computed these parameters itself, a large amount of repeated computation would result, causing significant power consumption and resource waste.
Therefore, the input parameters that yield identical computation results can be determined as the shared input parameters, the result computed from the shared input parameters is taken as the shared result, and the n other computation kernels can use the shared result without repeating the computation.
And 104, setting the shared input parameters to the shared computing kernel by the consensus calculator.
In the embodiment of the invention, after the consensus calculator determines the shared input parameters, it can assign them to the shared computation kernel, which computes the shared input parameters first.
And 105, determining a sharing result of the shared input parameters by the consensus calculator through the shared calculation kernel.
In the embodiment of the invention, the kernels of the consensus calculator are preset with the algorithm model, and the shared result of the shared input parameters is obtained by computing through this algorithm model.
And 106, sharing the shared result to the other computing cores of the consensus calculator by the consensus calculator.
In the implementation of the invention, the consensus calculator can share the shared result with the other computation kernels, so that they obtain it without computing it themselves, reducing their computation load.
And 107, determining n groups of calculation data according to the unshared input parameters and the shared result by the consensus calculator.
In the embodiment of the present invention, the non-shared input parameters may include multiple groups, for example n groups, and each of the other computation kernels needs one group for its computation. The consensus calculator can combine the shared result with one group of non-shared input parameters to form the computation data for one of the other kernels; n groups of computation data are thus determined from the non-shared input parameters and the shared result, so that each of the other computation kernels can be assigned one group.
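A minimal sketch of this grouping step, with placeholder data structures (the patent does not specify a concrete representation):

```python
# The single shared result is paired with each of the n groups of non-shared
# input parameters to form n groups of compute data, one per other kernel.
n = 4
shared_result = {"V": list(range(16))}        # placeholder output of the shared kernel
unshared = [{"M3": m3} for m3 in range(n)]    # only M3 differs per kernel (illustrative)
compute_data = [{**shared_result, **u} for u in unshared]

assert len(compute_data) == n
assert all(group["V"] == shared_result["V"] for group in compute_data)
```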
And step 108, respectively carrying out data calculation by the n other calculation kernels of the consensus calculator according to the n groups of calculation data.
In the embodiment of the invention, each other computing kernel can independently compute a group of computing data, and because the computing data is optimized on the basis of a shared result, the computing kernels do not perform repeated computation.
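The eight steps above can be sketched end to end as follows (function and variable names are illustrative, not the patent's API):

```python
# End-to-end sketch of steps 101-108: the shared kernel computes once and the
# n other kernels reuse that shared result with their own non-shared parameter.
def shared_kernel(shared_params):
    return sum(shared_params) & 0xFFFFFFFF        # stand-in shared computation

def other_kernel(shared_result, unshared_param):
    return (shared_result + unshared_param) & 0xFFFFFFFF

def consensus_calculator(input_params, n):
    shared_params = input_params[:-n]             # parameters identical per kernel
    unshared_params = input_params[-n:]           # one distinct parameter per kernel
    shared_result = shared_kernel(shared_params)  # computed exactly once
    groups = [(shared_result, u) for u in unshared_params]  # n groups of data
    return [other_kernel(s, u) for s, u in groups]

out = consensus_calculator(list(range(19)) + [7, 8, 9], n=3)
assert out == [178, 179, 180]                     # sum(range(19)) = 171
```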
To sum up, in the embodiment of the present invention, the task processing device includes a consensus server and a consensus calculator, where the consensus calculator includes a shared computation kernel and n other computation kernels. The consensus server obtains input parameters from the task issuing device of the blockchain and sends them to the consensus calculator; the consensus calculator determines shared input parameters and non-shared input parameters, sets the shared input parameters to the shared computation kernel, determines a shared result of the shared input parameters through the shared computation kernel, shares the shared result with the other computation kernels, determines n groups of computation data according to the non-shared input parameters and the shared result, and has the n other computation kernels perform data computation according to the n groups of computation data respectively. By providing a shared computation kernel that computes the shared result of the shared input parameters, the other computation kernels need not compute the shared input parameters and only compute with the shared result and their respective non-shared input parameters, so no computation kernel in the consensus calculator performs repeated computation, which reduces the design cost and power consumption of the whole machine.
Referring to fig. 2, a flowchart illustrating the specific steps of another data processing method for a blockchain according to an embodiment of the present invention is shown. The method is applied to a task processing device of a blockchain, where the task processing device includes a consensus server and a consensus calculator, the consensus calculator includes a shared computation kernel and n other computation kernels, and n is a natural number. The method specifically includes the following steps:
step 201, the consensus server obtains input parameters from the task issuing device of the blockchain.
Step 202, the consensus server sends the input parameters to a consensus calculator, wherein the core algorithm of the consensus calculator is blake256R14; the shared computation kernel and each of the other computation kernels collectively complete the blake256R14 algorithm.
In the embodiment of the present invention, the core algorithm of the consensus calculator is blake256R14, which may also be referred to as the blake256 fourteen-round algorithm. The G_BLOCK computation is performed 16 times in each round of the algorithm.
Consider that in conventional data computation, the core algorithm requires 14 × 16 = 224 G_BLOCK hardware units. The resources required for each G_BLOCK unit are: one adder for 3 input parameters, one adder for 2 input parameters, four 32-bit registers for storing intermediate results, two 32-bit two-variable XOR processors, and one XOR processor for a 32-bit variable and a constant. The embodiment of the invention can therefore reduce, by the following steps, the number of first-round G_BLOCK computations performed by the n computation kernels, saving hardware resources and thereby reducing cost and power consumption.
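The operation count quoted above can be checked directly:

```python
# The blake256R14 core performs 14 rounds of 16 G_BLOCK computations each,
# giving the 224 G_BLOCK hardware units quoted above.
rounds, g_blocks_per_round = 14, 16
total_g_blocks = rounds * g_blocks_per_round
assert total_g_blocks == 224
```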
In step 203, the consensus calculator determines shared input parameters and non-shared input parameters.
In the embodiment of the invention, across the whole blake256R14 algorithm, the consensus server sends 20 input parameters to the consensus calculator, of which only one differs between kernels while the other 19 are the same. From the 20 input parameters of blake256R14 and the constant data table defined in the blake256R14 algorithm white paper, the consensus calculator can derive 32 values: the 16 values M0 to MF and the 16 values V0 to VF. Among the n other computation kernels, only the value of M3 differs; all the other values are completely consistent.
As a preferred scheme of the embodiment of the present invention, the blake256R14 algorithm consists of 14 rounds of blake256 operations, each round performing 16 G_BLOCK computations. Each G_BLOCK computation takes 5 input parameters: Mj, a, b, c, d, and the algorithm model of G_BLOCK is denoted Gj(Mj, a, b, c, d), where j is an integer from 0 to 15.
Mj corresponds to the 16 values M0 to MF, of which only M3 differs among the n other computation kernels; a, b, c and d correspond to the 16 values V0 to VF.
In a specific application, in the consensus calculator, the 16 G_BLOCK algorithm models are:
G0(M0, V0, V4, V8, VC);
G1(M1, V0, V4, V8, VC);
G2(M2, V1, V5, V9, VD);
G3(M3, V1, V5, V9, VD);
G4(M4, V2, V6, VA, VE);
G5(M5, V2, V6, VA, VE);
G6(M6, V3, V7, VB, VF);
G7(M7, V3, V7, VB, VF);
G8(M8, V0, V5, VA, VF);
G9(M9, V0, V5, VA, VF);
GA(MA, V1, V6, VB, VC);
GB(MB, V1, V6, VB, VC);
GC(MC, V2, V7, V8, VD);
GD(MD, V2, V7, V8, VD);
GE(ME, V3, V4, V9, VE);
GF(MF, V3, V4, V9, VE).
The content in the parentheses comprises the input parameters determined by the consensus calculator, including both shared input parameters and non-shared input parameters.
In every two G_BLOCK assignment operations, new calculation data a, b, c and d are obtained through the following calculation model.
G2i(M2i, a, b, c, d):
a ← a + b + (M(2i) XOR C(2i+1));
where C is a constant defined in the blake256R14 algorithm white paper, and the formula indicates that in the current assignment a becomes the sum of three values: a, b, and M(2i) XOR C(2i+1).
d ← (d XOR a) >>> 16;
the formula indicates that in the current assignment d becomes d XOR a, rotated right by 16 bits.
c ← c + d;
the formula indicates that in the current assignment c becomes the sum of c and d.
b ← (b XOR c) >>> 12;
the formula indicates that in the current assignment b becomes b XOR c, rotated right by 12 bits.
G2i+1(M2i+1, a, b, c, d):
a ← a + b + (M(2i+1) XOR C(2i));
d ← (d XOR a) >>> 8;
c ← c + d;
b ← (b XOR c) >>> 7.
These four formulas follow the same principle as the model above (with rotation amounts of 8 and 7 bits, per the standard blake256 definition) and are not described in detail here.
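Under the assumption that the rotation amounts follow the standard blake256 definition (16 and 12 bits for G2i, 8 and 7 bits for G2i+1), the two half-round models can be sketched in Python as:

```python
# Python sketch of the two half-round models; the constants C come from the
# blake256R14 white paper and are passed in as arguments here.
MASK = 0xFFFFFFFF

def rotr32(x, r):
    """Circular right shift of a 32-bit value by r bits."""
    return ((x >> r) | (x << (32 - r))) & MASK

def g_even(a, b, c, d, m_2i, c_2i1):
    a = (a + b + (m_2i ^ c_2i1)) & MASK   # a <- a + b + (M(2i) XOR C(2i+1))
    d = rotr32(d ^ a, 16)                 # d <- (d XOR a) >>> 16
    c = (c + d) & MASK                    # c <- c + d
    b = rotr32(b ^ c, 12)                 # b <- (b XOR c) >>> 12
    return a, b, c, d

def g_odd(a, b, c, d, m_2i1, c_2i):
    a = (a + b + (m_2i1 ^ c_2i)) & MASK   # a <- a + b + (M(2i+1) XOR C(2i))
    d = rotr32(d ^ a, 8)                  # d <- (d XOR a) >>> 8
    c = (c + d) & MASK                    # c <- c + d
    b = rotr32(b ^ c, 7)                  # b <- (b XOR c) >>> 7
    return a, b, c, d
```

Running `g_even` then `g_odd` with the same (a, b, c, d) state corresponds to one full G_BLOCK pair as described above.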
In the embodiment of the present invention, the input parameters of the following 7 G_BLOCK computations are consistent across kernels, so their input parameters can be used as shared input parameters, specifically: G0(M0, V0, V4, V8, VC); G1(M1, V0, V4, V8, VC); G2(M2, V1, V5, V9, VD); G4(M4, V2, V6, VA, VE); G5(M5, V2, V6, VA, VE); G6(M6, V3, V7, VB, VF); G7(M7, V3, V7, VB, VF). These 7 G_BLOCK computations are done in the shared computation kernel.
Since only M3 differs among the n other computation kernels, the values V1, V5, V9 and VD obtained from the computation G3(M3, V1, V5, V9, VD) are also inconsistent among the n kernels. This makes the results of the G_BLOCK computations that use V1, V5, V9 or VD inconsistent, specifically: G8(M8, V0, V5, VA, VF); G9(M9, V0, V5, VA, VF); GA(MA, V1, V6, VB, VC); GB(MB, V1, V6, VB, VC); GC(MC, V2, V7, V8, VD); GD(MD, V2, V7, V8, VD); GE(ME, V3, V4, V9, VE); GF(MF, V3, V4, V9, VE). The inputs of these G_BLOCK computation models are therefore the non-shared input parameters together with the shared result computed by the shared kernel.
It will be appreciated that in GE(ME, V3, V4, V9, VE) and GF(MF, V3, V4, V9, VE), the values of V3 and V4 are the same across kernels, so the parts of the computation involving only V3 and V4 can also be performed in the shared computation kernel, accounting for 0.5 of the computation of GE and 0.5 of the computation of GF.
And step 204, the consensus calculator sets the shared input parameters to the shared calculation kernel.
Step 205, the consensus calculator determines the sharing result of the shared input parameter through the shared calculation kernel.
In step 206, the consensus calculator shares the shared result to the other computation cores of the consensus calculator.
And step 207, determining n groups of calculation data according to the non-shared input parameters and the shared result by the consensus calculator.
And step 208, respectively calculating data by the n other calculation kernels of the consensus calculator according to the n groups of calculation data.
In the embodiment of the present invention, taking the assignment in step 203 as an example, 7 + 0.5 + 0.5 = 8 of the 16 G_BLOCK operations in the first round can share results, that is, 8 operations need not be repeated in every kernel.
It can be seen that, under the conventional scheme, each core needs 14 × 16 = 224 G_BLOCK operations, and each chip has y cores, so each chip originally requires 224 × y units of computing resources. After the computing resources are optimized, each core only requires 224 − (7 + 0.5 + 0.5) = 216 G_BLOCK operations, plus the (7 + 0.5 + 0.5) shared operations performed once per chip. If a chip has 100 cores, the saved resources amount to (8 × 100 − 8) / (224 × 100) ≈ 3.54%.
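The savings arithmetic can be reproduced as follows (y = 100 cores per chip, as in the example above):

```python
# Each core needs 224 G_BLOCK operations, of which 7 + 0.5 + 0.5 = 8 can be
# moved into the shared kernel and performed once per chip instead of per core.
per_core_ops = 14 * 16        # 224 G_BLOCK operations per core
shared_ops = 7 + 0.5 + 0.5    # operations moved to the shared kernel
cores = 100
saved = (shared_ops * cores - shared_ops) / (per_core_ops * cores)
assert per_core_ops == 224
assert round(saved * 100, 2) == 3.54  # matches the ~3.54% figure above
```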
In the embodiment of the invention, the task processing device comprises a consensus server and a consensus calculator, where the consensus calculator comprises a shared computation kernel and n other computation kernels. The consensus server acquires input parameters from the task issuing device of the blockchain and sends them to the consensus calculator; the consensus calculator determines shared input parameters and non-shared input parameters, sets the shared input parameters to the shared computation kernel, determines a shared result of the shared input parameters through the shared computation kernel, shares the shared result with the other computation kernels, determines n groups of computation data according to the non-shared input parameters and the shared result, and has the n other computation kernels perform data computation according to the n groups of computation data respectively. By providing a shared computation kernel that computes the shared result of the shared input parameters, the other computation kernels need not compute the shared input parameters and only compute their respective non-shared input parameters, so no computation kernel in the consensus calculator performs repeated computation, which reduces the design cost and power consumption of the whole machine.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 3, a block diagram of a data processing device of a blockchain according to an embodiment of the present invention is shown. The device is applied in a task processing device of a blockchain, where the task processing device includes a consensus server 310 and a consensus calculator 320; the consensus calculator comprises a shared computation kernel and n other computation kernels, where n is a natural number;
the consensus server 310 is configured to:
acquiring input parameters from a task issuing device of the blockchain;
sending the input parameters to a consensus calculator;
the consensus calculator 320 is configured to:
determining shared input parameters and non-shared input parameters;
setting the shared input parameters to the shared computation kernel;
determining a shared result of the shared input parameters through the shared computation kernel;
sharing the shared result with the other computation kernels of the consensus calculator;
determining n groups of computation data according to the non-shared input parameters and the shared result;
and performing data computation by the n other computation kernels according to the n groups of computation data respectively.
In one embodiment of the invention, the core algorithm of the consensus calculator is blake256R14; the shared computation kernel and each of the other computation kernels collectively complete the blake256R14 algorithm.
The blake256R14 algorithm consists of 14 rounds of blake256 operations, each round performing 16 G_BLOCK computations. Each G_BLOCK computation takes 5 input parameters: Mj, a, b, c, d, and the algorithm model of G_BLOCK is denoted Gj(Mj, a, b, c, d), where j is an integer from 0 to 15.
Mj corresponds to the 16 values M0 to MF, of which only M3 differs among the n other computation kernels; a, b, c and d correspond to the 16 values V0 to VF. The consensus calculator determining shared input parameters and non-shared input parameters comprises:
the consensus calculator determines that the shared input parameters are: M0, M1, M2, M4, M5, M6, M7, and V0 to VF;
the non-shared input parameters are: M3, M8, M9, MA, MB, MC, MD, ME, MF. In every two G_BLOCK assignment operations, new calculation data a, b, c and d are obtained through the following calculation model, where i is an integer from 0 to 7:
G2i(M2i, a, b, c, d):
a ← a + b + (M2i ⊕ C(2i+1));
d ← (d ⊕ a) >>> 16;
c ← c + d;
b ← (b ⊕ c) >>> 12;
G2i+1(M2i+1, a, b, c, d):
a ← a + b + (M2i+1 ⊕ C(2i));
d ← (d ⊕ a) >>> 8;
c ← c + d;
b ← (b ⊕ c) >>> 7;
where C(2i) and C(2i+1) are constants defined in the white paper of the blake256R14 algorithm, and ">>> n" denotes a 32-bit rotation to the right by n bits.
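The two halves of the G_BLOCK calculation model above can be sketched in Python. This is a hedged illustration based on the publicly known BLAKE-256 G function, which the G_BLOCK model follows; the constants C(2i) and C(2i+1) are passed in as plain parameters (their real values come from the blake256 specification), and the per-round message permutation of the full algorithm is omitted.

```python
MASK32 = 0xFFFFFFFF  # all blake256 arithmetic is on 32-bit words

def rotr32(x: int, n: int) -> int:
    """Rotate a 32-bit word right by n bits."""
    return ((x >> n) | (x << (32 - n))) & MASK32

def g_2i(m_2i: int, c_2i1: int, a: int, b: int, c: int, d: int):
    """First half, G2i(M2i, a, b, c, d); c_2i1 stands for the constant C(2i+1)."""
    a = (a + b + (m_2i ^ c_2i1)) & MASK32
    d = rotr32(d ^ a, 16)
    c = (c + d) & MASK32
    b = rotr32(b ^ c, 12)
    return a, b, c, d

def g_2i_plus_1(m_2i1: int, c_2i: int, a: int, b: int, c: int, d: int):
    """Second half, G2i+1(M2i+1, a, b, c, d); c_2i stands for the constant C(2i)."""
    a = (a + b + (m_2i1 ^ c_2i)) & MASK32
    d = rotr32(d ^ a, 8)
    c = (c + d) & MASK32
    b = rotr32(b ^ c, 7)
    return a, b, c, d
```

Running both halves back to back on the same (a, b, c, d) corresponds to one of the eight G pairs performed per round; fourteen rounds of sixteen such G_BLOCK calculations make up blake256R14.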
As the apparatus embodiment is substantially similar to the method embodiment, its description is relatively brief; for relevant details, refer to the corresponding parts of the method embodiment description.
In the embodiment of the invention, the task processing device comprises a consensus server and a consensus calculator, and the consensus calculator comprises a shared calculation kernel and n other calculation kernels. The consensus server acquires input parameters from the task issuing device of the block chain and sends them to the consensus calculator. The consensus calculator determines shared input parameters and non-shared input parameters, sets the shared input parameters to the shared calculation kernel, determines a shared result of the shared input parameters through the shared calculation kernel, shares the shared result to the other calculation kernels, and determines n groups of calculation data according to the non-shared input parameters and the shared result; the n other calculation kernels then respectively perform data calculation according to the n groups of calculation data. Because the shared calculation kernel computes the shared result of the shared input parameters once, the other calculation kernels no longer need to compute the shared input parameters and only need to process their respective non-shared input parameters, so no calculation kernel in the consensus calculator performs repeated computation, which reduces the design cost and power consumption of the whole machine.
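As a minimal sketch of the scheduling idea in the paragraph above: the arithmetic inside the two helper functions below is a hypothetical placeholder, not the actual blake256R14 computation; only the control flow, computing the shared part once and fanning the shared result out to the n other kernels together with their non-shared words, mirrors the embodiment.

```python
MASK32 = 0xFFFFFFFF

def shared_precompute(shared_words):
    # Stand-in for the shared calculation kernel: work that depends only
    # on the input words identical across all n kernels is done once here.
    result = 0
    for w in shared_words:
        result = (result + w) & MASK32
    return result

def per_kernel_compute(shared_result, nonshared_words):
    # Stand-in for one of the n other kernels: it starts from the shared
    # result instead of recomputing the shared portion itself.
    out = shared_result
    for w in nonshared_words:
        out = (out * 31 + w) & MASK32  # placeholder mixing step
    return out

def consensus_calculator(shared_words, per_kernel_nonshared):
    shared = shared_precompute(shared_words)  # computed exactly once
    return [per_kernel_compute(shared, ws) for ws in per_kernel_nonshared]
```

Without the shared kernel, each of the n kernels would have to run the shared precomputation itself; with it, that work is amortized across all kernels, which is the source of the claimed cost and power savings.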
Fig. 4 is a schematic structural diagram of an electronic device in an embodiment of the present invention. The electronic device 400 may vary greatly depending on configuration or performance, and may include one or more Central Processing Units (CPUs) 422 (e.g., one or more processors) and memory 432, one or more storage media 430 (e.g., one or more mass storage devices) storing applications 442 or data 444. Memory 432 and storage media 430 may be, among other things, transient storage or persistent storage. The program stored on the storage medium 430 may include one or more modules (not shown), each of which may include a sequence of instructions operating on the electronic device. Still further, the central processor 422 may be configured to communicate with the storage medium 430 to execute a series of instruction operations in the storage medium 430 on the electronic device 400.
The electronic device 400 may also include one or more power supplies 426, one or more wired or wireless network interfaces 450, one or more input-output interfaces 458, one or more keyboards 456, and/or one or more operating systems 441, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements each process of the data processing method embodiment of the block chain, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the data processing method of the block chain, and can achieve the same technical effect, and in order to avoid repetition, the computer program is not described herein again. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one of skill in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The above detailed description is provided for the data processing method and the related device of the block chain, and a specific example is applied in this document to explain the principle and the implementation of the present invention, and the description of the above embodiment is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A data processing method of a block chain, applied to a task processing device of the block chain, wherein the task processing device comprises a consensus server and a consensus calculator, the consensus calculator comprises a shared calculation kernel and n other calculation kernels, and n is a natural number, the method comprising the following steps:
the consensus server acquires input parameters from the task issuing equipment of the block chain;
the consensus server sends the input parameters to a consensus calculator;
the consensus calculator determines shared input parameters and non-shared input parameters;
the consensus calculator sets the shared input parameters to the shared calculation kernel;
the consensus calculator determines a shared result of the shared input parameters through the shared calculation kernel;
the consensus calculator shares the shared result to the other calculation kernels of the consensus calculator;
the consensus calculator determines n groups of calculation data according to the non-shared input parameters and the shared result;
and the n other calculation kernels of the consensus calculator respectively perform data calculation according to the n groups of calculation data.
2. The method of claim 1, wherein the core algorithm of the consensus calculator is blake256R14; the shared calculation kernel and each of the other calculation kernels collectively complete the blake256R14 algorithm.
3. The method of claim 2, wherein the blake256R14 algorithm consists of 14 rounds of blake256 operations, 16 G_BLOCK calculations are performed for each round of blake256 operations, and in each G_BLOCK calculation the algorithm model of G_BLOCK takes 5 input parameters: Mj, a, b, c, d, wherein Mj corresponds to the 16 values M0 to MF, M0 to MF being 16 variables in the algorithm model of G_BLOCK; a, b, c and d correspond to the 16 values V0 to VF, V0 to VF being 16 variables in the algorithm model of G_BLOCK; the algorithm model of G_BLOCK is denoted as Gj(Mj, a, b, c, d), where j is an integer from 0 to 15.
4. The method of claim 3, wherein Mj corresponds to the 16 values M0 to MF, M0 to MF being 16 variables in the algorithm model of G_BLOCK, and among the n other calculation kernels only the value of M3 differs; a, b, c and d correspond to the 16 values V0 to VF, V0 to VF being 16 variables in the algorithm model of G_BLOCK; the consensus calculator determining shared input parameters and non-shared input parameters comprises:
the consensus calculator determines that the shared input parameters are: M0, M1, M2, M4, M5, M6, and V0 to VF;
the non-shared input parameters are: M3, M8, M9, MA, MB, MC, MD, ME, MF.
5. The method of claim 4, wherein in every two G_BLOCK assignment operations, new values of the calculation data a, b, c, d are obtained through the following calculation model, where i is an integer from 0 to 7:
G2i(M2i, a, b, c, d):
a ← a + b + (M2i ⊕ C(2i+1));
d ← (d ⊕ a) >>> 16;
c ← c + d;
b ← (b ⊕ c) >>> 12;
G2i+1(M2i+1, a, b, c, d):
a ← a + b + (M2i+1 ⊕ C(2i));
d ← (d ⊕ a) >>> 8;
c ← c + d;
b ← (b ⊕ c) >>> 7;
wherein C(2i) and C(2i+1) are constants defined in the white paper of the blake256R14 algorithm, ">>> n" denotes a 32-bit rotation to the right by n bits, and the constants in the calculation model are fixed and do not change with the assignment operations.
6. A data processing device of a block chain, applied to a task processing device of the block chain, wherein the task processing device comprises a consensus server and a consensus calculator, the consensus calculator comprises a shared calculation kernel and n other calculation kernels, and n is a natural number;
the consensus server is configured to:
acquiring input parameters from task issuing equipment of a block chain;
sending the input parameters to a consensus calculator;
the consensus calculator is to:
determining shared input parameters and non-shared input parameters;
setting the shared input parameters to the shared calculation kernel;
determining a shared result of the shared input parameters through the shared calculation kernel;
sharing the shared result to the other calculation kernels of the consensus calculator;
determining n groups of calculation data according to the non-shared input parameters and the shared result;
and performing data calculation by the n other calculation kernels respectively according to the n groups of calculation data.
7. The data processing device of claim 6, wherein the core algorithm of the consensus calculator is blake256R14; the shared calculation kernel and each of the other calculation kernels collectively complete the blake256R14 algorithm.
8. The data processing device of claim 7, wherein the blake256R14 algorithm consists of 14 rounds of blake256 operations, 16 G_BLOCK calculations are performed for each round of blake256 operations, and in each G_BLOCK calculation the algorithm model of G_BLOCK takes 5 input parameters: Mj, a, b, c, d, wherein Mj corresponds to the 16 values M0 to MF, M0 to MF being 16 variables in the algorithm model of G_BLOCK; a, b, c and d correspond to the 16 values V0 to VF, V0 to VF being 16 variables in the algorithm model of G_BLOCK; the algorithm model of G_BLOCK is denoted as Gj(Mj, a, b, c, d), where j is an integer from 0 to 15;
among the n other calculation kernels, only the value of M3 differs; the consensus calculator determining shared input parameters and non-shared input parameters comprises:
the consensus calculator determines that the shared input parameters are: M0, M1, M2, M4, M5, M6, and V0 to VF;
the non-shared input parameters are: M3, M8, M9, MA, MB, MC, MD, ME, MF.
In every two G_BLOCK assignment operations, new values of the calculation data a, b, c, d are obtained through the following calculation model, where i is an integer from 0 to 7:
G2i(M2i, a, b, c, d):
a ← a + b + (M2i ⊕ C(2i+1));
d ← (d ⊕ a) >>> 16;
c ← c + d;
b ← (b ⊕ c) >>> 12;
G2i+1(M2i+1, a, b, c, d):
a ← a + b + (M2i+1 ⊕ C(2i));
d ← (d ⊕ a) >>> 8;
c ← c + d;
b ← (b ⊕ c) >>> 7;
wherein C(2i) and C(2i+1) are constants defined in the white paper of the blake256R14 algorithm, ">>> n" denotes a 32-bit rotation to the right by n bits, and the constants in the calculation model are fixed and do not change with the assignment operations.
9. An electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the block chain data processing method according to any one of claims 1 to 5.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method for data processing of a blockchain according to any one of claims 1 to 5.
CN201810878178.4A 2018-08-03 2018-08-03 Data processing method of block chain and related equipment Active CN110795227B (en)
