CN112286689A - Cooperative shunting and storing method suitable for block chain workload certification - Google Patents


Info

Publication number: CN112286689A (legal status: pending)
Application number: CN202011224981.XA
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 徐精忠 (Xu Jingzhong)
Applicant and current assignee: Ningbo Huanhangzhouwan Big Data Information Technology Co., Ltd.
Priority: CN202011224981.XA
Prior art keywords: neural network, network model, data, training, data set

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 — Multiprogramming arrangements
    • G06F 9/50 — Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 — Allocation of resources to service a request
    • G06F 9/5027 — Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods


Abstract

The invention relates to a cooperative offloading and storage method suitable for blockchain workload certification (proof-of-work), comprising the following steps. S1: obtain a data set for training a neural network, the data set containing different combinations of user proof-of-work requirements in a blockchain system. S2: with the data set acquired in step S1 as labels, train a general neural network model and a local neural network model, traverse the training data set with the trained models, and update the network parameters. S3: further train and optimize on the basis of the neural network model trained in step S2 to obtain a new neural network model. The invention uses a deep learning algorithm to decide whether each user's proof-of-work is computed at the local user terminal or offloaded to an edge server, and whether the edge server stores the corresponding hash table, thereby achieving efficient, intelligent computation offloading and storage optimization.

Description

Cooperative shunting and storing method suitable for block chain workload certification
Technical Field
The invention relates to the field of wireless communication, and in particular to a cooperative offloading and storage method suitable for blockchain proof-of-work.
Background
With the rapid development of blockchain technology, its ability to establish distributed trust has been widely applied across many fields. However, traditional mobile devices have limited computing and storage resources and cannot support the enormous computing power and storage space required by the blockchain's proof-of-work computing task during mining. It is therefore necessary to use edge computing to allocate the computing and storage resources of mobile devices reasonably, reducing the overall overhead of the system and increasing its overall revenue.
Disclosure of Invention
To solve these problems, the invention provides a cooperative offloading and storage method suitable for blockchain proof-of-work. It uses a deep learning algorithm to decide whether each user's proof-of-work is computed at the local user terminal or offloaded to an edge server, and whether the edge server stores the corresponding hash table, thereby achieving efficient, intelligent computation offloading and storage optimization.
The technical scheme of the invention is as follows:
A cooperative offloading and storing method for blockchain proof-of-work includes the following steps:
S1: obtain a data set for training a neural network, the data set comprising different combinations of user proof-of-work requirements in a blockchain system, indexed by i (i ∈ {1, 2, …, I});
S2: with the data set acquired in step S1 as labels, train a general neural network model and a local neural network model, traverse the training data set with the trained models, and update the network parameters;
S3: perform further training and optimization on the basis of the neural network model trained in step S2 to obtain a new neural network model.
The specific steps for acquiring the data set in step S1 are:
S1.1: preset I computing-task combinations and collect F groups of channel gains for each combination i (i ∈ {1, …, I});
S1.2: for each group of channel gains, generate the 4^N binary offloading and caching decisions corresponding to the N users;
S1.3: given a combination i and a group of channel gains f, solve optimization problem (P1) for each offloading and storage optimization decision to obtain the maximum token profit value under that decision;
S1.4: given a combination i and a group of channel gains f, traverse all 4^N offloading and storage optimization decisions using the token profit values computed from problem (P1) in step S1.3, obtain the maximum token profit value, and record the offloading and storage optimization decision and channel gain (h_f, s_f)_i corresponding to that maximum;
S1.5: given a combination i, repeat step S1.4 for all F groups of channel gains and save the F data groups (h_f, s_f)_i, f ∈ {1, 2, …, F}, under combination i;
S1.6: for all I combinations, repeat step S1.5, generating F data groups (h_f, s_f)_i, f ∈ {1, 2, …, F}, i ∈ {1, 2, …, I}, for each combination, and save the whole set, denoted Data_I, as the training data set for the neural network.
The maximum token profit value in step S1.3 is calculated by the objective formula of problem (P1), which is rendered only as an image in the source.
Preferably, in the record (h_f, s_f)_i of step S1.4, h and s each contain N entries, corresponding to the N users.
Preferably, the constraints used in calculating the maximum token profit value are the four constraint formulas of problem (P1), which are rendered only as images in the source.
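The label-generation procedure of steps S1.2–S1.4 amounts to a brute-force search over the 4^N (offload, store) decision pairs. A minimal sketch, where `toy_profit` is a hypothetical stand-in for the patent's (P1) objective, not the actual formula:

```python
from itertools import product

def best_decision(h, profit):
    """Enumerate all 4^N (offload, store) decision pairs for the N users and
    return the pair that maximizes the given token-profit function."""
    n = len(h)
    best_val, best_pair = float("-inf"), None
    for delta in product((0, 1), repeat=n):      # binary offloading decisions
        for s in product((0, 1), repeat=n):      # binary storage decisions
            val = profit(h, delta, s)
            if val > best_val:
                best_val, best_pair = val, (delta, s)
    return best_val, best_pair

def toy_profit(h, delta, s):
    """Hypothetical stand-in for the (P1) objective: offloading pays off in
    proportion to the channel gain; storing costs 0.2 and yields 0.1 back."""
    return sum(d * g - 0.2 * c for g, d, c in zip(h, delta, s)) + 0.1 * sum(s)

val, (delta, s) = best_decision([0.9, 0.1, 0.5], toy_profit)
```

Because this traversal costs 4^N evaluations per channel-gain group, it is used only offline to produce labels; the trained neural network replaces it at run time.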
Preferably, the training method of the neural network model in step S2 includes:
S2.1: establish a general neural network model and a local neural network model with identical structures, set their initial network parameters to θ0 and θ1 respectively, and set the learning rates lr_α and lr_β;
S2.2: from the training data set Data_I, select a batch of data covering J proof-of-work computing-task combinations, each combination contributing all of its data, to form a data-set batch; denote the training data of each computing-task combination in the batch as batch_j;
S2.3: copy the network parameters of the general neural network model to the local neural network model, i.e., θ1 = θ0; under computing-task combination j, randomly select K data items, input their channel gains h into the local neural network model, and train it with the corresponding optimal decisions d as labels;
S2.4: with mean squared error as the loss function, compute the error value L_j(θ1) under computing-task combination j and update the network parameters θ1 of the local neural network model;
S2.5: re-select K data items from computing-task combination j, compute the loss value L_j(θ'_j) at the updated local neural network parameters θ'_j, and store it; if the data of every computing-task combination in the data-set batch has not yet been trained once, return to S2.3; if all data in the data-set batch have been used, execute S2.6;
S2.6: accumulate all loss values L_j(θ'_j) and update the network parameters of the general neural network model;
S2.7: after finishing the updates with the data-set batch, judge whether all data in Data_I have been used; if so, end training and obtain the general neural network model after multiple updates and its network parameters θ0; otherwise, return to step S2.2.
Preferably, the calculation formula of the updated local neural network parameter in step S2.4 is:
Figure BDA0002763355450000034
Preferably, the calculation formula for the updated network parameters of the general neural network model in step S2.6 is:
θ0 ← θ0 − lr_β · ∇_θ0 Σ_j L_j(θ'_j)
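Steps S2.1–S2.7 follow the model-agnostic meta-learning (MAML) pattern of inner (local) and outer (general) updates. A minimal sketch on toy linear-regression tasks: the model, tasks, sample counts, and learning rates are illustrative, and the outer gradient uses the common first-order approximation rather than differentiating through the inner step:

```python
import numpy as np

def mse(theta, x, y):
    """Mean-squared error of the scalar linear model y_hat = theta[0]*x + theta[1]."""
    return np.mean((theta[0] * x + theta[1] - y) ** 2)

def mse_grad(theta, x, y):
    err = theta[0] * x + theta[1] - y
    return np.array([2 * np.mean(err * x), 2 * np.mean(err)])

rng = np.random.default_rng(0)
lr_alpha, lr_beta = 0.05, 0.02   # inner/outer learning rates (illustrative)
theta0 = np.zeros(2)             # general ("meta") model parameters

for _ in range(300):
    outer_grad = np.zeros(2)
    slopes = rng.uniform(0.5, 1.5, size=4)       # a batch of J = 4 toy "task combinations"
    for slope in slopes:
        x = rng.uniform(-1, 1, 16)               # K = 16 samples; (x, slope*x) plays the role of (h, d)
        theta1 = theta0.copy()                   # step S2.3: copy general -> local model
        theta_prime = theta1 - lr_alpha * mse_grad(theta1, x, slope * x)   # S2.4 inner update
        x2 = rng.uniform(-1, 1, 16)              # S2.5: fresh K samples from the same task
        outer_grad += mse_grad(theta_prime, x2, slope * x2)   # first-order outer gradient
    theta0 -= lr_beta * outer_grad               # S2.6 outer update

# after meta-training, one inner step adapts the general model to an unseen task
x_new = rng.uniform(-1, 1, 16)
theta_new = theta0 - lr_alpha * mse_grad(theta0, x_new, 1.2 * x_new)
loss_after = mse(theta_new, x_new, 1.2 * x_new)
```

The meta-trained parameters settle near the average of the task optima, which is what lets a few fine-tuning steps suffice when a new task combination arrives in step S3.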
Preferably, the specific steps for training and optimizing the new neural network model in step S3 are:
S3.1: establish a new neural network model with the same structure as the general neural network model, denote its network parameters θ2, and set the learning rate lr_γ;
S3.2: copy the network parameters of the trained general neural network model to the new neural network model of step S3.1, i.e., let θ2 = θ0;
S3.3: collect G groups of channel gains under the computing-task combination of the new neural network model;
S3.4: randomly select K data items from the training data obtained in S3.3, input the channel gains into the new neural network model, perform gradient descent with the corresponding optimal decisions as labels, and fine-tune the network parameters;
S3.5: compare the offloading and storage decisions predicted by the fine-tuned neural network model with the optimal decisions obtained by the traversal method; if the error is within 1%, execute step S3.6; otherwise, return to step S3.4, re-draw K data items, and fine-tune the network parameters further;
S3.6: the optimization process for the new computing-task combination is complete, and the new neural network model is obtained.
Preferably, the adjustment formula for the parameter θ2 is:
θ2 ← θ2 − lr_γ · ∇_θ2 L(θ2)
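The fine-tuning loop of S3.2–S3.5 can be sketched generically. The sampling, gradient, and error functions below are toy stand-ins (a scalar linear model fit to a known slope), not the patent's channel-gain data or traversal solver:

```python
import numpy as np

def fine_tune(theta0, sample_batch, grad_fn, decision_error,
              lr_gamma=0.1, tol=0.01, max_rounds=500):
    """Sketch of steps S3.2-S3.5: start from the trained general parameters,
    take gradient steps on fresh K-sample batches, and stop once the model's
    predictions are within tol (the patent's 1%) of the traversal optimum."""
    theta2 = np.array(theta0, dtype=float)        # S3.2: theta2 = theta0
    for _ in range(max_rounds):
        x, y = sample_batch()                     # S3.4: draw K fresh samples
        theta2 = theta2 - lr_gamma * grad_fn(theta2, x, y)
        if decision_error(theta2) <= tol:         # S3.5: 1% acceptance check
            break
    return theta2

# toy stand-in task: fit y = 2x; the known slope 2 plays the role of the
# traversal-optimal decision
rng = np.random.default_rng(1)

def sample_batch(k=8):
    x = rng.uniform(-1, 1, k)
    return x, 2.0 * x

def grad_fn(theta, x, y):                          # MSE gradient for y_hat = theta[0]*x
    return np.array([2 * np.mean((theta[0] * x - y) * x)])

def decision_error(theta):                         # relative error vs the known optimum
    return abs(theta[0] - 2.0) / 2.0

theta2 = fine_tune(np.array([1.5]), sample_batch, grad_fn, decision_error)
```

Starting from meta-trained parameters rather than a random initialization is what keeps the number of rounds, and hence the amount of new labeled data, small.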
the invention also provides a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the cooperative offloading and storing method suitable for blockchain workload attestation when executing the computer program.
The present invention further provides a computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the steps of the cooperative offloading and storing method for blockchain workload attestation.
The beneficial effects of the invention are:
1. For the blockchain system, offloading proof-of-work from the user terminals to the edge-server links reduces delay and energy consumption and improves service quality and system revenue.
2. The offloading and storage optimization between user terminals and edge-server links satisfies more user-terminal demands, reasonably allocates proof-of-work tasks with different computation requirements, and responds faster and more efficiently.
3. Under time-varying wireless channel gains, the trained model quickly predicts offloading and storage decisions, enabling online task allocation.
4. The method adapts to changes in the user terminals' proof-of-work computing tasks, making the offloading and storage model more robust, avoiding retraining a deep model for every specific combination, and improving efficiency.
Drawings
Fig. 1 is a schematic diagram of the offloading scenario between user terminals and edge servers in a wireless blockchain network.
Detailed Description
The embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
The invention provides a cooperative offloading and storage algorithm suitable for blockchain proof-of-work; the method obtains the optimal offloading and storage decision under time-varying wireless channel gains. When the proof-of-work computing-task combination of the users in the blockchain system changes, the method adapts quickly with few training steps and little training data. Combined with mobile edge computing, the invention can be applied to a blockchain system, as shown in Fig. 1. To obtain the maximum token profit for the blockchain system in this setting, the following steps are performed:
1. In an intelligent system combining blockchain and mobile edge computing, a deep learning algorithm decides whether each user's proof-of-work is computed at the local user terminal or offloaded to an edge server, and whether the edge server stores the corresponding hash table, achieving efficient, intelligent computation offloading and storage optimization. First, obtain a data set for training the neural network, containing different combinations of user proof-of-work requirements in the blockchain system, indexed by i (i ∈ {1, 2, …, I}). For a blockchain system comprising N users and M edge servers, with cooperation among the edge servers and a variable combination of user proof-of-work requirements, the invention studies a deep learning algorithm that adapts quickly to new requirement combinations, achieving efficient intelligent offloading of computing tasks and storage optimization at the edge servers. The specific steps for obtaining the training data are as follows:
Step 1.1: preset I different proof-of-work requirement combinations, each comprising N blockchain mobile-terminal users and their corresponding proof-of-work requirements.
Step 1.2: for each computing-task combination i (i ∈ {1, …, I}), collect F groups of channel gains.
Step 1.3: for each group of channel gains, generate all 4^N offloading and storage optimization decisions corresponding to the N users.
Step 1.4: given a combination i and a group of channel gains f, solve optimization problem (P1) for each offloading and storage optimization decision to obtain the maximum token profit value under that decision. The optimization problem (P1) maximizes the token profit of the blockchain system; its objective formula is rendered only as an image in the source.
The constraints are the four constraint formulas of problem (P1), also rendered only as images in the source.
The decision variables are: δ = {δ(n_m) : n_m ∈ N}, s = {s(n_m) : n_m ∈ N}.
Wherein: e(A)(nm)=PUT(A,u)(nm)+κA(fA)3T(A,e)(nm) (1-6)
E(U)(nm)=κU(fU)3T(U,e)(nm) (1-7)
Figure BDA0002763355450000063
Figure BDA0002763355450000064
Figure BDA0002763355450000065
Figure BDA0002763355450000066
The parameters in the problem are described as follows; symbols that appear only as images in the source are given by description:
n_m: the nth user accessing edge-server link m;
δ(n_m): the offloading decision of user n_m's proof-of-work task; δ(n_m) = 1 means the user's computing task is offloaded entirely to the edge-server link for processing, and δ(n_m) = 0 means the computing task is processed locally;
s(n_m): the storage decision of user n_m's computing task; s(n_m) = 1 means the edge-server link decides to store the hash table, and s(n_m) = 0 means it does not;
N_m: the number of user-terminal groups;
L: the total number of edge servers in AP_m;
ρ(m,l): the task-quantity coefficient processed by the l-th edge server in AP_m;
the token revenue obtained when δ(n_m) = 1, i.e., when user n_m offloads all computing tasks to the edge-server link;
the token revenue obtained when δ(n_m) = 0, i.e., when user n_m executes all computing tasks locally;
the token revenue gained per unit of energy consumption;
E^(A)(n_m): the total energy consumed by the edge-server link to complete its computing task;
E^(U)(n_m): the total energy consumed by the user terminal to complete the computing task locally;
the request probability of the cached content;
the token revenue obtained from caching;
the cost of the storage memory;
z: the decision cost;
the isolation (orphaning) probability when the edge-server link executes the mining task;
the isolation (orphaning) probability when the user terminal executes the mining task;
γ_O: the threshold of the isolation probability of each task;
the size of each hash table;
c: the total storage capacity of the edge-server link;
b: the channel bandwidth between the user terminal and the edge-server link;
P^U: the transmission power of the user terminal;
the distance between user terminal n_m and the edge server;
α: the path-loss coefficient;
σ²: the wireless-network noise power;
κ_A: the energy-efficiency coefficient of the edge servers;
f_A: the computing capacity of the edge server;
the processor energy coefficient of user terminal n_m;
the computing capacity of user terminal n_m;
T^(A,u)(n_m): the time user terminal n_m takes to upload its computing task to the edge server;
T^(A,e)(n_m): the time the edge server takes to execute its computing task;
T^(U,e)(n_m): the time user terminal n_m takes to execute its computing task locally;
D_{m,n}: the size of the task processed by end user n_m;
X_{m,n}: the computational workload of end user n_m;
the channel gain of end user n_m.
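With the parameters defined above, formulas (1-6) and (1-7) can be checked numerically. Every value below is illustrative, not taken from the patent:

```python
# Numerical check of formulas (1-6) and (1-7); all values are illustrative.
P_U = 0.5                          # user transmission power P^U (W)
kappa_A, f_A = 1e-28, 3e9          # edge-server energy coefficient and computing capacity (cycles/s)
kappa_U, f_U = 5e-28, 1e9          # user-terminal energy coefficient and computing capacity
T_Au, T_Ae, T_Ue = 0.2, 0.1, 0.8   # upload, edge-execution, and local-execution times (s)

# (1-6): upload energy plus edge compute energy
E_A = P_U * T_Au + kappa_A * f_A ** 3 * T_Ae
# (1-7): local compute energy only
E_U = kappa_U * f_U ** 3 * T_Ue
```

With these hypothetical numbers E_A ≈ 0.37 J and E_U ≈ 0.4 J, so offloading would cost slightly less energy here; the real trade-off depends on the actual device and channel coefficients.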
step 1.5: given some combination i and some set of channel gains f, the calculated token profit values based on the optimization problem (P1) in step 1.4 are traversed through all 4NThe seed splitting and storing optimization decision is made, the maximum value of the profit value is found, and the splitting and storing decision and the channel gain (h) corresponding to the maximum profit value are recordedf,df)i
Step 1.6: given a certain combination i, repeating step 1.5 for all F groups of channel gains, and storing the F group data (h) under the combination if,df)i,f∈{1,2,…,F}。
Step 1.7: for all the combinations of I, the combination is,repeat step 1.6 to generate F group data (h) for each combinationf,df)iF is equal to {1,2, …, F }, I is equal to {1,2, …, I }, and the whole Data set is saved and recorded as DataIAs training data for the neural network.
2. Train the neural network model with the optimal decisions d obtained in step 1 as labels and the channel gains h as input. In the training stage, the neural network model traverses the training data under the different computing-task combinations to update the network model parameters. The specific training steps are as follows:
step 2.1: establishing a general neural network model and a local neural network model which have the same structure and are initialized to have parameters theta0And theta1And sets a learning rate lrα,lrβ
Step 2.2: from a training Data set DataISelecting a batch of combined data containing J workload proving computing tasks, wherein each combination contains all data under the computing task combination to form a data set batch, and recording the training data of each computing task combination in the data set batch as batchj
Step 2.3: copying the parameters of the network model from the generic neural network model to the local neural network model, i.e. theta1=θ0. And under the condition of calculating a task combination j, randomly selecting K pieces of data, inputting the channel gain h of the data into a local neural network model, and training the local neural network model by taking the corresponding optimal decision d as a label.
Step 2.4: calculating an error value L under a calculation task combination j by taking the mean square error as a loss functionj1) Updating local neural network model parameter theta1In the following formula:
Figure BDA0002763355450000084
In this formula, the parameters are defined as follows:
θ'_j: the updated parameters of the local neural network model under computing-task combination j;
θ1: the parameters of the local neural network model before the update, copied from the general neural network model;
lr_α: the inner-loop learning rate, used to update the local neural network model parameters;
∇_θ1 L_j(θ1): the gradient of the error value under computing-task combination j with respect to the parameters θ1.
step 2.5: and (5) re-selecting K pieces of data from the calculation task combination j. Computing local neural network parameters
Figure BDA0002763355450000093
Loss value of
Figure BDA0002763355450000094
And stored. If the neural network has not been trained once with the data for each combination of computational tasks in the data set batch, then step 2.3 is returned. If all the data in the data set batch is used, step 2.6 is performed.
Step 2.6: all loss values are compared
Figure BDA0002763355450000095
And accumulating to update the parameters of the general neural network. The update formula is as follows:
Figure BDA0002763355450000096
In this formula, the parameters are defined as follows:
θ0: the parameters of the general neural network;
lr_β: the outer-loop learning rate, used to update the general neural network parameters;
∇_θ0 Σ_j L_j(θ'_j): the gradient, with respect to θ0, of the summed loss values of the data of every computing-task combination in the data-set batch.
step 2.7: after the updating is finished by utilizing the Data set batch, judging the DataIWhether all of the data in (1) is used. If yes, finishing training to obtain the universal neural network model after multiple updates and the parameter theta thereof0(ii) a Otherwise, the step 2.2 is returned.
3. For a new blockchain proof-of-work computing-task combination, further train and optimize on the basis of the trained neural network model; the method adapts quickly to the new computing-task combination, realizing intelligent computation offloading at the user terminals and intelligent storage decisions at the edge-server links. The specific procedure is as follows:
Step 3.1: establish a new neural network model with the same architecture as the general neural network model; denote its parameters θ2.
Step 3.2: copy the general neural network parameters trained in step 2 to the new model, i.e., θ2 = θ0, and set the learning rate of the new model to lr_γ.
Step 3.3: under the new computing-task combination, collect G groups of data according to step 1 to form a training set for fine-tuning the general neural network model.
Step 3.4: k pieces of data are randomly selected from the training data obtained in step 3.3. And inputting the channel gain into a newly established neural network model, performing gradient descent by taking the corresponding optimal decision as a label, and finely adjusting network parameters. The parameter adjustment formula is as follows:
Figure BDA0002763355450000101
in the formula, the parameters are defined as follows:
θ2: parameters of the newly established neural network model;
lrγ: updating the learning rate of the parameters by using a gradient descent method;
k: the number of data for gradient descent;
L(θ2): training the error value of the test neural network, wherein the loss function is the mean square error;
step 3.6: and comparing the shunt predicted by the trimmed neural network model with the storage decision and the optimal decision obtained by the traversal method. If the error value is within 1%, executing step 3.7; otherwise, returning to the step 3.5, re-extracting K pieces of data, and further fine-tuning the neural network parameters.
Step 3.7: and finishing the optimization process aiming at the new calculation task combination to obtain a new neural network model. Under the combination, when the channel condition changes, the model can predict the optimal shunting strategy of the multi-user block chain system in real time, and realize intelligent calculation shunting and storage decision.
The invention also provides a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the cooperative offloading and storing method suitable for blockchain workload attestation when executing the computer program.
The present invention further provides a computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the steps of the cooperative offloading and storing method for blockchain workload attestation.
Finally, it should be noted that the above embodiments are only specific embodiments of the present invention, used to illustrate rather than limit its technical solutions, and the protection scope of the invention is not limited to them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described therein may still be modified, or some of their technical features replaced by equivalents, within the technical scope of the present disclosure; such modifications or substitutions do not depart from the spirit and scope of the invention and are intended to be covered by its protection scope. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for cooperative offloading and storing for blockchain proof-of-work, comprising:
S1: obtaining a data set for training a neural network, the data set comprising different combinations of user proof-of-work requirements in a blockchain system, indexed by i (i ∈ {1, 2, …, I});
S2: training a general neural network model and a local neural network model with the data set acquired in step S1 as labels, traversing the training data set with the trained models, and updating the network parameters;
S3: performing further training and optimization on the basis of the neural network model trained in step S2 to obtain a new neural network model;
wherein the specific steps of acquiring the data set in step S1 are:
S1.1: presetting I computing-task combinations and collecting F groups of channel gains for each combination i (i ∈ {1, …, I});
S1.2: for each group of channel gains, generating the 4^N binary offloading and caching decisions corresponding to the N users;
S1.3: given a combination i and a group of channel gains f, solving optimization problem (P1) for each offloading and storage optimization decision to obtain the maximum token profit value under that decision;
S1.4: given a combination i and a group of channel gains f, traversing all 4^N offloading and storage optimization decisions using the token profit values computed from problem (P1) in step S1.3, obtaining the maximum token profit value, and recording the offloading and storage optimization decision and channel gain (h_f, s_f)_i corresponding to that maximum;
S1.5: given a combination i, repeating step S1.4 for all F groups of channel gains and saving the F data groups (h_f, s_f)_i, f ∈ {1, 2, …, F}, under combination i;
S1.6: for all I combinations, repeating step S1.5, generating F data groups (h_f, s_f)_i, f ∈ {1, 2, …, F}, i ∈ {1, 2, …, I}, for each combination, and saving the whole set, denoted Data_I, as the training data set for the neural network;
wherein the maximum token profit value in step S1.3 is calculated by the objective formula of problem (P1), which is rendered only as an image in the source.
2. The cooperative offloading and storing method for blockchain proof-of-work according to claim 1, wherein in the record (h_f, s_f)_i of step S1.4, h and s each contain N entries, corresponding to the N users.
3. The method of claim 2, wherein the constraints used in calculating the maximum token profit value are the four constraint formulas of problem (P1), rendered only as images in the source.
4. the method for collaborative distribution and storage of blockchain workload proofs according to claim 1, wherein the training method of the neural network model in step S2 is:
s2.1: establishing a general neural network model and a local neural network model with the same structure, and setting initial network parameters to be theta respectively0And theta1Learning rates are set to lrα,lrβ
S2.2: from the training data set Data_I, select a batch of combination data containing J workload-certification computing tasks, each combination containing all the data under that computing-task combination, to form a data set batch; denote the training data of each computing-task combination j in the data set batch as batch_j;
S2.3: copy the network parameters of the general neural network model to the local neural network model, i.e. θ_1 = θ_0; under computing-task combination j, randomly select K pieces of data, input their channel gains h into the local neural network model, and train the local neural network model with the corresponding optimal decisions d as labels;
S2.4: with the mean square error as the loss function, calculate the error value L_j(θ_1) under computing-task combination j and update the network parameter θ_1 of the local neural network model;
S2.5: re-select K pieces of data from computing-task combination j, and under the updated network parameters of the local neural network
Figure FDA0002763355440000025
calculate the loss value
Figure FDA0002763355440000026
and store it; if the combination data of any computing task in the data set batch has not yet been trained once, return to S2.3; if all the data in the data set batch have been used, execute S2.6;
S2.6: sum all the stored loss values
Figure FDA0002763355440000027
and update the network parameters of the general neural network model;
S2.7: after the update using the data set batch is finished, judge whether all the data in Data_I have been used; if so, finish training, obtaining the general neural network model after multiple updates and its network parameters θ_0; otherwise, return to step S2.2.
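Steps S2.1–S2.7 follow a meta-learning pattern: a general model and per-task local copies, with the general parameters updated from the losses of the adapted local copies. A minimal NumPy sketch of this loop, using a one-layer linear model and synthetic tasks as stand-ins for the neural network and the computing-task combinations (all task data here is fabricated for illustration; the patent's exact update formulas are those of claims 5 and 6, and this sketch uses a first-order approximation of the meta-gradient):

```python
import numpy as np

rng = np.random.default_rng(0)
lr_alpha, lr_beta = 0.05, 0.05      # local (inner) and general (outer) learning rates

def predict(theta, h):               # one-layer linear model standing in for the network
    return h @ theta

def mse_grad(theta, h, d):           # gradient of the mean-square-error loss L_j(theta)
    return 2.0 * h.T @ (predict(theta, h) - d) / len(h)

# Synthetic "computing-task combinations": related tasks clustered around a common center,
# each mapping channel gains h to optimal decisions d = h @ w_j
center = 2.0 * rng.normal(size=(4, 1))
tasks = [center + 0.3 * rng.normal(size=(4, 1)) for _ in range(3)]
theta0 = np.zeros((4, 1))            # S2.1: general model parameters
losses = []

for epoch in range(200):
    meta_grad = np.zeros_like(theta0)
    epoch_loss = 0.0
    for w in tasks:                                   # S2.2: one batch of task combinations
        h = rng.normal(size=(8, 4)); d = h @ w        # K pieces of data for task j
        theta1 = theta0.copy()                        # S2.3: copy general -> local
        theta1 -= lr_alpha * mse_grad(theta1, h, d)   # S2.4: local update
        h2 = rng.normal(size=(8, 4)); d2 = h2 @ w     # S2.5: re-selected K pieces of data
        meta_grad += mse_grad(theta1, h2, d2)         # first-order meta-gradient term
        epoch_loss += np.mean((predict(theta1, h2) - d2) ** 2)
    theta0 -= lr_beta * meta_grad / len(tasks)        # S2.6: update the general model
    losses.append(epoch_loss / len(tasks))
```

After training, θ_0 is positioned so that a single local gradient step adapts it well to any of the related tasks, which is what step S3 then exploits for a new computing-task combination.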
5. The cooperative shunting and storing method for block chain workload certification according to claim 4, wherein the calculation formula for the updated local neural network parameters in step S2.4 is:
Figure FDA0002763355440000031
6. The cooperative shunting and storing method for block chain workload certification according to claim 4, wherein the calculation formula for the updated network parameters of the general neural network model in step S2.6 is:
Figure FDA0002763355440000032
7. The cooperative shunting and storing method for block chain workload certification according to claim 1, wherein the specific steps of the optimization training that generates the new neural network model in step S3 are:
S3.1: establish a new neural network model with the same structure as the general neural network model, denote its network parameters by θ_2, and set the learning rate to lr_γ;
S3.2: copy the network parameters of the trained general neural network model to the new neural network model of step S3.1, i.e. let θ_2 = θ_0;
S3.3: collect G groups of channel gains under the computing-task combination of the new neural network model;
S3.4: randomly select K pieces of data from the training data obtained in S3.3, input the channel gains into the new neural network model, perform gradient descent with the corresponding optimal decisions as labels, and fine-tune the network parameters;
S3.5: compare the shunting and storage decision predicted by the fine-tuned neural network model with the optimal decision obtained by the traversal method; if the error is within 1%, execute step S3.6; otherwise, return to step S3.4, re-extract K pieces of data, and further fine-tune the network parameters;
S3.6: finish the optimization process for the new computing-task combination, obtaining the new neural network model.
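The fine-tuning loop of steps S3.2–S3.5 can be sketched as follows, with a hypothetical linear model and synthetic data standing in for the trained general model, the new task, and the G groups of collected channel gains (the patent's actual adjustment formula for θ_2 is the one referenced in claim 8):

```python
import numpy as np

rng = np.random.default_rng(1)
lr_gamma = 0.1
K = 8

# Hypothetical trained general model, and a new computing-task combination near it
theta0 = rng.normal(size=(4, 1))                  # parameters of the trained general model
w_new = theta0 + 0.1 * rng.normal(size=(4, 1))    # the new task's true mapping
H = rng.normal(size=(64, 4))                      # S3.3: G groups of collected channel gains
D = H @ w_new                                     # optimal decisions from the traversal method

theta2 = theta0.copy()                            # S3.2: copy general -> new model
for step in range(500):
    idx = rng.integers(0, len(H), size=K)         # S3.4: K randomly selected pieces of data
    h, d = H[idx], D[idx]
    theta2 -= lr_gamma * 2.0 * h.T @ (h @ theta2 - d) / K   # gradient descent on MSE
    # S3.5: stop once predictions are within 1% of the traversal optimum
    rel_err = np.linalg.norm(H @ theta2 - D) / np.linalg.norm(D)
    if rel_err < 0.01:
        break
```

Because θ_2 starts from the meta-trained θ_0 rather than from scratch, only a few gradient steps are needed before the 1% error criterion of step S3.5 is met.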
8. The cooperative shunting and storing method for block chain workload certification according to claim 7, wherein the adjustment formula of the parameter θ_2 is:
Figure FDA0002763355440000033
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any one of claims 1 to 8 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202011224981.XA 2020-11-05 2020-11-05 Cooperative shunting and storing method suitable for block chain workload certification Pending CN112286689A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011224981.XA CN112286689A (en) 2020-11-05 2020-11-05 Cooperative shunting and storing method suitable for block chain workload certification

Publications (1)

Publication Number Publication Date
CN112286689A true CN112286689A (en) 2021-01-29

Family

ID=74350594


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114490697A (en) * 2022-03-28 2022-05-13 山东国赢大数据产业有限公司 Data cooperative processing method and device based on block chain
CN114490697B (en) * 2022-03-28 2022-09-06 山东国赢大数据产业有限公司 Data cooperative processing method and device based on block chain

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination