CN117114126A - Web3.0 federated learning cloud architecture and incentive method - Google Patents

Web3.0 federated learning cloud architecture and incentive method

Info

Publication number
CN117114126A
Authority
CN
China
Prior art keywords
dao
federated learning
round
members
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310827882.8A
Other languages
Chinese (zh)
Other versions
CN117114126B (en)
Inventor
何云华
刘勇
纪胜龙
马礼
罗明顺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qax Technology Group Inc
North China University of Technology
Original Assignee
Qax Technology Group Inc
North China University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qax Technology Group Inc and North China University of Technology
Priority to CN202310827882.8A
Publication of CN117114126A
Application granted
Publication of CN117114126B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G06F 30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a Web3.0 federated learning cloud architecture and an incentive method. In the Web3.0 federated learning cloud architecture, a federated learning decentralized autonomous organization is connected to mining, smart contracts, and local training respectively; the cloud service provider is connected to mining through communication services, to local training through storage services, and to the smart contracts through blockchain services; the smart contracts are connected to local training through data aggregation and to mining through computing power.

Description

Web3.0 federated learning cloud architecture and incentive method
Technical Field
The invention belongs to the technical field of the Internet, and particularly relates to a Web3.0 federated learning cloud architecture and an incentive method.
Background
In the Web3.0 era, users increasingly tend to protect personal data privacy in a more comprehensive manner, which will shift the ownership and value of data and make data privacy a focus of global regulation. With the decentralization of applications and the visibility of on-chain data, privacy protection is required for user behavior, generated data, and even application protocols. Federated learning has become a key technology for addressing data privacy issues, aiming to achieve more private, secure, and trusted data sharing and to maximize the value of data.
In federated learning, DAO members can use their data to iteratively train a model, while the DAO manages and coordinates the data weights of each member. Through DAO governance, each participant's contribution can be fairly recognized and rewarded. Meanwhile, the DAO can provide an effective incentive mechanism for federated learning, promote wider participation and cooperation among members, and realize a network world of joint construction, joint governance, and shared value.
In order to guarantee the training effect and running state of federated learning in Web3.0, the quality of the data sets and the blockchain computing power must be considered: only sufficient data and computing power can ensure that federated learning in Web3.0 runs smoothly and produces good results. This key problem can be solved by an incentive mechanism in the smart contracts running within the DAO. However, because of the openness of Web3.0, DAO members can freely join or leave training and flexibly adjust their data size, which makes the number of members in each round uncertain and game analysis difficult.
Disclosure of Invention
The invention provides a Web3.0 federated learning cloud architecture and an incentive method, which guarantee the training effect and running state of Web3.0 federated learning, incentivize DAO members to contribute sufficient data sets and blockchain computing power, promote wider participation and collaboration among members, and realize a network world of joint construction, joint governance, and shared value.
A Web3.0 federated learning cloud architecture comprises a federated learning decentralized autonomous organization, local training, mining, smart contracts, and cloud service providers;
the federated learning decentralized autonomous organization is connected to mining, smart contracts, and local training respectively; the cloud service provider is connected to mining through communication services, to local training through storage services, and to the smart contracts through blockchain services; the smart contracts are connected to local training through data aggregation and to mining through computing power.
Preferably, the federated learning decentralized autonomous organization comprises decentralized identifiers (DIDs) and smart contracts; the smart contracts are each connected to a number of DIDs; mining comprises a number of miners; local training includes a number of data owners and models; and each data owner is connected to a model.
Preferably, a Web3.0 federated learning incentive method based on the Poisson game comprises the following steps:
Step S1: according to the Web3.0 federated learning cloud architecture, a Poisson game model is established and an incentive method is designed, the existence of a Nash equilibrium under the incentive method is proved, and the incentive method is written into several smart contracts deployed on the blockchain;
Step S2: members of the FL DAO select the role they will play in Web3.0 federated learning according to their personal needs;
Step S3: the task publisher publishes the Web3.0 federated learning training task on the FL DAO and sends the initial model to the blockchain;
Step S4: each data owner downloads the initial model from the blockchain, trains it using local private data, then encrypts the trained model gradient grad_i and uploads it to the blockchain, and sends the contributed data amount d_i to the smart contract;
Step S5: the miners contribute computing power to the FL DAO to ensure the normal execution of all smart contracts, and send the contributed computing power h_i to the smart contract;
Step S6: the smart contract aggregates all model gradients grad_i trained in the current round of Web3.0 federated learning and executes the incentive mechanism according to the received data amounts and computing power;
Step S7: DAO members select the role they will play in the next round of Web3.0 federated learning and adjust their strategy σ_i according to the strategy adjustment algorithm;
Step S8: the DAO members continue iterative training until the loss function converges or enough training rounds have been completed;
Step S9: the smart contract finally sends the trained model to the task publisher.
Preferably, step S6 comprises the following sub-steps:
Step S61: the smart contract collects from the blockchain the Web3.0 federated learning strategy σ_i of every DAO member in this round;
Step S62: whether a DAO member participates in this round of federated learning is determined from the member's strategy, and the member's decentralized identifier (DID) is recorded in the set M;
Step S63: according to the DIDs in the set M, the model gradients grad_i trained by the DAO members in this round are collected, aggregated, and uploaded to the blockchain,
the formula of the aggregated model gradient being:
grad = (1/n) * Σ_{i=1}^{n} grad_i,
wherein grad_i is the gradient uploaded by each member and n is the number of members participating in this round;
Step S64: according to the DIDs in the set M, the data amount d_i and computing power h_i contributed by each DAO member in this round of training are collected, and the contribution C_i and revenue U_i of each DAO member in this round are calculated.
Preferably, step S7 comprises the following sub-steps:
Step S71: the task publisher judges whether the data amount and computing power used in this round of Web3.0 federated learning meet expectations; if not, the incentive factors α and β are adjusted, or the budget B is increased; if so, the method proceeds to step S72;
Step S72: the other DAO members compute their Web3.0 federated learning strategy for the next round by choosing the data amount and computing power that maximize their revenue, σ_i = argmax_{(d,h)} U_i(d, h),
wherein U_i is the revenue of DAO member i in this round and U_i(d, h) is the revenue when the data amount is d and the computing power is h;
Step S73: all DAO members adjust their Web3.0 federated learning strategies.
Preferably, the loss function convergence condition of step S8 is adjusted as follows:
in the n-th iteration of Web3.0 federated learning, there exists a strategy set σ* = {σ_{t,b} | t ∈ T, b ∈ C} for every type of DAO member such that every action b with σ_{t,b} > 0 maximizes the expected benefit U(b | t, σ*) of a type-t DAO member over all actions in C,
wherein σ_{t,c} represents the conditional probability that a player of type t selects action c, and U(c | t, σ*) is the expected benefit when a DAO member of type t selects action c;
in the Poisson game model <n, T, r, C, U>, n represents the number of participants in each round of Web3.0 federated learning; it is a random variable with mean λ that follows the Poisson distribution;
T represents the set of all participant types t, which in Web3.0 are the data owners, the miners, the dual-role members, and the quitters: the data owners train the model with local data, the miners provide computing power to run the smart contracts, the dual-role members both train the model and provide computing power, and the quitters choose to exit training;
r represents the probability distribution over each type t ∈ T being selected by a participant;
C represents the set of actions c of the DAO members in Web3.0 federated learning;
U represents the per-round revenue of the DAO members participating in Web3.0 federated learning;
U(ω, c, t) represents the benefit of a participant of type t selecting action c, wherein ω is the set of actions of the other participants in this round of federated learning and ω_c is the number of other participants selecting action c. The conventional closed form of this expected benefit is sketched below.
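For reference, the expected benefit in a Poisson game of this kind is conventionally written as shown below. This is the standard Poisson-game form, stated here only as a sketch consistent with the symbols defined above; it is not a verbatim reproduction of the patent's equation (2).

```latex
U(c \mid t, \sigma) \;=\; \sum_{\omega \in Z} \Bigg( \prod_{c' \in C} \frac{e^{-\lambda_{c'}} \, \lambda_{c'}^{\,\omega_{c'}}}{\omega_{c'}!} \Bigg) \, U(\omega, c, t),
\qquad \lambda_{c'} \;=\; \lambda \sum_{t \in T} r_t \, \sigma_{t, c'} .
```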
The Web3.0 federated learning cloud architecture and incentive method provided by the invention have the following beneficial technical effects:
in the incentive method provided by the invention, the smart contracts first aggregate the model gradients of each round of federated learning, record the decentralized identifiers (DIDs) of the members participating in this round together with their types and actions according to the strategies sent by the DAO members, and calculate the total revenue of each DAO member in this round of federated learning; this incentivizes DAO members to contribute more data and computing power, and the strategy adjustment makes the Poisson game model built with the DAO members as participants reach a Nash equilibrium that maximizes the benefit of every type of DAO member.
Drawings
In order to more clearly illustrate the technical solution of the embodiments of the present invention, the following description will briefly explain the drawings of the embodiments.
FIG. 1 is a block diagram of the Web3.0 federated learning cloud architecture provided by the invention;
FIG. 2 is a flow chart of the incentive method of the Web3.0 federated learning cloud architecture provided by the invention;
FIG. 3 is a flow chart of step S6 of the incentive method of the Web3.0 federated learning cloud architecture provided by the invention;
FIG. 4 is a flow chart of step S7 of the incentive method of the Web3.0 federated learning cloud architecture provided by the invention;
FIG. 5 is a flow chart of the game analysis of the Web3.0 federated learning cloud architecture provided by the invention;
FIG. 6 is an effect diagram of the incentive method of the Web3.0 federated learning cloud architecture provided by the invention;
FIG. 7 is an effect diagram of the incentive method of the Web3.0 federated learning cloud architecture provided by the invention;
FIG. 8 is a gas overhead diagram of the smart contracts of the incentive method of the Web3.0 federated learning cloud architecture provided by the invention;
FIG. 9 is a stress test result diagram of the smart contracts of the incentive method of the Web3.0 federated learning cloud architecture.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding of the invention by those skilled in the art, but it should be understood that the invention is not limited to the scope of the embodiments; for those skilled in the art, all inventions that make use of the inventive concept fall within the protection scope of the present invention as defined in the appended claims.
In order to guarantee the training effect and running state of federated learning in Web3.0, the quality of the data sets and the blockchain computing power must be considered: only sufficient data and computing power can ensure that federated learning in Web3.0 runs smoothly and produces good results. However, because of the openness of Web3.0, DAO members can freely join or leave training and flexibly adjust their data size, which makes the number of members in each round uncertain and game analysis difficult.
To solve these problems, the invention provides a Web3.0 federated learning cloud architecture and an incentive method, which guarantee the training effect and running state of Web3.0 federated learning, incentivize DAO members to contribute sufficient data sets and blockchain computing power, promote wider participation and collaboration among members, and realize a network world of joint construction, joint governance, and shared value.
As shown in FIG. 1, the invention provides a Web3.0 federated learning cloud architecture, including:
the federated learning decentralized autonomous organization (FL DAO), which consists of a number of open, trusted smart contracts and DAO members, operates under a framework of cooperation and distributed data processing, and uses blockchain technology to realize secure and transparent information sharing among participants;
smart contracts, which are the core components of the FL DAO and establish the DAO management rules, the federated learning workflow, and the incentive mechanism;
members of the decentralized autonomous organization, who follow the principles of equality, voluntariness, and reciprocity, may freely choose to participate in federated learning and take on the roles of task publisher, data owner, or miner; task publishers publish federated learning tasks on the FL DAO and propose their budgets, data owners use local data for model training, and miners provide computing power for the operation of the smart contracts;
cloud service providers, which provide computing resources and services for the decentralized autonomous organization FL DAO, including blockchain services, data storage services, and cloud computing services.
The smart contracts and the decentralized autonomous organization members form the FL DAO and carry out Web3.0 federated learning; the cloud service provider provides blockchain, data storage, and cloud computing services for the FL DAO; the smart contracts aggregate the federated learning models and incentivize the DAO members to contribute more data and computing power, improving the quality of Web3.0 federated learning; and the established Poisson game model reaches a Nash equilibrium that maximizes the benefit of every participant. A toy sketch of such an incentive contract follows.
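To make the division of responsibilities concrete, the sketch below models the FL DAO's incentive contract as a plain Python class. This is only an illustrative assumption about structure; the patent specifies behavior, not an implementation, and the names FLDAOContract, submit_gradient, submit_compute, and settle_round are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Member:
    did: str                      # decentralized identifier (DID) of the DAO member
    data_amount: float = 0.0      # d_i: data contributed in this round
    compute: float = 0.0          # h_i: computing power contributed in this round
    gradient: Optional[List[float]] = None  # encrypted model gradient uploaded this round

@dataclass
class FLDAOContract:
    """Toy stand-in for the on-chain incentive contract; not the patent's actual contract code."""
    alpha: float    # incentive factor for data
    beta: float     # incentive factor for computing power
    budget: float   # B: reward paid per unit of contribution
    u: float        # training cost per unit of data
    v: float        # power cost per unit of computing power
    members: Dict[str, Member] = field(default_factory=dict)

    def submit_gradient(self, did: str, gradient: List[float], data_amount: float) -> None:
        member = self.members.setdefault(did, Member(did))
        member.gradient, member.data_amount = gradient, data_amount

    def submit_compute(self, did: str, compute: float) -> None:
        self.members.setdefault(did, Member(did)).compute = compute

    def settle_round(self) -> Dict[str, float]:
        """Pay each member U_i = C_i * B - (d_i * u + h_i * v), with C_i = alpha*d_i + beta*h_i."""
        rewards = {}
        for did, m in self.members.items():
            c_i = self.alpha * m.data_amount + self.beta * m.compute
            rewards[did] = c_i * self.budget - (m.data_amount * self.u + m.compute * self.v)
        return rewards
```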
As shown in FIG. 2, the invention provides an incentive method for the Web3.0 federated learning cloud architecture, including:
S1: establish a Poisson game model and design an incentive method according to the Web3.0 federated learning cloud architecture, prove the existence of a Nash equilibrium under the method, and then write the incentive method into several smart contracts deployed on the blockchain;
S2: members of the FL DAO select the role they will play in Web3.0 federated learning according to their personal needs;
S3: the task publisher publishes the Web3.0 federated learning training task on the FL DAO and sends the initial model to the blockchain;
S4: each data owner downloads the initial model from the blockchain, trains it using local private data, then encrypts the trained model gradient grad_i and uploads it to the blockchain, and sends the contributed data amount d_i to the smart contract;
S5: the miners contribute computing power to the FL DAO to ensure the normal execution of all smart contracts, and send the contributed computing power h_i to the smart contract;
S6: the smart contract aggregates all model gradients trained in the current round of Web3.0 federated learning and executes the incentive mechanism according to the received data amounts and computing power;
S7: DAO members select the role they will play in the next round of Web3.0 federated learning and adjust their strategy σ_i according to the strategy adjustment algorithm;
S8: the DAO members continue iterative training until the loss function converges or enough training rounds have been completed;
S9: the smart contract sends the final trained model to the task publisher. A sketch of one such round loop is given below.
As shown in FIG. 3, the specific process of S6 includes the following steps:
S601: the smart contract collects from the blockchain the Web3.0 federated learning strategy σ_i of every DAO member in this round;
S602: whether a DAO member participates in this round of federated learning is determined from the member's strategy, and the member's decentralized identifier (DID) is recorded in the set M;
S603: according to the DIDs in the set M, the model gradients grad_i trained by the DAO members in this round are collected, aggregated, and uploaded to the blockchain, where the aggregated model gradient is grad = (1/n) * Σ_{i=1}^{n} grad_i, grad_i is the gradient uploaded by each member, and n is the number of members participating in this round;
S604: according to the DIDs in the set M, the data amount d_i and computing power h_i contributed by each DAO member in this round of training are collected, and the contribution C_i and revenue U_i of each DAO member in this round are calculated, where C_i = d_i·α + h_i·β and U_i = C_i·B − (d_i·u + h_i·v); α and β are incentive factors, B is the unit budget of the task publisher, and u and v are the unit costs of data and computing power, respectively. A small numerical example of this settlement is given after this list.
as shown in fig. 4, the specific process of S7 includes the following steps:
s701: the task publisher judges whether the data volume and the calculation force used by the web3.0 federal learning of the present round accord with expectations, and if not accord with expectations, the excitation factors alpha and beta are adjusted, or the budget B is increased;
s702: other DAO members calculate web3.0 federal learning strategy σ for the next round i WhereinU i Is the benefit of DAO member i in the round, U i (d, h) is the benefit if the data amount is d and the calculation force is h;
s703: all DAO members adjust web3.0 federal learning strategies;
as shown in fig. 5, the present invention provides a game analysis method of the cloud architecture, which includes the following steps:
s101, defining a Poisson game of a Web3.0 federal learning cloud architecture, and writing five-tuple < n, T, r, C, U >;
s102, calculating the total profit function of the DAO member in the kth round of iteration, and writing out the expected profit of the DAO member with the type t in the kth round of selection action c;
s103, writing out a Poisson game model of the DAO member in the Web3.0 federal learning;
s104, writing out Nash equilibrium and presence evidence in the Poisson game model.
The specific process of S101 includes the following steps:
S1011: define the number n of DAO members in Web3.0 federated learning, the type set T and its probability distribution r, the action set C, and the benefit function U of each type of DAO member;
S1012: write out the five-tuple <n, T, r, C, U>. Here n represents the number of participants in each round of Web3.0 federated learning and is a random variable with mean λ that follows the Poisson distribution; T is the set of all participant types t, which in Web3.0 are the data owners (t = p), the miners (t = m), the dual-role members (t = e), and the quitters (t = q): data owners train the model with local data, miners provide computing power to run the smart contracts, dual-role members both train the model and provide computing power, and quitters opt out of training; r is the probability distribution over each type t ∈ T being selected by a participant, i.e. the probability that a participating DAO member becomes a data owner, miner, dual-role member, or quitter; C represents the set of actions c of the DAO members in Web3.0 federated learning, i.e. the data size used in the k-th round and the computing power provided to the smart contracts; U represents the per-round revenue of the DAO members participating in Web3.0 federated learning, and U(ω, c, t) represents the revenue of a participant of type t selecting action c, where ω is the set of actions of the other participants in this round of federated learning and ω_c is the number of other participants selecting action c. A small simulation of this participant model follows below.
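To see what the five-tuple means operationally, the short simulation below draws the number of participants in a round from a Poisson distribution with mean λ and then assigns each participant a type according to r. The numerical values of λ and r are illustrative assumptions, not values given by the patent.

```python
import numpy as np

def sample_round(lam: float = 50.0, seed: int = 0):
    """Draw one round's participants: n ~ Poisson(lam), with types drawn from r."""
    types = ["p", "m", "e", "q"]            # data owner, miner, dual-role member, quitter
    r = [0.4, 0.3, 0.2, 0.1]                # assumed type distribution r
    rng = np.random.default_rng(seed)
    n = int(rng.poisson(lam))               # number of participants this round
    drawn = rng.choice(types, size=n, p=r)  # each participant's type
    return n, {t: int((drawn == t).sum()) for t in types}

n, counts = sample_round()
print(n, counts)                            # the sampled head count and per-type counts
```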
the specific process of S102 includes the following steps:
s1021, setting relevant excitation parameters in Web3.0 federal learning, wherein alpha and beta are excitation factors set by a task publisher according to actual requirements of federal learning in each round, and B is budget set by the task publisher for each unit contribution degree;
s1022, calculating a total benefit function U (ω, c, t) for all types of DAO members when selecting respective actions, wherein ω is a set of actions of other participants in the present round of federal learning, c is an action of the DAO member, t is a type of the DAO member, U (ω, c, t) =p i -q i Wherein p is i Is the rewards sent by intelligent contracts after completing one round of federal learning, p i =C i ·B,C i Is the contribution degree of the DAO member in the round, C i =d i α+h i β,d i And h i The data size and the calculation power size of the DAO member in the current round are respectively, q i Is the cost related to federal learning, q i =d i ·u+h i V, u and v are the training cost per unit data and the power cost per unit calculation force, respectively, and furthermore, if the DAO member type is the data owner (t=p),this means that DAO members only use local data for training and do not provide computational power for the smart contract, if DAO members are of the type miners (t=m), this means DAO members only provide computational power for the smart contract and do not do model training, if DAO members are of the type thermals (t=e), this means DAO members both use local data for training and provide computational power for the smart contract, if DAO members are of the type dropers (t=q), the obvious benefit is 0, from the above, a detailed benefit function as described in equation (1) can be written, where α and β are respectively set incentive factors, B is the federal learning budget, d k And h k The data size and the calculation force size of the DAO member in the current round are respectively calculated, and u and v are respectively the training cost of each unit data and the electric power cost of each unit calculation force;
s1023, writing out expected benefits U (c|t, sigma) of DAO members with t type in the kth round of selection action c, wherein the calculation formula is shown as formula (2), and lambda is c Is the average of the number of players (irrespective of their type), ω, of the selection action c c The number of other participants that are the selection of action c, U (ω, c, t) is the total revenue function when the set of other participant actions is ω, the DAO member's action is c, and the type is t;
in Web3.0 federal learning, the number of participants per round conforms to poisson distribution, i.e., n-pi (λ), let σ t,c The conditional probability representing the player selection action c of type t, the number of players per round of selection action c (irrespective of their type) also conforms to the poisson distribution, i.e. n, according to the nature of the poisson distribution c ~π(λ c ),Wherein r is t Is the probability that DAO members choose to be of type t, and, in addition, ω is the probability that other DAO members (irrespective of their type) choose each action, ω c Representing the number of other players selecting action c, and the set of all possible ω's throughout the federal learning process is represented as Z, so that the expected benefit when a DAO member of type t selects action c is U (c|t, σ) by the strategy function σ, the calculation formula is as described in equation (2);
and S103, writing out a Poisson game model of the DAO member in the Web3.0 federal learning, wherein the Poisson game model is specifically as follows:
participants: the k-th round of Web3.0 Federally learned DAO members, wherein the number n of the members is uncertain and accords with poisson distribution n-pi (lambda);
policy set: federally learning policies by DAO members at the kth round, including type t and action c of the participants;
expected benefits: the expected benefit of a DAO member of type t in the kth round of selecting action c is
In S104, the Nash equilibrium in the Poisson game model and its existence proof are written out as follows:
the equilibrium condition is: in the n-th iteration of Web3.0 federated learning, there exists a strategy set σ* = {σ_{t,b} | t ∈ T, b ∈ C} for every type of DAO member such that every action b with σ_{t,b} > 0 maximizes the expected benefit U(b | t, σ*) over all actions in C,
where σ_{t,c} represents the conditional probability that a player of type t selects action c, U(c | t, σ) is the expected benefit when a DAO member of type t selects action c through the strategy function σ, T is the set of all participant types t, and C is the set of actions c of the DAO members in Web3.0 federated learning.
to solve the above model equalization problem, it is necessary to determine the optimal strategy for each participant to maximize their respective benefits, which can be expressed as an optimization problem, with the goal of finding the optimal strategy in each round for each type of DAO member, namely to maximize their benefits:
wherein the method comprises the steps ofλ c Is the average of the number of players (irrespective of their type), ω, of the selection action c c The number of other participants who are the selection action c, U (ω, c, t) is the total profit function when the set of other participant actions is ω, the DAO member's action is c, the type is t, r t Is the probability of DAO member selection to be type t, σ t,c Representing the conditional probability of a player-selected action C of type t, C representing the set of actions C that DAO members learn in Web3.0 federation, Z representing the set of all possible ω's throughout the federation learning process,
to further simplify the computation, the properties of the logarithmic function can be exploited to convert the product into a sum by taking the logarithm, lettingAccording to Stirling equation->Can obtainλ c Is the average of the number of players (irrespective of their type), ω, of the selection action c c The number of other participants selecting action C, U (ω, C, t) is the total profit function for the DAO member when the set of other participant actions is ω, and C represents the set of actions C that the DAO member learns on Web3.0 federally;
furthermore, the probability of DAO member selection action c of type t is r t σ t,c Thus, the expected number of players of type t for action c is selected to be nr t σ t,c Total player quantity ω for each round of selection action c c Is the sum of all types of numbers, i.eThus->U (ω, c, t) is the total benefit function when the set of other participant actions is ω, the DAO member's action is c, the type is t, r t Is the probability of DAO member selection to be type t, σ t,c Representing the conditional probability of a player-selected action C of type T, which is the set of all types T for each participant, C represents the set of actions C that DAO members learn in Web3.0 federally, since the order of the elements in the original action set C is not fixed, we can sort actions C e C and T e T so that g (σ) monotonically increases, thusU (ω, c, t) is the total benefit function when the set of other participant actions is ω, the DAO member's action is c, the type is t, r t Is the probability of DAO member selection to be type t, σ t,c Representing the conditional probability of player selection action C of type T, T being the set of all types T of each participant, C representing the set of actions C of DAO members learning federally at Web3.0, and +.>Since g (σ) is monotonically increasing, maximizing f (σ) can translate into maximizing g (σ) problem, let h (σ) = -g (σ), maximizing g (σ) can translate into minimizing h (σ) problem, and thus the equalization condition can translate into:
wherein the method comprises the steps ofU (ω, c, t) is the total benefit function when the set of other participant actions is ω, the DAO member's action is c, the type is t, r t Is the probability of DAO member selection to be type t, σ t,c Representing the conditional probability of a player-selected action C of type T, T being the set of all types T for each participant, C representing the set of actions C that DAO members learn federally at web3.0, to solve this optimization problem, a constrained optimization problem with n variables and k constraints can be converted to an unconstrained optimization problem with n+k given variables using the Lagrangian multiplier method, setting μ and ν coefficients of the Lagrangian multiplier,
thus, the Lagrangian function of the optimization problem (5), i.e
U (ω, c, t) is the total benefit function when the set of other participant actions is ω, the DAO member's action is c, the type is t, r t Is the probability of DAO member selection to be type t, σ t,c Representing the conditional probability of player-selected action C of type T, T being the set of all types T for each participant, C representing the set of actions C that DAO members learn federally at Web3.0, μ and ν being coefficients of Lagrangian multipliers, KKT conditions being
After solving the KKT condition equation, the method can obtainσ t,c Representing the conditional probability of player selection action c of type t, U (ω, c, t) being the total profit function for the DAO member's action c when the set of other participant actions is ω, i.e., in the present inventionThe strategy in the excitation method adjusts the algorithm so that the poisson game satisfies the equalization condition, and the model can realize equalization for each type of DAO member.
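As an illustration of the minimization step, the sketch below uses scipy.optimize.minimize with SLSQP (which handles the equality and bound constraints internally, playing the role of the Lagrangian/KKT machinery) to choose conditional probabilities σ_{t,c} that maximize a toy expected-benefit objective. The objective, the payoff matrix, and the type distribution are placeholders, not the patent's equations (1), (2), or (5).

```python
import numpy as np
from scipy.optimize import minimize

types, actions = ["p", "m", "e", "q"], ["a0", "a1", "a2"]
r = np.array([0.4, 0.3, 0.2, 0.1])                 # assumed type probabilities r_t
payoff = np.array([[3., 1., 0.],                   # toy payoff for (type, action); illustrative only
                   [1., 2., 0.],
                   [2., 2., 0.],
                   [0., 0., 0.]])

def neg_objective(x):
    sigma = x.reshape(len(types), len(actions))    # sigma[t, c] = P(action c | type t)
    # Toy expected benefit: type-weighted payoff minus a small congestion penalty
    # on the expected share of players choosing each action.
    crowd = r @ sigma
    value = float(np.sum(r[:, None] * sigma * payoff) - 0.5 * np.sum(crowd ** 2))
    return -value                                  # minimizing the negative maximizes the benefit

n_var = len(types) * len(actions)
x0 = np.full(n_var, 1.0 / len(actions))
constraints = [{"type": "eq",
                "fun": (lambda x, i=i: x.reshape(len(types), -1)[i].sum() - 1.0)}
               for i in range(len(types))]         # each row of sigma must sum to 1
result = minimize(neg_objective, x0, method="SLSQP",
                  bounds=[(0.0, 1.0)] * n_var, constraints=constraints)
sigma_star = result.x.reshape(len(types), len(actions))
print(np.round(sigma_star, 3))                     # one feasible strategy profile maximizing the toy objective
```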
The incentive method provided by the invention can increase the data amount and computing power contributed by the DAO members participating in Web3.0 federated learning, improve the quality of Web3.0 federated learning, and is effective, low-cost, and robust, making it suitable for the Web3.0 federated learning cloud architecture.
FIG. 6 to FIG. 9 show the experimental and simulation evaluation of the Web3.0 federated learning cloud architecture and incentive method provided by the invention, namely their effectiveness in increasing the data amount and computing power, and the low cost and robustness of the smart contracts.
The effect of the incentive method provided by the invention is shown in FIG. 6: although the total data usage exceeds 95% after 25 rounds of training, the total data usage of federated learning reaches 90% after about 5 rounds with the incentive method, compared with 15 rounds without the mechanism. In addition, the flexibility of the incentive method lies in that the values of α, β, and B can be adjusted: compared with fixed values of α and β, the incentive method can promote a rapid increase in the data scale in the early stage of federated learning; compared with a fixed B, the incentive method can continue to produce an incentive effect in the later stage of federated learning rather than converging, which of course depends on whether the task publisher is willing to increase the budget in the later stage. Furthermore, the incentive method yields a larger total data usage at the same number of rounds. Since the accuracy of federated learning models is related to sample size, the incentive method has a positive impact on model quality.
The effect of the incentive method provided by the invention is shown in FIG. 7: the incentive method leads to a reduction in computing power in the first few rounds, because task publishers tend to increase the proportion of the data size in the contribution coefficients to motivate more data usage in the early stage of federated learning. When the data size reaches 90%, the task publisher starts to increase the coefficient of computing power, and the contributed computing power grows faster. Because of cost constraints, DAO members are no longer willing to increase computing power beyond a certain level, so the later growth of computing power is smaller. Overall, however, the incentive method yields higher computing power from about round 12 onward, and the final scale of computing power is also superior to the case without the incentive mechanism. Furthermore, α, β, and B in the incentive method are adjustable, and the computing power will increase with the data size, depending on the requirements.
The gas cost of the smart contracts in the cloud architecture provided by the invention is shown in FIG. 8 and comprises the contract deployment cost, the contract execution cost, and the internal transaction burn cost. The contract deployment overhead is incurred when deploying contracts on the blockchain, the contract execution overhead is incurred when executing functions or operations in the contracts, and the internal transaction overhead is incurred when one contract invokes another. The model aggregation contract and the revenue calculation contract are relatively cheap, while the strategy adjustment contract requires complex calculations and therefore has higher overhead, which is still within acceptable limits.
The stress test results of the smart contracts in the cloud architecture provided by the invention are shown in FIG. 9: at all tested TPS levels the transaction success rate is 100%, and the average latency increases gradually as TPS increases. Likewise, the strategy adjustment contract has higher latency than the other contracts because of its complex computation. At about 4500 TPS, the latency of the model aggregation contract is 946 milliseconds, the average latency of the revenue calculation contract is 1005 milliseconds, and the average latency of the strategy adjustment contract is 1891 milliseconds. These figures illustrate the low cost and robustness of the incentive method provided by the invention.

Claims (6)

1. A Web3.0 federated learning cloud architecture, comprising a federated learning decentralized autonomous organization, local training, mining, smart contracts, and cloud service providers;
the federated learning decentralized autonomous organization is connected to mining, smart contracts, and local training respectively; the cloud service provider is connected to mining through communication services, to local training through storage services, and to the smart contracts through blockchain services; the smart contracts are connected to local training through data aggregation and to mining through computing power.
2. The Web3.0 federated learning cloud architecture of claim 1, wherein the federated learning decentralized autonomous organization comprises decentralized identifiers (DIDs) and smart contracts; the smart contracts are each connected to a number of DIDs; mining comprises a number of miners; local training includes a number of data owners and models; and each data owner is connected to a model.
3. A Web3.0 federated learning incentive method based on the Poisson game, characterized by comprising the following steps:
step S1: according to a Web3.0 federated learning cloud architecture, a Poisson game model is established and an incentive method is designed, the existence of a Nash equilibrium under the incentive method is proved, and the incentive method is written into several smart contracts deployed on the blockchain;
step S2: members of the FL DAO select the role they will play in Web3.0 federated learning according to their personal needs;
step S3: the task publisher publishes the Web3.0 federated learning training task on the FL DAO and sends the initial model to the blockchain;
step S4: each data owner downloads the initial model from the blockchain, trains it using local private data, then encrypts the trained model gradient grad_i and uploads it to the blockchain, and sends the contributed data amount d_i to the smart contract;
step S5: the miners contribute computing power to the FL DAO to ensure the normal execution of all smart contracts, and send the contributed computing power h_i to the smart contract;
step S6: the smart contract aggregates all model gradients grad_i trained in the current round of Web3.0 federated learning and executes the incentive mechanism according to the received data amounts and computing power;
step S7: DAO members select the role they will play in the next round of Web3.0 federated learning and adjust their strategy σ_i according to the strategy adjustment algorithm;
step S8: the DAO members continue iterative training until the loss function converges or enough training rounds have been completed;
step S9: the smart contract finally sends the trained model to the task publisher.
4. The Web3.0 federated learning incentive method based on the Poisson game according to claim 1, wherein step S6 comprises the following sub-steps:
step S61: the smart contract collects from the blockchain the Web3.0 federated learning strategy σ_i of every DAO member in this round;
step S62: whether a DAO member participates in this round of federated learning is determined from the member's strategy, and the member's decentralized identifier (DID) is recorded in the set M;
step S63: according to the DIDs in the set M, the model gradients grad_i trained by the DAO members in this round are collected, aggregated, and uploaded to the blockchain,
the formula of the aggregated model gradient being:
grad = (1/n) * Σ_{i=1}^{n} grad_i,
wherein grad_i is the gradient uploaded by each member and n is the number of members participating in this round;
step S64: according to the DIDs in the set M, the data amount d_i and computing power h_i contributed by each DAO member in this round of training are collected, and the contribution C_i and revenue U_i of each DAO member in this round are calculated.
5. The Web3.0 federated learning incentive method based on the Poisson game according to claim 1, wherein step S7 comprises the following sub-steps:
step S71: the task publisher judges whether the data amount and computing power used in this round of Web3.0 federated learning meet expectations; if not, the incentive factors α and β are adjusted, or the budget B is increased; if so, the method proceeds to step S72;
step S72: the other DAO members compute their Web3.0 federated learning strategy for the next round by choosing the data amount and computing power that maximize their revenue, σ_i = argmax_{(d,h)} U_i(d, h),
wherein U_i is the revenue of DAO member i in this round and U_i(d, h) is the revenue when the data amount is d and the computing power is h;
step S73: all DAO members adjust their Web3.0 federated learning strategies.
6. The Web3.0 federated learning incentive method based on the Poisson game according to claim 1, wherein the loss function convergence condition of step S8 is adjusted as follows:
in the n-th iteration of Web3.0 federated learning, there exists a strategy set σ* = {σ_{t,b} | t ∈ T, b ∈ C} for every type of DAO member such that every action b with σ_{t,b} > 0 maximizes the expected benefit U(b | t, σ*) of a type-t DAO member over all actions in C,
wherein σ_{t,c} represents the conditional probability that a player of type t selects action c, and U(c | t, σ*) is the expected benefit when a DAO member of type t selects action c;
in the Poisson game model <n, T, r, C, U>, n represents the number of participants in each round of Web3.0 federated learning; it is a random variable with mean λ that follows the Poisson distribution;
T represents the set of all participant types t, which in Web3.0 are the data owners, the miners, the dual-role members, and the quitters: the data owners train the model with local data, the miners provide computing power to run the smart contracts, the dual-role members both train the model and provide computing power, and the quitters choose to exit training;
r represents the probability distribution over each type t ∈ T being selected by a participant;
C represents the set of actions c of the DAO members in Web3.0 federated learning;
U represents the per-round revenue of the DAO members participating in Web3.0 federated learning;
U(ω, c, t) represents the benefit of a participant of type t selecting action c, wherein ω is the set of actions of the other participants in this round of federated learning and ω_c is the number of other participants selecting action c.
CN202310827882.8A 2023-07-07 2023-07-07 Web3.0 federated learning cloud architecture and incentive method Active CN117114126B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310827882.8A CN117114126B (en) 2023-07-07 2023-07-07 Web3.0 federal learning cloud architecture and excitation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310827882.8A CN117114126B (en) 2023-07-07 2023-07-07 Web3.0 federal learning cloud architecture and excitation method

Publications (2)

Publication Number Publication Date
CN117114126A true CN117114126A (en) 2023-11-24
CN117114126B CN117114126B (en) 2024-05-31

Family

ID=88806277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310827882.8A Active CN117114126B (en) 2023-07-07 2023-07-07 Web3.0 federal learning cloud architecture and excitation method

Country Status (1)

Country Link
CN (1) CN117114126B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113657608A (en) * 2021-08-05 2021-11-16 浙江大学 Excitation-driven block chain federal learning method
CN113992676A (en) * 2021-10-27 2022-01-28 天津大学 Incentive method and system for layered federal learning under terminal edge cloud architecture and complete information
CN114048515A (en) * 2022-01-11 2022-02-15 四川大学 Medical big data sharing method based on federal learning and block chain
CN116128051A (en) * 2022-11-08 2023-05-16 浙江大学 Excitation-driven on-chain semi-asynchronous federal learning method


Also Published As

Publication number Publication date
CN117114126B (en) 2024-05-31

Similar Documents

Publication Publication Date Title
CN112367109B (en) Incentive method for digital twin-driven federal learning in air-ground network
Chai et al. A hierarchical blockchain-enabled federated learning algorithm for knowledge sharing in internet of vehicles
CN111966698B (en) Block chain-based trusted federation learning method, system, device and medium
CN111931242B (en) Data sharing method, computer equipment applying same and readable storage medium
CN108364190B (en) Mobile crowd sensing online excitation method combined with reputation updating
CN110490335A (en) A kind of method and device calculating participant&#39;s contribution rate
CN112770291A (en) Distributed intrusion detection method and system based on federal learning and trust evaluation
CN103365953B (en) Inherit user to evaluate
CN112437690A (en) Determining action selection guidelines for an execution device
CN115345317B (en) Fair reward distribution method facing federal learning based on fairness theory
Shi et al. Fee-free pooled mining for countering pool-hopping attack in blockchain
Cai et al. 2cp: Decentralized protocols to transparently evaluate contributivity in blockchain federated learning environments
Yin et al. A game-theoretic approach for federated learning: A trade-off among privacy, accuracy and energy
Li et al. A difficulty-aware framework for churn prediction and intervention in games
Ilias et al. Machine learning for all: A more robust federated learning framework
CN112533681B (en) Determining action selection guidelines for executing devices
CN117114126B (en) Web3.0 federated learning cloud architecture and incentive method
Zhang et al. AI for global climate cooperation: modeling global climate negotiations, agreements, and long-term cooperation in RICE-N
CN117196058A (en) Fair federation learning method based on node contribution clustering
CN116451806A (en) Federal learning incentive distribution method and device based on block chain
Lee et al. Pooled mining makes selfish mining tricky
US11478716B1 (en) Deep learning for data-driven skill estimation
Barroso et al. Market power issues in bid-based hydrothermal dispatch
Elliott Nash equilibrium of multiple, non-uniform bitcoin block withholding attackers
CN114528992A (en) Block chain-based e-commerce business analysis model training method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant