CN115640305B - Fair and reliable federated learning method based on blockchain - Google Patents

Fair and reliable federated learning method based on blockchain

Publication number: CN115640305B
Application number: CN202211651581.6A
Authority: CN (China)
Legal status: Active (granted)
Inventors: 古天龙, 王梦圆, 李龙, 李晶晶, 郝峰锐
Current and original assignee: Jinan University
Other versions: CN115640305A (Chinese)
Classification: Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a fair and reliable federated learning method based on a blockchain, which comprises the following steps: the model demander issues a training task on the blockchain and a transaction contract delivers the task; the client trains the global model to generate local model parameters, and encrypts and transmits the local model parameters to the corresponding node on the blockchain; the corresponding node propagates and verifies the encrypted local model parameters; after verification, the corresponding node aggregates the local model parameters, updates the global model according to the aggregation result, generates a new block based on the update result, and broadcasts the new block; all nodes verify the new block and reach consensus; the incentive contract calculates the contribution of each client based on the consensus result and generates the latest global model; steps S2-S6 are repeated until the training end condition is met, yielding the optimized model; finally, the transaction contract passes the optimized global model to the model demander.

Description

Fair and reliable federated learning method based on blockchain
Technical Field
The invention relates to the technical field of mobile communication, and in particular to a fair and reliable federated learning method based on a blockchain.
Background
Federated learning was first proposed by Google in 2016 to solve the privacy-disclosure problem of a server collecting data to train a model. It is essentially a distributed machine learning framework that can complete global model training without directly acquiring client data, thereby effectively protecting data privacy. In addition, it does not need to transmit client data to the server, which reduces the communication overhead caused by data transmission. Federated learning enables the design and training of cross-institution and cross-sector machine learning models and algorithms, and has therefore received great attention and been applied to fields such as industrial manufacturing, healthcare, and product sales. However, traditional federated learning also has significant limitations, such as server single-point-of-failure problems, an untraceable training process, and inefficient incentive mechanisms.
Blockchain, a data-sharing technology with the advantages of decentralization, tamper-resistance, and traceability, has received wide attention in recent years and has been used to improve federated learning, yielding blockchain-based federated learning (BCFL). In the BCFL framework, the decentralized architecture of the blockchain eliminates traditional federated learning's dependence on a central server and implements decentralized model aggregation. The distributed ledger ensures that the record of the federated training process is tamper-proof and undeniable, and client behavior can be tracked and audited via the traceability of the data, improving the credibility of the system and the clients. In addition, by means of a blockchain incentive mechanism, clients are incentivized, according to their performance during training (such as the amount of data provided, the performance of the local model, and communication delay), to participate actively, for example by improving their reputation or granting them a better-performing global model, so that more training data and better local models are provided and the performance of the global model is improved.
As a new paradigm of machine learning, BCFL has typical advantages but also unsolved problems. From the point of view of trusted AI, machine learning is required to meet the requirements of privacy, accountability, fairness, and so on. BCFL provides privacy and traceability but does not guarantee attribute fairness: the global model trained by BCFL may be biased when applied to domain decisions or predictions. To mitigate bias, the prior art uses the sensitive attribute values of all users, but the goal of BCFL is to protect privacy precisely by not granting access to user data; combined with characteristics such as cross-device deployment and data heterogeneity in various application fields, ensuring the attribute fairness of a BCFL-trained model is therefore a great challenge. Nevertheless, in order to enhance the trust of society and the public in BCFL and deepen the application of AI in people's work and life, it is necessary to enhance the attribute fairness of the global model while protecting users' local data privacy.
Disclosure of Invention
In order to solve the problem that the global model lacks attribute fairness in the prior art, the invention provides a fair and reliable federated learning method based on a blockchain, which not only ensures the privacy of clients' local data and the traceability of training, but also remarkably enhances the attribute fairness in decision making of the trained model obtained by the model demander.
In order to achieve the technical purpose, the invention provides the following technical scheme: a blockchain-based fair and trusted federated learning method, comprising:
s1, a model demander issues a training task and a transaction contract transfer task on a blockchain, wherein the transaction contract comprises an initial global model and training ending conditions;
s2, training a global model by a client, generating local model parameters, and encrypting and transmitting the local model parameters to corresponding nodes on a blockchain, wherein the global model comprises an initial global model and a latest global model, the initial training is aimed at the initial global model, and the subsequent training is aimed at the corresponding latest global model;
s3, the corresponding node propagates and verifies the encrypted local model parameters;
s4, the corresponding node aggregates the local model parameters after verification, updates the corresponding global model according to the aggregation result, generates a new block based on the update result, and broadcasts the new block;
s5, all nodes verify the new block and reach consensus;
s6, the incentive contract calculates the contribution of the client based on the consensus result and generates the latest global model;
s7, repeating the steps S2-S6 until the training ending condition is met, and obtaining an optimized model;
s8, the transaction contract transmits the optimization model to a model demander.
Optionally, the transaction contract further includes a learning rate and an optimizer.
Optionally, the process of training the global model by the client includes:
acquiring local data and a global model at the client, training the global model with the local data through a fairness sampling algorithm and a fairness constraint function, and generating local model parameters.
Optionally, the process of propagating and verifying the encrypted local model parameters by the corresponding node includes:
the corresponding node broadcasts the encrypted local model parameters to all nodes through the gossip protocol; the nodes decrypt and verify the encrypted local model parameters, checking that the signature and the components of the local model parameters are valid; if so, the verification passes, and the local model parameters that pass verification are added to the corresponding transaction pools.
Optionally, the process of aggregating the local model parameters after the verification is completed by the corresponding node includes:
when the number of verified local model parameters in the transaction pool of the corresponding node reaches a threshold value, the corresponding node performs model aggregation on the local model parameters in the transaction pool through an aggregation mechanism based on dual weights, wherein the aggregation weight of each set of local model parameters is adaptively adjusted according to its attribute fairness metric and its staleness degree.
Optionally, the process of verifying and consensus by all nodes for the new block includes:
the corresponding node broadcasts the new block to all nodes, and the nodes receiving the new block check the information of the new block and reach consensus in a voting mode; wherein the information of the new block includes: signature of the new block, data of local model parameters, verification condition of the local model parameters and aggregation condition of the global model.
Optionally, the process in which the incentive contract calculates the contribution of the client and generates the latest global model includes:
after consensus is reached, the incentive contract is triggered; the incentive contract obtains the performance of each client's local model and the nodes' scores for the client, calculates the client's contribution in the current round of training according to the performance and the scores, generates the latest global model according to the contribution result, and distributes the latest global model to the client.
The invention has the following technical effects:
1. The method takes the attribute fairness index into account in several links, such as model aggregation and client contribution evaluation, thereby improving the attribute fairness of the model obtained by the model demander. The dual-weight aggregation mechanism also considers the staleness degree of the local models, improving the aggregation efficiency of the global model. By combining the multidimensional evaluation indexes provided by the self-reporting-based method and the evaluation-based method, the contribution of each client can be measured objectively and comprehensively, and the incentive mechanism can provide model rewards that meet the requirements of different clients.
2. The invention builds a fair and reliable federated learning system based on the blockchain, solving the problems that federated learning depends on a central server (causing single-point failure) and that lack of trust makes clients unwilling to participate in training. It can significantly improve the attribute fairness of the global model in decision making while ensuring model accuracy, and has good aggregation efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an overall structure according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a data structure of a distributed ledger according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an overall workflow provided in an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention discloses a fair and reliable federated learning method based on a blockchain, relating to the technical field of mobile communication. The method comprises the following steps: the model demander issues a training task and the transaction contract delivers the task; each client trains a local model locally and sends the encrypted local model parameters to a blockchain node; the nodes propagate and verify the local model parameters; the nodes aggregate the local model parameters, generate new blocks, and broadcast them to the other nodes; the other nodes verify the blocks and reach consensus; the incentive contract calculates the contributions of the clients and distributes model rewards; these steps are repeated until the training end condition is met; finally, the transaction contract returns the model to the model demander. The invention realizes a decentralized and traceable training process by means of the blockchain, ensuring the privacy, accountability, and stability of trusted AI, while the clients' local training, blockchain aggregation, and incentives give the system attribute fairness, finally realizing fair and trusted federated learning.
In order to achieve the technical purpose, the invention discloses a fair and reliable federated learning method based on a blockchain, which comprises the following steps:
step 1: model requirers issue training tasks, and trade contracts deliver tasks. Wherein the trade contracts define important content related to federal learning training, such as initial models, learning rates, optimizers, and training end conditions. If the model requirer provides incomplete content, then the task will not be initiated, i.e., the transaction contract execution fails.
Step 2: the client side locally trains and generates a local model and sends encrypted local model parameters to the blockchain node. The client uses the local data and the latest global model to train and generate a local model, and the fairness of the local model is improved through a fairness sampling algorithm and a fairness constraint function.
Step 3: the nodes propagate and validate the local model parameters. After the nodes receive the local model parameters, the eight diagrams protocol is used for diffusing the local model parameters among the nodes, decrypting and verifying the local model parameters, and checking whether the signatures, the components and the like of the local model parameters are valid or not. If so, the local model parameters are placed in its own transaction pool.
Step 4: the nodes aggregate the local model parameters to generate new blocks and broadcast them to other nodes. When a transaction pool of the node contains a certain number of local model parameters, the node adopts an aggregation mechanism based on double weights to conduct model aggregation. The aggregation mechanism adaptively adjusts the aggregation weight of the local model parameters according to the attribute fairness metrics and the delay degree of the local model parameters.
Step 5: other nodes verify the block and agree. After receiving the new block, the node will verify the block, and check the signature of the block, the number of local model parameters, the verification condition of the local model parameters, and the aggregation condition of the global model. Nodes reach consensus through voting, namely, vote to the generator of a block after verifying that the block is valid, and terminate the verification of the block in the same batch. The node that obtains the highest ticket number will obtain the accounting right, and other nodes store their blocks in its own distributed ledger.
Step 6: incentive contracts calculate the client's contribution and distribute model rewards. The method comprises the steps of collecting performance of a local model of a client and reputation scores of nodes on the client by an incentive contract, calculating contribution of each client in the training process of the round, and distributing a corresponding sparse global model to the client according to calculation results.
Step 7: repeating the steps 2-6 until the training ending condition is met.
Step 8: the trade contracts return the model to the model demander. Wherein the transaction contract downloads the most current global model back to the model demander.
The above will now be explained in detail:
fig. 1 is a schematic diagram of the overall structure of the blockchain-based fair and reliable federated learning method of this embodiment. As shown, the system mainly comprises the model demander, the clients, and the blockchain network. Since the system completes model training mainly through the nodes, smart contracts, and distributed ledger of the blockchain network, each is described below:
1. The model demander is the publisher of the federated learning task, who proposes to jointly build a machine learning model to make predictions or decisions.
2. The clients are the trainers of the local models in federated learning; their main work is to train local models based on the received global model, using the training data they have collected and stored, and to send the updated local model parameters to the blockchain network for global model aggregation.
3. The nodes are the maintainers of the blockchain, responsible for receiving and validating local model parameters, performing model aggregation, and generating new blocks to record the global model. A new block is appended to the existing chain after each node authenticates it and consensus is reached.
4. The smart contracts are described in a computer language and are automatically executed according to preset trigger conditions, without a trusted third party. Two smart contracts are designed in the system: the transaction contract is used to issue federated learning tasks and return the global model, and the incentive contract is used to calculate each client's contribution in the FL and distribute model rewards according to the calculation results.
5. The distributed ledger is used to store the blocks; its contents are shared and synchronized by all nodes, and its data structure is shown in fig. 2. A block consists of a block header and a block body. The block header includes the number of the block, the global model, the hash value of the previous block, and the ID and signature of the node that mined the block. The block body contains the verified local model parameters used to aggregate the global model. To prevent tampering, each set of local model parameters carries both the client's signature and the signature of the node that verified it, i.e., double protection.
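A minimal sketch of this block data structure, with assumed field names, could be:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LocalUpdate:
    """A verified local update in the block body; it carries two
    signatures (client and verifying node), i.e. double protection."""
    client_id: str
    parameters: bytes
    client_signature: bytes
    node_signature: bytes

@dataclass
class BlockHeader:
    """Block number, global model, previous-block hash, and the
    ID/signature of the node that mined the block."""
    number: int
    global_model: bytes
    prev_hash: str
    miner_id: str
    miner_signature: bytes

@dataclass
class Block:
    header: BlockHeader
    body: List[LocalUpdate] = field(default_factory=list)
```

Each node's distributed ledger would hold a chain of such `Block` records.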
Fig. 3 is an overall workflow diagram of the blockchain-based fair and reliable federated learning method of this embodiment. The flow of this embodiment is as follows:
s1: model requirers trigger trading contracts, and according to the specifications of the contracts, FL model training tasks are set up, wherein important content related to FL training, such as initial models, learning rates, optimizers, training end conditions and the like are specified. If the model requirer provides incomplete content, then the task will not be initiated, i.e., the transaction contract execution fails. If the task is successfully initiated, the trading contract will automatically pass the FL training task to all online clients and nodes.
S2: after receiving the task, the client trains to obtain a local model by using the local data and the latest global model, namely the global model with different compression degrees in the step S6, signs the local model parameters by using a private key, and then sends the local model parameters to the blockchain node, and the initial training is performed aiming at the initial model:
s2.1: after obtaining the latest global model, the client tests it on the local data and dynamically adjusts the sampling frequency of each sensitive-attribute group according to the model's attribute-fairness performance on that data. By increasing the sampling frequency of sensitive-attribute groups with poor test results and reducing that of groups with good test results, the effective sample numbers of the sensitive-attribute groups are balanced, improving the attribute fairness of the local model.
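A minimal sketch of such dynamic sampling adjustment follows, assuming groups are reweighted in proportion to how poorly the model tests on them (an illustrative rule, not the patent's exact formula):

```python
def sampling_weights(group_score: dict) -> dict:
    """Rebalance sampling across sensitive-attribute groups: groups on
    which the current global model tests poorly get a higher sampling
    frequency.  Weighting each group by (1 - score) is an assumed rule."""
    badness = {g: 1.0 - s for g, s in group_score.items()}
    total = sum(badness.values())
    if total == 0:
        # Every group already perfect: sample uniformly.
        n = len(badness)
        return {g: 1.0 / n for g in badness}
    return {g: b / total for g, b in badness.items()}
```

The group with the worse test score receives the larger sampling weight, and the weights form a distribution.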
S2.2: the client trains the local model by means of the fairness constraint function shown in equation (1). The function consists of two parts: the first part measures the degree of difference between the predicted values and the true values of the local model on the overall data, and the second part measures the degree of difference on the data of each sensitive-attribute group.
(1) [formula image not reproduced]
wherein the symbols of equation (1) denote, respectively: the loss function of the client; the true label of the q-th sample; the features of the q-th sample; the predicted value for the q-th sample; the total number of training samples; the cross-entropy loss between the true and predicted values over the overall data; [Y], the value set of the true labels; the sensitive attribute of the q-th sample; the mean value of the sensitive attribute over the whole sample; and, for each predicted value y, the difference between the group's predicted probability and the average predicted probability.
S3: after a node receives local model parameters, it diffuses them among the nodes using the gossip protocol, decrypts and verifies them, and checks whether the signatures, components, and so on of the local model parameters are valid. If so, the node places the local model parameters in its own transaction pool.
S4: after a certain number of local model parameters are contained in the transaction pool of the node, the node adopts an aggregation mechanism to conduct model aggregation, and a global model is obtained. Further, the node signs the local model parameters used for aggregation and the obtained global model by using the private key, packages the local model parameters to generate a new block, and broadcasts the new block to other nodes:
s4.1: when the transaction pool of the node contains L local model parameters, the node calculates, for each client i in round t, a fairness weight related to the fairness of the local model parameters according to equation (2), and a time weight related to the training delay according to equation (3).
(2), (3) [formula images not reproduced]
wherein the fairness weight of client i in round t is computed from the accuracy and the attribute fairness of the local model trained by client i in round t; the time weight of client i in round t is computed from the time at which client i received its round t-1 model reward and the time at which its local model was uploaded, using the time weight function of equation (4).
(4) [formula image not reproduced]
wherein b > 0 controls the degree of difference of the time weights among different clients.
S4.2: the node aggregates the L local model parameters according to equation (5), obtaining the latest local-model aggregate of node j in round t.
(5) [formula image not reproduced]
S4.3: the node updates the global model by means of equation (6), obtaining the global model of node j in round t, where the average delay parameter α_t is obtained from equation (7); the mixing hyperparameter α ∈ (0, 1) controls the influence of the average delay degree on the model aggregation weight.
(6), (7) [formula images not reproduced]
s5: after receiving the new block, the other nodes verify it, checking the signature of the block, the number of local model parameters, the verification status of the local model parameters, and the aggregation status of the global model. Nodes reach consensus through voting: after verifying that a block is valid, a node votes for the block's generator and stops verifying other blocks of the same batch. The node that receives the most votes obtains the accounting right, and the other nodes store its block in their own distributed ledgers.
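The vote tally in S5 can be sketched minimally as follows (tie-breaking by first occurrence is an assumption, not specified by the patent):

```python
from collections import Counter

def elect_block_generator(votes):
    """Each vote names the generator of a block the voter verified as
    valid; the generator with the most votes wins the accounting right."""
    tally = Counter(votes)
    return tally.most_common(1)[0][0]
```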
S6: after the new block is confirmed, the incentive contract is triggered; it collects the performance of each client's local model and the nodes' reputation scores for the client, calculates each client's contribution in this round of training, and distributes a correspondingly sparsified global model to the client according to the calculation result:
s6.1: given the multidimensional nature of client contribution, the incentive contract combines a self-reporting-based method and an evaluation-based method, and evaluates each client's contribution by the multidimensional evaluation index shown in equation (8).
(8) [formula image not reproduced]
wherein the contribution of client i in round t combines the contribution value obtained by the self-reporting-based method, calculated by equation (9) and normalized by equation (10), with the contribution value obtained by the evaluation-based method, calculated by equation (11) and normalized by equation (12);
wherein N is the number of clients, max{·} and min{·} return the maximum and minimum of the elements of a set, and CT_t denotes the set of clients participating in the round-t training process;
wherein node j assigns a score to client i, and the weighted average of the nodes' scores represents the nodes' assessment of the client's contribution, i.e., the client's reputation; K is the number of nodes.
S6.2: after the global model aggregation is completed, model rewards are issued according to the client contributions, and clients with different contributions obtain global model parameters with different degrees of compression, as shown in equation (13).
(13), (14) [formula images not reproduced]
wherein the model reward of client i in round t is obtained by applying the sparsification function sparse(·) to GM_t, the final global model of round t, at a compression rate derived from client i's contribution in round t, calculated by equation (14); the index(·) and sort(·) functions order the clients by contribution, ensuring that clients with large contributions obtain a global model with a low compression rate, and β controls the degree of difference in the compression rates.
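Since the images of equations (8) to (14) did not reproduce, the following sketch only illustrates the described scheme: min-max normalization of the two contribution measures, their combination, and rank-based compression rates so that larger contributions receive denser model rewards. The mixing coefficient `mix` and the exact ranking rule are assumptions:

```python
def normalize(values):
    """Min-max normalize contribution values across the round's clients,
    the form suggested by equations (10) and (12)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def contributions(self_reported, evaluated, mix=0.5):
    """Sketch of equation (8): combine the self-reporting-based and the
    evaluation-based contribution values after normalization; mix is an
    assumed weighting coefficient."""
    sr = normalize(self_reported)
    ev = normalize(evaluated)
    return [mix * a + (1 - mix) * b for a, b in zip(sr, ev)]

def compression_ratios(contrib, beta=0.5):
    """Sketch of equation (14): rank clients by contribution so that a
    larger contribution yields a lower compression rate (a denser model
    reward); beta controls the spread between ranks."""
    order = sorted(range(len(contrib)), key=lambda i: contrib[i], reverse=True)
    ratios = [0.0] * len(contrib)
    for rank, i in enumerate(order):
        ratios[i] = beta * rank / max(len(contrib) - 1, 1)
    return ratios
```

The top contributor ends up with compression rate 0 (the full global model); each lower rank gets a more heavily sparsified copy.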
S7: steps S2 to S6 are repeated until the training end condition is met.
S8: when the federated learning task is finished, the transaction contract is triggered again; it downloads the latest global model and returns it to the model demander, and the total contribution of each client over the whole training process is calculated.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and that the above embodiments and descriptions are merely illustrative of the principles of the present invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined in the appended claims. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (6)

1. A blockchain-based fair and trusted federated learning method, comprising:
s1, a model demander issues a training task and a transaction contract transfer task on a blockchain, wherein the transaction contract comprises an initial global model and training ending conditions;
s2, training a global model by a client, generating local model parameters, and encrypting and transmitting the local model parameters to corresponding nodes on a blockchain, wherein the global model comprises an initial global model and a latest global model, the initial training is aimed at the initial global model, and the subsequent training is aimed at the corresponding latest global model;
s3, the corresponding node propagates and verifies the encrypted local model parameters;
s4, the corresponding node aggregates the local model parameters after verification, updates the corresponding global model according to the aggregation result, generates a new block based on the update result, and broadcasts the new block;
s5, all nodes verify the new block and reach consensus;
s6, the incentive contract calculates the contribution of the client based on the consensus result and generates the latest global model;
s7, repeating the steps S2-S6 until the training ending condition is met, and obtaining an optimized model;
s8, transmitting the optimization model to a model demander by the transaction contract so as to realize fair optimization of the initial global model;
the process in which the incentive contract calculates the contribution of the client and generates the latest global model comprises:
after consensus is reached, the incentive contract is triggered; the incentive contract obtains the performance of each client's local model and the nodes' scores for the client, calculates the client's contribution in the current round of training according to the performance and the scores, generates the latest global model according to the contribution result, and distributes the latest global model to the client;
s6.1: the incentive contract combines the self-reporting based method and the evaluation based method to evaluate the contribution of the client by multi-dimensional evaluation metrics as shown in equation (8);
wherein the quantities in equation (8) are: the contribution of client i in round t; the contribution value obtained by client i in round t through the self-reporting-based method, calculated by equation (9); the normalized self-reporting-based contribution value, calculated by equation (10); the client contribution value obtained by client i in round t through the evaluation-based method, calculated by equation (11); and the normalized evaluation-based contribution value;
wherein N is the number of clients; max{·} and min{·} are functions that return the maximum and minimum of the elements in a set; CT_t denotes the set of clients participating in the round-t training process; and the remaining quantities are the local model obtained by client i through training in round t, the accuracy of that local model, and the attribute fairness of that local model;
wherein the quantities in equation (11) respectively denote the scores given by node j to client i, and the weighted average of the three represents the nodes' assessment of the client's contribution, i.e. the client reputation; K is the number of nodes;
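The S6.1 scheme (min-max normalization of a self-reported score combined with a node-reputation score) can be sketched as follows; the function names, the 0.5/0.5 mixing weight, and the plain average over node scores are illustrative assumptions, not the patent's exact equations (8)-(11):

```python
def min_max_normalise(values):
    """Min-max normalise a dict of raw scores into [0, 1]."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0  # guard against all scores being identical
    return {k: (v - lo) / span for k, v in values.items()}

def client_contributions(self_scores, node_scores, alpha=0.5):
    """Combine self-reporting-based and evaluation-based contributions.

    self_scores: {client_id: raw self-reported score, e.g. accuracy x fairness}
    node_scores: {client_id: [score given by each of the K nodes]}
    alpha:       assumed mixing weight between the two normalised scores
    """
    self_norm = min_max_normalise(self_scores)
    # client reputation: average the K node scores, then normalise
    reputation = min_max_normalise(
        {c: sum(s) / len(s) for c, s in node_scores.items()})
    return {c: alpha * self_norm[c] + (1 - alpha) * reputation[c]
            for c in self_scores}
```

After min-max normalization the best-scoring client maps to 1 and the worst to 0 on each dimension, so the combined contribution always lies in [0, 1].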
s6.2: after the global model aggregation is completed, the incentive contract issues model rewards according to the contribution of each client, and clients with different contributions obtain global model parameters with different degrees of compression, as shown in equation (12);
wherein the left-hand side of equation (12) represents the model reward of client i in round t; sparse(·) is the sparsification function; GM_t is the final global model of round t; and the remaining quantity is the compression rate of the round-t global model derived from the contribution of client i, calculated by equation (13);
wherein the index(·) and sort(·) functions order the clients by contribution, ensuring that clients with larger contributions obtain a global model with a lower compression rate, and β controls the degree of difference among the compression rates.
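One possible reading of the S6.2 reward rule, contribution-ranked compression rates followed by magnitude-based sparsification, is sketched below; the linear rank formula standing in for equation (13) and all names are assumptions:

```python
def compression_rates(contribs, beta=0.5):
    """Rank clients by contribution: the larger the contribution, the
    lower the compression rate (more of the global model is kept).
    beta controls how far apart the rates are spread."""
    order = sorted(contribs, key=contribs.get, reverse=True)
    n = len(order)
    return {c: beta * rank / n for rank, c in enumerate(order)}

def sparsify(weights, rate):
    """Zero out the `rate` fraction of smallest-magnitude weights."""
    k = int(rate * len(weights))
    if k == 0:
        return list(weights)
    # indices of the weights we keep: everything past the k smallest
    keep = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[k:])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]
```

The top-ranked client receives rate 0 (the uncompressed model), and each lower rank loses a further beta/n fraction of the smallest weights.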
2. The blockchain-based fair and trusted federal learning method of claim 1, wherein:
the transaction contract further includes a learning rate and an optimizer.
3. The blockchain-based fair and trusted federal learning method of claim 1, wherein:
the process of training the global model by the client comprises the following steps:
acquiring local data and the global model at the client, training the global model on the local data through a fairness sampling algorithm and a fairness constraint function, and generating the local model parameters.
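One hedged reading of the "fairness sampling algorithm and fairness constraint function" of claim 3: sample training batches evenly across groups of a sensitive attribute, and penalize the gap between per-group mean predictions. The grouping scheme, penalty form, and all names are illustrative assumptions:

```python
import random

def fair_sample(data, per_group):
    """data: {group: [(x, y), ...]}; draw the same number of examples
    from every sensitive-attribute group so no group dominates a batch."""
    batch = []
    for group, rows in data.items():
        batch.extend((group,) + random.choice(rows) for _ in range(per_group))
    return batch

def fairness_penalty(preds_by_group):
    """Squared gap between the largest and smallest per-group mean
    prediction; added to the training loss as the fairness constraint."""
    means = [sum(p) / len(p) for p in preds_by_group.values()]
    return (max(means) - min(means)) ** 2
```

During local training the penalty would be added to the task loss, pushing the model toward equal average predictions across groups.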
4. The blockchain-based fair and trusted federal learning method of claim 1, wherein:
the process of the corresponding node transmitting and verifying the encrypted local model parameters comprises the following steps:
the corresponding node broadcasts the encrypted local model parameters to all nodes through the gossip protocol; the nodes decrypt and verify the encrypted local model parameters, checking that the signature and the components of the local model parameters are valid; if both are valid, the verification passes, and the verified local model parameters are added to the corresponding transaction pools.
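A minimal sketch of claim 4's verification step: check the signature and the parameter components before adding an update to the transaction pool. HMAC over a canonical JSON payload stands in for the unspecified signature scheme, and every name here is an assumption:

```python
import hashlib
import hmac
import json

def sign_update(secret, client_id, params):
    """Sign a local-model update with a per-client shared secret."""
    payload = json.dumps({"client": client_id, "params": params},
                         sort_keys=True).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_and_pool(secret, update, pool):
    """Append `update` to the transaction pool only if its components
    and signature are both valid; return whether it was accepted."""
    params = update["params"]
    if not all(isinstance(p, float) for p in params):  # component check
        return False
    expected = sign_update(secret, update["client"], params)
    if not hmac.compare_digest(expected, update["sig"]):  # signature check
        return False
    pool.append(update)
    return True
```

`hmac.compare_digest` is used instead of `==` so the comparison runs in constant time.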
5. The blockchain-based fair and trusted federal learning method of claim 1, wherein:
the process of the corresponding node for aggregating the local model parameters after the verification is completed comprises the following steps:
when the number of verified local model parameters in the transaction pool of the corresponding node reaches a threshold, the corresponding node aggregates the local model parameters in the transaction pool through a dual-weight-based aggregation mechanism, in which the aggregation weight of each set of local model parameters is adaptively adjusted according to its attribute-fairness measure and its degree of delay.
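The dual-weight aggregation of claim 5 can be sketched as below, where each update's weight multiplies its attribute-fairness score by an exponential decay in its staleness; the exponential form and the field names are assumptions, not the patent's formula:

```python
import math

def dual_weight_aggregate(updates, current_round, decay=0.5):
    """updates: list of {'params': [float], 'fairness': float,
    'round': int}; returns the weighted-average global parameters.
    Fairer and fresher updates receive larger aggregation weights."""
    weights = [u["fairness"] * math.exp(-decay * (current_round - u["round"]))
               for u in updates]
    total = sum(weights)
    dim = len(updates[0]["params"])
    return [sum(w * u["params"][i] for w, u in zip(weights, updates)) / total
            for i in range(dim)]
```

With equal fairness and freshness this reduces to plain FedAvg; a delayed update is smoothly down-weighted rather than discarded.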
6. The blockchain-based fair and trusted federal learning method of claim 1, wherein:
the process of verifying and consensus for the new block by all nodes includes:
the corresponding node broadcasts the new block to all nodes, and the nodes receiving the new block check the information of the new block and reach consensus by voting; wherein the information of the new block includes: the signature of the new block, the data of the local model parameters, the verification status of the local model parameters, and the aggregation status of the global model.
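The voting step of claim 6 might look like the following; the patent only says consensus is reached "by voting", so the two-thirds approval threshold is an assumption:

```python
def block_accepted(votes, total_nodes):
    """votes: list of booleans, one per node that checked the block's
    signature, parameter data, verification results and aggregation
    result; accept when more than two thirds of all nodes approve."""
    approvals = sum(1 for v in votes if v)
    return 3 * approvals > 2 * total_nodes
```

Integer arithmetic (`3 * approvals > 2 * total_nodes`) avoids floating-point comparison at the exact two-thirds boundary.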
CN202211651581.6A 2022-12-22 2022-12-22 Fair and reliable federal learning method based on blockchain Active CN115640305B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211651581.6A CN115640305B (en) 2022-12-22 2022-12-22 Fair and reliable federal learning method based on blockchain

Publications (2)

Publication Number Publication Date
CN115640305A CN115640305A (en) 2023-01-24
CN115640305B true CN115640305B (en) 2023-09-29

Family

ID=84948229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211651581.6A Active CN115640305B (en) 2022-12-22 2022-12-22 Fair and reliable federal learning method based on blockchain

Country Status (1)

Country Link
CN (1) CN115640305B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597498B (en) * 2023-07-07 2023-10-24 暨南大学 Fair face attribute classification method based on blockchain and federal learning

Citations (2)

Publication number Priority date Publication date Assignee Title
CN113467927A (en) * 2021-05-20 2021-10-01 杭州趣链科技有限公司 Block chain based trusted participant federated learning method and device
CN115426353A (en) * 2022-08-29 2022-12-02 广东工业大学 Method for constructing federated learning architecture integrating block chain state fragmentation and credit mechanism

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20220210140A1 (en) * 2020-12-30 2022-06-30 Atb Financial Systems and methods for federated learning on blockchain


Non-Patent Citations (1)

Title
Wang Chao. Research on the security mechanism of federated learning based on blockchain. Wanfang Data knowledge service platform, 2022, pp. 8-52. *


Similar Documents

Publication Publication Date Title
Baza et al. B-ride: Ride sharing with privacy-preservation, trust and fair payment atop public blockchain
Malik et al. Trustchain: Trust management in blockchain and iot supported supply chains
Wang et al. A blockchain based privacy-preserving incentive mechanism in crowdsensing applications
CN112434280B (en) Federal learning defense method based on blockchain
CN106682825A (en) System and method for evaluating credit of Social Internet of Things based on block chain
Pasdar et al. Blockchain oracle design patterns
Ye et al. A trust-centric privacy-preserving blockchain for dynamic spectrum management in IoT networks
CN115640305B (en) Fair and reliable federal learning method based on blockchain
Bruschi et al. Tunneling trust into the blockchain: A merkle based proof system for structured documents
Zhang et al. A hybrid trust evaluation framework for e-commerce in online social network: a factor enrichment perspective
Chen et al. A blockchain-based creditable and distributed incentive mechanism for participant mobile crowdsensing in edge computing
Ali et al. Incentive-driven federated learning and associated security challenges: A systematic review
CN113452681B (en) Internet of vehicles crowd sensing reputation management system and method based on block chain
CN101242410B (en) Grid subjective trust processing method based on simple object access protocol
US20230274183A1 (en) Processing of machine learning modeling data to improve accuracy of categorization
Lax et al. CellTrust: a reputation model for C2C commerce
Zhang et al. A fuzzy collusive attack detection mechanism for reputation aggregation in mobile social networks: A trust relationship based perspective
Li et al. An incentive mechanism for nondeterministic vehicular crowdsensing with blockchain
Zhang et al. Integrating blockchain and deep learning into extremely resource-constrained IoT: an energy-saving zero-knowledge PoL approach
Ali et al. A systematic review of federated learning incentive mechanisms and associated security challenges
Wu et al. A trusted paradigm of data management for blockchain-enabled Internet of Vehicles in smart cities
Dong et al. DAON: A decentralized autonomous oracle network to provide secure data for smart contracts
Zhao et al. Safe and Efficient Delegated Proof of Stake Consensus Mechanism Based on Dynamic Credit in Electronic Transaction
Liang et al. DTC-MDD: A spatiotemporal data acquisition technology for privacy-preserving in MCS
Jung et al. Efficient, verifiable and privacy-preserving combinatorial auction design

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant