CN115795518A - Block chain-based federal learning privacy protection method - Google Patents
- Publication number: CN115795518A
- Application number: CN202310052917.5A
- Authority: CN (China)
- Prior art keywords: task, local, committee, model, participant
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention relates to a blockchain-based federated learning privacy protection method, which comprises the following steps: a task publisher generates a public-private key pair, initializes a global model, and publishes the signed parameters of the federated learning task to the blockchain; each participant updates the global model from the blockchain, performs local training on it with a local data set to obtain a local model, encrypts the local model with the public key, and sends the encrypted local model to a committee; the committee performs quality detection on the local models, aggregates the local models that pass detection into a global model, and sends it to the task publisher; the task publisher updates the global model and detects whether it meets the convergence requirement; if not, the above steps are repeated until the global model converges, whereupon a smart contract is automatically triggered to distribute rewards to the committee and each participant. The invention guarantees the confidentiality of users' local data sets, users' motivation to participate in federated learning tasks, and the fairness of reward distribution.
Description
Technical Field
The invention relates to the technical field of blockchain privacy protection, and in particular to a blockchain-based federated learning privacy protection method.
Background
With the development of artificial intelligence and big data, machine learning is widely applied. However, machine learning requires large quantities of high-quality data sets as training data, which conflicts with the reality that organizations are unwilling to share their data; federated learning has therefore attracted wide attention as an effective solution to the data-island problem. Federated learning mainly comprises two steps, distributed training and aggregation, and the frameworks responsible for the aggregation step fall roughly into two types. In the first, aggregation is handled by a centralized aggregation server, which suffers from a single point of failure and may leak the privacy of the data owners. In the second, the distributed nature of the blockchain fits federated learning naturally; however, research shows that the data characteristics of a user's local data set can be deduced from the local model parameters the user uploads in each round, leaking the user's privacy. How to realize privacy-preserving federated learning aggregation in a public, transparent blockchain environment has therefore become a challenge.
To solve the problem of user privacy leakage in federated learning, researchers at home and abroad have designed a series of blockchain-based privacy protection methods, which are mainly classified by encryption mode into those based on Differential Privacy (DP), Homomorphic Encryption (HE), and secure Multi-Party Computation (MPC). Differential-privacy-based methods protect privacy mainly by adding noise to the original data or to the model parameters; a data set satisfying differential privacy can resist any analysis of the private data, and differential privacy provides a statistical privacy guarantee for individual records, so the data cannot be recovered and the data owner's privacy is protected. However, the added noise affects the final model convergence and therefore reduces model accuracy. Multi-party computation and homomorphic encryption disclose only the computation results to the participants and the coordinator, and disclose no additional information beyond those results during the process. In fact, the two approaches are similar and differ only in details: multi-party computation, which protects the data exchanged between participants, retains the original accuracy and offers a high security guarantee, but incurs a large amount of extra communication and computation cost; homomorphic encryption protects the data exchanged between the participants and the coordinator, thereby providing security against a semi-trusted centralized coordinator, and its computation cost is also lower than that of multi-party computation.
In blockchain-based frameworks for federated learning aggregation, because the blockchain serves as a distributed public ledger, a user only needs to complete registration to obtain a legal identity and participate in a series of on-chain activities; a registered system user may therefore act maliciously and, for example, deliberately disrupt model convergence. In this case it is very important to ensure that the global model can still converge. In addition, local training itself consumes the computation and communication resources of the device; how to improve users' motivation to train, and thereby keep the whole system active, is a practical problem that must be considered.
Disclosure of Invention
The invention aims to guarantee the confidentiality of users' local data sets, users' motivation to participate in federated learning tasks, and the fairness of reward distribution, and provides a blockchain-based federated learning privacy protection method.
In order to achieve the above object, the embodiments of the present invention provide the following technical solutions:
A blockchain-based federated learning privacy protection method comprises the following steps:
step 1, a task publisher generates a homomorphic public-private key pair for the Paillier homomorphic encryption algorithm, initializes a global model for federated learning, and publishes the signed parameters of the federated learning task to the blockchain;
step 2, after updating the global model from the blockchain, each intended participant performs local training on the global model with a local data set to obtain a local model, encrypts the local model with the homomorphic public key, and sends it to a committee;
step 3, the committee performs gradient quality detection on the received local models of all participants, aggregates the local models that pass detection, and sends the aggregated global model to the task publisher;
step 4, the task publisher updates the global model and detects whether it meets the convergence requirement; if not, steps 2 to 3 are repeated until the global model meets the convergence requirement; if so, the next step is executed;
step 5, the federated learning task ends, and a smart contract is automatically triggered to distribute rewards to the committee and each participant.
The specific steps of step 1 comprise:
the task publisher selects two large primes p and q and randomly selects a parameter g, g being an element of the multiplicative group Z*_{n^2}, where Z*_{n^2} denotes the multiplicative group of integers modulo n^2 and n = pq; computes λ = lcm(p − 1, q − 1), where lcm denotes the least common multiple; and generates a homomorphic public key pk = (n, g) and a homomorphic private key sk = λ based on the Paillier homomorphic encryption algorithm;
the task publisher sends a request Req = (Inf, S_p) to the blockchain, where S_p is the task publisher's signature on Inf using sk_p, and sk_p is the system private key distributed to the task publisher by the blockchain; Inf comprises the parameters of the federated learning task in the global model, as follows:
Inf = address ‖ m ‖ N ‖ ω^0 ‖ Max_t ‖ money ‖ ρ,
wherein address is the address of the task publisher; m is the number of endorsement nodes; N is the number of intended participants; ω^0 is the initialized global model; Max_t is the maximum number of iterations, t being the iteration index; money is the prepaid amount; ρ is the reward distribution ratio parameter, 0 < ρ < 1; and ‖ is a connector denoting concatenation.
Step 1 further comprises the steps of: the smart contract verifies the signature S_p and checks the task publisher's account balance; if the balance is insufficient, the task publisher's request is rejected; otherwise, the prepaid amount money is frozen from the task publisher's account, and m endorsement nodes are elected according to a DPoS election mechanism to form a committee for the federated learning task.
The specific steps of step 2 comprise:
participant i uses a local data set to perform local training on the global model ω^{t−1} of the previous iteration, obtaining the local model ω_i^t of the current iteration, where t is the iteration index, i denotes the i-th intended participant, and 1 ≤ i ≤ N;
participant i encrypts the local model ω_i^t with the task publisher's homomorphic public key pk to obtain the ciphertext c_1 = Enc_pk(ω_i^t), and encrypts the ciphertext c_1 again with the system public key pk_j distributed by the blockchain to committee member j, computing c = Enc_{pk_j}(c_1), where c is the re-encrypted ciphertext, j denotes the j-th committee member, 1 ≤ j ≤ m, and Enc_{pk_j}(c_1) denotes encrypting the ciphertext c_1 with the system public key pk_j;
participant i computes the signature S_i = Sig_{sk_i}(c), where sk_i is the system private key distributed to participant i by the blockchain and Sig_{sk_i}(c) denotes signing the ciphertext c with the system private key sk_i; participant i then sends the message Model_i = (c, S_i) to the committee.
The specific steps of step 3 comprise:
after receiving the message Model_i sent by participant i, the committee first verifies the signature S_i; if verification passes, it computes the ciphertext c_1 = Dec_{sk_j}(c), where sk_j is the system private key distributed to committee member j by the blockchain and Dec_{sk_j}(c) denotes decrypting the ciphertext c with the system private key sk_j;
the committee computes the parameter mean of the local model of each participant i: it adds a first noise to the ciphertext c_1 to obtain c_1′, and sends c_1′ to the task publisher;
the task publisher decrypts c_1′ with the homomorphic private key sk and computes the local model parameter mean Ave′ containing the first noise; the task publisher re-encrypts Ave′ with the homomorphic public key pk to obtain Enc_pk(Ave′) and sends Enc_pk(Ave′) to the committee;
the committee removes the noise from Enc_pk(Ave′) to obtain Enc_pk(Ave_i), the encrypted parameter mean of participant i's local model; the committee adds a second noise to c_1 and to Enc_pk(Ave_i) respectively, obtaining c_1″ and Enc_pk(Ave_i)″, and sends c_1″ and Enc_pk(Ave_i)″ to the task publisher;
the task publisher decrypts c_1″ and Enc_pk(Ave_i)″ with the homomorphic private key sk to obtain the noised local model parameters and the noised parameter mean, and computes the Pearson correlation coefficient P_i^t between them;
the task publisher sends P_i^t to the committee, and the committee computes the quality parameter Q_i^t of the t-th iteration of participant i from P_i^t;
if Q_i^t is 0, the gradient quality of participant i's local model fails detection, and participant i's local model for the t-th iteration is discarded and does not participate in the global model aggregation;
the committee aggregates the local models that pass gradient quality detection by multiplying their ciphertexts: c_agg = ∏ Enc_pk(ω_i^t), the product being taken over the local models that pass detection,
wherein c_agg is the aggregated (encrypted) global model, and N1 is the number of participants i whose local models pass gradient quality detection.
The specific steps of step 4 comprise:
after receiving the aggregated global model c_agg, the task publisher decrypts it with the homomorphic private key sk to obtain the sum of the passing local models; it computes ω^t = ω^{t−1} − η · (Σ_i ω_i^t)/N1, where η is the learning rate, obtaining the global model ω^t of the t-th iteration;
the task publisher detects whether the global model ω^t meets the convergence requirement; if not, it updates the global model to ω^t, updates the iteration index to t + 1, and repeats steps 2 to 3 until the global model meets the convergence requirement or until t = Max_t; if the convergence requirement is met, it broadcasts the end of the federated learning task and executes step 5.
The specific steps of step 5 comprise: after the federated learning task is completed, the smart contract is automatically triggered to execute reward distribution, and the reward obtained by participant i is:
rew_i = ρ · money · (Σ_{t=1}^{T} Q_i^t) / (Σ_{k=1}^{N} Σ_{t=1}^{T} Q_k^t),
wherein rew_i is the reward obtained by participant i, Q_i^t is the quality parameter of participant i in the t-th iteration, and T is the total number of iterations;
the reward obtained by the committee members is:
rew = (1 − ρ) · money / m,
wherein rew is the reward obtained by each member of the committee.
Compared with the prior art, the invention has the following beneficial effects:
(1) System flexibility: in this scheme the three roles, task publisher, committee member, and participant, are all ordinary system users, i.e., a system user can select a different role according to its own needs. A user who wants to obtain a global model can submit an application to become a task publisher, i.e., obtain a converged global model trained on the local data sets of other system users in exchange for payment; a user who wants to monetize the value of its local data set can apply to become a participant in a federated learning task; a user who wants remuneration without using a local data set can apply to become a committee member and be paid for the computation, communication, and other work of its local device. The same user may take only one role within a given federated learning task, but may take different roles in different federated learning tasks.
(2) Robustness: robustness here covers two aspects. On one hand, participants are unstable: in practical applications a user may fail to submit the parameters of a federated learning task on time because local training did not finish within the specified time, or because of a device fault, so this scheme allows participants to drop offline and supports a user rejoining the related activities afterwards; because global model aggregation in this scheme uses homomorphic encryption, the aggregation is essentially a summation over plaintexts, so even if some users drop offline the aggregation operation is not affected. On the other hand, system users are uncertain: users are assumed honest-but-curious, but a user may also turn malicious, for example after being compromised; for this case the system introduces ciphertext-based quality detection, i.e., computing the Pearson correlation coefficient, to guarantee the convergence of the global model.
(3) Resistance to poisoning attacks: in the framework of this scheme, the users who become participants are assigned to federated learning tasks by random selection, and quality detection is performed on the participants' local model parameters before the global model is aggregated; user parameters that fail quality detection are discarded. This effectively resists the adverse effect of poisoning attacks on the convergence of the global model.
(4) Correctness: the Paillier homomorphic encryption algorithm has the additive homomorphic property, i.e., Enc_pk(m_1) · Enc_pk(m_2) = Enc_pk(m_1 + m_2). This scheme uses the Paillier algorithm to encrypt the participants' local model parameters, so when the committee performs global model aggregation directly on the local model parameter ciphertexts, the result is consistent with aggregation performed on the plaintexts.
(5) Stable convergence: this scheme performs quality detection by computing the Pearson correlation coefficient; whether a participant's local data set has been poisoned or is simply of low quality, it will fail quality detection, and that data is discarded and excluded from the subsequent global model aggregation, so stable convergence of the global model in the system is guaranteed.
(6) Confidentiality: this scheme protects the privacy of participants' local data sets with two encryption techniques. If the gradient plaintext uploaded by a participant in each round could be obtained by another user, the data characteristics of the participant's local data set could be deduced from it, leaking the privacy of the local data set. Here the participants' local model parameters are first encrypted with the Paillier homomorphic encryption technique so that the committee learns nothing about them, and the homomorphically encrypted ciphertext is then encrypted again with an elliptic-curve public key encryption technique so that the task publisher and other attackers learn nothing about the participants' local models. This guarantees the confidentiality of the participants' local models and makes the participants' local data sets usable but invisible.
(7) Fairness: the local data sets of the participants differ greatly in quantity and quality, so the accuracy of each participant's local model naturally differs, and accordingly the reward amount obtained by each participant differs.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a schematic diagram of a system framework of the present invention;
FIG. 2 is a flow chart of the method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Also, in the description of the present invention, the terms "first", "second", and the like are used for distinguishing between descriptions and not necessarily for describing a relative importance or implying any actual relationship or order between such entities or operations. In addition, the terms "connected", "connecting", and the like may be used to directly connect the elements or indirectly connect the elements via other elements.
Embodiment:
The invention is realized by the following technical scheme. As shown in FIG. 1, the blockchain-based federated learning privacy protection method involves three parties: one task publisher, one committee, and several intended participants. The task publisher and the participants represent users with different needs, and the task publisher ultimately obtains a global model (i.e., the federated learning model) that meets the convergence requirement; the committee consists of a plurality of high-reputation endorsement nodes elected on the blockchain; each participant holds a local data set and at least one device with certain computation and communication capabilities.
Referring to fig. 2, the method includes the following steps:
Step 1, the task publisher generates a homomorphic public-private key pair, initializes the global model, and publishes the signed federated learning task parameters to the blockchain.
This step is the task publishing stage. The task publisher selects two large primes p and q; the security of the scheme rests on the hardness of factoring large numbers, so the primes cannot be recovered by computation, whereas if small primes were chosen an attacker could break the scheme exhaustively. The task publisher randomly selects a parameter g, g being an element of the multiplicative group Z*_{n^2} used by the Paillier homomorphic encryption algorithm, where n = pq; computes λ = lcm(p − 1, q − 1), where lcm denotes the least common multiple; and generates the homomorphic public key pk = (n, g) and the homomorphic private key sk = λ based on the Paillier homomorphic encryption algorithm.
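The key generation of this stage can be sketched concretely. The following minimal Paillier sketch uses a simplified variant that fixes g = n + 1 and toy primes; it is illustrative only and not a production parameter choice:

```python
import math
import random

def keygen(p, q):
    # Paillier key generation (simplified variant fixing g = n + 1)
    n = p * q
    lam = math.lcm(p - 1, q - 1)       # λ = lcm(p−1, q−1)
    mu = pow(lam, -1, n)               # μ = λ⁻¹ mod n (valid when g = n + 1)
    return (n, n + 1), (lam, mu)       # pk = (n, g), sk = (λ, μ)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(2, n)         # random r coprime with n
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n     # L(x) = (x − 1) / n
    return (L * mu) % n
```

With toy primes such as p = 1009, q = 1013, decrypt(pk, sk, encrypt(pk, m)) recovers any m < n; a real deployment would use primes of at least 1024 bits.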
The task publisher sends a request Req = (Inf, S_p) to the blockchain, where S_p is the task publisher's signature on Inf using sk_p. When the task publisher (like any other user) registers, the blockchain distributes to it a system public-private key pair: sk_p is the system private key distributed to the task publisher, and the corresponding system public key can be regarded as the task publisher's identity; the system private key sk_p can be used for encryption, signing, and so on. Inf comprises the parameters of the federated learning task in the global model, as follows:
Inf = address ‖ m ‖ N ‖ ω^0 ‖ Max_t ‖ money ‖ ρ,
wherein address is the address of the task publisher; m is the number of endorsement nodes, i.e., the number of members forming the committee; N is the number of intended participants; ω^0 is the initialized global model; Max_t is the maximum number of iterations, t being the iteration index; money is the prepaid amount; ρ is the reward distribution ratio parameter, 0 < ρ < 1; and ‖ is a connector denoting concatenation.
The smart contract verifies the signature S_p; if verification fails, the task publisher's request Req is directly discarded. If verification passes, the contract checks the task publisher's account balance; if the balance is insufficient, the request is rejected, otherwise the prepaid amount money is frozen from the task publisher's account, and m endorsement nodes are elected according to a DPoS election mechanism to form a committee for this task publisher's federated learning task.
For the same federated learning task, the pairwise intersections of the task publisher, the participants, and the endorsement nodes must be empty sets; that is, each user plays at most one role.
Step 2, after updating the global model from the blockchain, each intended participant performs local training on the global model with a local data set to obtain a local model, encrypts the local model with the homomorphic public key, and sends the encrypted local model to the committee.
This step is the local training stage. Participant i uses its local data set D_i to perform local training on the global model ω^{t−1} of the previous iteration, obtaining the local model ω_i^t of the current iteration, where t is the iteration index, i denotes the i-th participant, 1 ≤ i ≤ N, and D_i is the local data set of the i-th participant. It should be noted that the global model obtained from the previous iteration is an aggregated model; in the first iteration it is the initialized global model ω^0.
Participant i encrypts the local model ω_i^t with the task publisher's homomorphic public key pk to obtain the ciphertext c_1 = Enc_pk(ω_i^t), and then encrypts c_1 again with the system public key pk_j distributed by the blockchain to committee member j, computing c = Enc_{pk_j}(c_1); c denotes the ciphertext obtained through the Paillier homomorphic encryption algorithm followed by the elliptic-curve public key encryption algorithm, j denotes the j-th committee member, and 1 ≤ j ≤ m. Enc_a(b) denotes encrypting content b with public key a, so Enc_{pk_j}(c_1) denotes encrypting the ciphertext c_1 with the system public key pk_j. It should be noted that the system public keys distributed by the blockchain to users are generated by an elliptic-curve public key encryption algorithm.
Participant i computes its signature S_i = Sig_{sk_i}(c), where sk_i is the system private key distributed to participant i by the blockchain; Sig_a(b) denotes signing content b with private key a, so Sig_{sk_i}(c) denotes signing the ciphertext c with the system private key sk_i. Participant i then sends the message Model_i = (c, S_i) to the committee.
Step 3, the committee performs gradient quality detection on the received local models of all participants, aggregates the local models that pass detection, and sends the aggregated global model to the task publisher.
After receiving the message Model_i sent by participant i, the committee first verifies the signature S_i; if verification fails, the Model_i sent by participant i is discarded. If verification passes, the committee computes the ciphertext c_1 = Dec_{sk_j}(c), where sk_j is the system private key distributed to committee member j by the blockchain; Dec_a(b) denotes decrypting content b with private key a, so Dec_{sk_j}(c) denotes decrypting the ciphertext c with the system private key sk_j.
The committee computes the parameter mean of each participant's local model: specifically, it adds a first noise to the ciphertext c_1 to obtain c_1′ and sends c_1′ to the task publisher.
The task publisher decrypts c_1′ with the homomorphic private key sk and computes the local model parameter mean Ave′ containing the first noise; it then re-encrypts Ave′ with the homomorphic public key pk to obtain Enc_pk(Ave′) and sends Enc_pk(Ave′) to the committee.
The committee removes the noise from Enc_pk(Ave′) to obtain Enc_pk(Ave_i), the encrypted parameter mean of participant i's local model; it then adds a second noise to c_1 and to Enc_pk(Ave_i) respectively, obtaining c_1″ and Enc_pk(Ave_i)″, and sends c_1″ and Enc_pk(Ave_i)″ to the task publisher.
It should be noted that, in order for the committee to later compute the quality parameter of the t-th iteration, it needs the Pearson correlation coefficient, and computing the Pearson correlation coefficient requires the plaintext of the local gradient model parameters (i.e., the plaintext of the local model parameters), which only the task publisher can produce, because only the task publisher holds the homomorphic private key sk needed for decryption. If the ciphertext were sent to the task publisher directly, however, the task publisher would decrypt the original plaintext and the participants' privacy would be leaked; the committee therefore adds noise to the ciphertext before sending it to the task publisher, and this noise does not affect the task publisher's computation of the Pearson correlation coefficient.
The task publisher decrypts c_1″ and Enc_pk(Ave_i)″ with the homomorphic private key sk to obtain the noised local model parameters and the noised parameter mean, and computes the Pearson correlation coefficient P_i^t between them, i.e., their covariance divided by the product of their standard deviations.
The task publisher sends P_i^t to the committee, and the committee computes the quality parameter Q_i^t of the t-th iteration of participant i from P_i^t.
If Q_i^t is 0, the gradient quality of participant i's local model fails detection; participant i's local model for the t-th iteration is discarded and does not participate in the subsequent global model aggregation.
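The quality check of this stage hinges on the standard Pearson correlation coefficient, the covariance of two sequences divided by the product of their standard deviations. A minimal sketch follows; the noise handling and the exact mapping from the coefficient to the quality parameter are not spelled out in the published text and are omitted here:

```python
def pearson(x, y):
    # Pearson correlation coefficient of two equal-length parameter vectors
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

A local model whose parameters move with the mean of all local models yields a coefficient close to +1, while a poisoned or low-quality model yields a low or negative coefficient and can then be assigned quality parameter 0 and discarded.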
The committee aggregates the local models that pass gradient quality detection by multiplying their ciphertexts: c_agg = ∏ Enc_pk(ω_i^t), the product being taken over the local models that pass detection,
wherein c_agg is the aggregated (encrypted) global model, and N1 is the number of participants i whose local models pass gradient quality detection, i.e., the number of participants whose quality parameter Q_i^t is non-zero.
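The ciphertext-side aggregation can be sketched end to end with a simplified Paillier variant (g = n + 1, toy primes; illustrative only): the committee multiplies the passing participants' ciphertexts modulo n², and by the additive homomorphic property the product decrypts to the sum of the local models.

```python
import math
import random

def paillier_keygen(p, q):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    return (n, n + 1), (lam, pow(lam, -1, n))   # pk = (n, g), sk = (λ, μ)

def paillier_enc(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:                  # r must be coprime with n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def paillier_dec(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def aggregate(pk, ciphertexts):
    # Committee-side step: the product of ciphertexts encrypts the plaintext sum
    n2 = pk[0] * pk[0]
    agg = 1
    for c in ciphertexts:
        agg = (agg * c) % n2
    return agg
```

The committee never needs the homomorphic private key for this step, which is what keeps the individual local models hidden from it during aggregation.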
Step 4, the task publisher updates the global model and detects whether it meets the convergence requirement. After receiving the aggregated global model c_agg, the task publisher decrypts it with the homomorphic private key sk to obtain the sum of the passing local models, and computes ω^t = ω^{t−1} − η · (Σ_i ω_i^t)/N1, where η is the learning rate, obtaining the global model ω^t of the t-th iteration.
The task publisher detects whether the global model ω^t meets the convergence requirement; if not, it updates the global model to ω^t, updates the iteration index to t + 1, and repeats steps 2 to 3 until the global model meets the convergence requirement or until t = Max_t. If the convergence requirement is met, it broadcasts the end of the federated learning task and executes step 5.
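The decrypt-and-update step can be sketched coordinate-wise, assuming a gradient-style rule ω^t = ω^{t−1} − η · (Σ_i ω_i^t)/N1 applied to the decrypted sum of the passing local models (the learning-rate-based update indicated for this stage):

```python
def update_global(w_prev, local_sum, n1, eta):
    # w_prev: previous global model ω^{t-1} as a parameter vector
    # local_sum: decrypted sum Σ_i ω_i^t of the passing local models
    # n1: number of passing participants N1; eta: learning rate η
    return [w - eta * s / n1 for w, s in zip(w_prev, local_sum)]
```

The convergence check then compares ω^t against the task's convergence criterion before deciding whether to start iteration t + 1.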
Step 5, the federated learning task ends, and the smart contract is automatically triggered to distribute rewards to the committee and each participant.
After the federated learning task is completed, the smart contract automatically executes reward distribution, and participant i obtains:
rew_i = ρ · money · (Σ_{t=1}^{T} Q_i^t) / (Σ_{k=1}^{N} Σ_{t=1}^{T} Q_k^t),
wherein rew_i is the reward obtained by participant i, Q_i^t is the quality parameter of participant i in the t-th iteration, and T is the total number of iterations.
The reward obtained by the committee members is:
rew = (1 − ρ) · money / m,
wherein rew is the reward obtained by each member of the committee.
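A minimal sketch of the reward distribution, assuming the participant pool ρ · money is split in proportion to each participant's accumulated quality parameters and the committee pool (1 − ρ) · money is split equally among the m members (the proportional split is an assumption, consistent with the fairness discussion above):

```python
def participant_rewards(quality, money, rho):
    # quality[i][t]: quality parameter Q_i^t of participant i at iteration t
    totals = [sum(q) for q in quality]      # Σ_t Q_i^t per participant
    grand = sum(totals)                     # Σ_k Σ_t Q_k^t over all participants
    return [rho * money * tot / grand for tot in totals]

def committee_reward(money, rho, m):
    # Each of the m committee members receives an equal share of (1 − ρ) · money
    return (1 - rho) * money / m
```

With this split, a participant whose local models are repeatedly discarded (all Q_i^t = 0) earns nothing, while consistently high-quality contributors earn proportionally more.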
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (7)
1. A blockchain-based federated learning privacy protection method, characterized in that the method comprises the following steps:
step 1, a task publisher generates a homomorphic public-private key pair for the Paillier homomorphic encryption algorithm, initializes a global model for federated learning, and publishes the signed parameters of the federated learning task in the global model to the blockchain;
step 2, after updating the global model from the blockchain, each intended participant performs local training on the global model with a local data set to obtain a local model, encrypts the local model with the homomorphic public key, and sends it to a committee;
step 3, the committee performs gradient quality detection on the received local models of all participants, aggregates the local models that pass detection, and sends the aggregated global model to the task publisher;
step 4, the task publisher updates the global model and detects whether it meets the convergence requirement; if not, steps 2 to 3 are repeated until the global model meets the convergence requirement; if so, the next step is executed;
step 5, the federated learning task ends, and a smart contract is automatically triggered to distribute rewards to the committee and each participant.
2. The block chain-based federal learned privacy protection method as claimed in claim 1, wherein: the specific steps of the step 1 comprise:
the task publisher selects two large prime numbers p and q and randomly selects parameters,For one element within the multiplicative group,representing a multiplicative group; calculating out,Lcm represents the least common multiple; generating a homomorphic public key based on a Pailler homomorphic encryption algorithmAnd homomorphic private keys;
Task publishers send requests to blockchains,S p Using sk for task publishers p Signature on Inf, sk p A system private key distributed for the task publisher by the block chain; inf is a parameter related to the federal learning task in the global model, and includes:
wherein address is the address of the task publisher; m is the number of endorsement nodes; N is the number of intending participants; w_0 is the initialized global model; Max_t is the maximum number of iterations and t is the iteration index; money is the prepaid amount; ε is the reward distribution ratio parameter, 0 < ε < 1; ∥ is a connector denoting concatenation.
3. The block chain-based federal learning privacy protection method as claimed in claim 2, wherein step 1 further comprises: the intelligent contract verifies the signature S_p and checks the account balance of the task publisher; if the balance is insufficient, the request of the task publisher is refused; otherwise, the prepaid amount money is frozen from the account of the task publisher, and m endorsement nodes are selected according to a DPoS election mechanism to form a committee for the federal learning task.
4. The block chain-based federal learning privacy protection method as claimed in claim 2, wherein the specific steps of step 2 comprise:
participant i uses its local data set to locally train the global model w_{t−1} of the previous iteration, obtaining the local model (gradient) g_t^i of this iteration, where t is the iteration index, i denotes the i-th intending participant, and 1 ≤ i ≤ N;
participant i encrypts the local model g_t^i with the task publisher's homomorphic public key pk_c, obtaining the ciphertext c′ = Enc_{pk_c}(g_t^i); and encrypts the ciphertext c′ again with the system public key pk_j distributed by the block chain to committee member j, calculating c = Enc_{pk_j}(c′), where c is the re-encrypted ciphertext, j denotes the j-th committee member, 1 ≤ j ≤ m, and Enc_{pk_j}(c′) denotes encrypting the ciphertext c′ with the system public key pk_j; participant i signs the ciphertext and sends the message data Model_i, carrying c and the signature S_i, to the committee;
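Paillier encrypts integers modulo n, so the real-valued parameters of a local model must be fixed-point encoded before the inner encryption of step 2. A minimal sketch of that inner layer (toy primes and an illustrative SCALE; the outer re-encryption under pk_j is omitted):

```python
import math
import random

p, q = 1789, 1861                      # toy primes; real keys are 1024+ bits
n = p * q
n_sq = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(n + 1, lam, n_sq) - 1) // n, -1, n)

SCALE = 10**4                          # fixed-point precision for parameters

def enc(m):
    """Enc(m) = g^m * r^n mod n^2, with g = n + 1."""
    r = random.randrange(1, n)
    return pow(n + 1, m % n, n_sq) * pow(r, n, n_sq) % n_sq

def dec(c):
    return (pow(c, lam, n_sq) - 1) // n * mu % n

def enc_param(x):                      # float -> scaled integer -> ciphertext
    return enc(round(x * SCALE))

def dec_param(c):
    return dec(c) / SCALE

w = 0.4271                             # one local-model parameter
roundtrip = dec_param(enc_param(w))
```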
5. The block chain-based federal learning privacy protection method as claimed in claim 4, wherein the specific steps of step 3 comprise:
after the committee receives the message data Model_i sent by participant i, it first verifies the signature S_i; if the verification passes, it calculates the ciphertext c′_i = Dec_{sk_j}(c), where sk_j is the system private key assigned by the block chain to committee member j, and Dec_{sk_j}(c) denotes decrypting the ciphertext c with the system private key sk_j;
the committee calculates the parameter mean value of the local models of the participants: it combines the ciphertexts c′_i homomorphically and adds a first noise r1, obtaining c_s = Enc_{pk_c}(Σ_i g_t^i + r1), and sends c_s to the task publisher;
the task publisher decrypts c_s with the homomorphic private key sk_c and calculates the local model parameter average value Ave′ = (Σ_i g_t^i + r1) / N, which still carries the first noise; the task publisher encrypts Ave′ again with the homomorphic public key pk_c, obtaining Enc_{pk_c}(Ave′), and sends it to the committee;
the committee removes the first noise from Enc_{pk_c}(Ave′), obtaining the encrypted parameter mean value Enc_{pk_c}(Ave) of the participants' local models; the committee then adds a second noise r2 to Enc_{pk_c}(Ave) and to Enc_{pk_c}(g_t^i) respectively, obtaining Enc_{pk_c}(Ave + r2) and Enc_{pk_c}(g_t^i + r2), and sends both to the task publisher;
the task publisher decrypts Enc_{pk_c}(Ave + r2) and Enc_{pk_c}(g_t^i + r2) with the homomorphic private key sk_c, obtaining Ave + r2 and g_t^i + r2, and calculates the Pearson correlation coefficient: ρ_i = cov(g_t^i + r2, Ave + r2) / (σ(g_t^i + r2) · σ(Ave + r2));
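Pearson's correlation coefficient is unchanged when the same constant shift is applied to both sequences, which is what allows the masking noise to be added before the task publisher computes ρ_i. A quick numerical check of that invariance (plain values, no encryption; all names are illustrative):

```python
import random
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

local = [0.2, -0.1, 0.4, 0.3, -0.2]          # participant i's parameters
mean_model = [0.25, -0.05, 0.35, 0.2, -0.1]  # the committee's mean model
noise = random.uniform(-100.0, 100.0)        # a constant "second noise"
rho = pearson(local, mean_model)
rho_masked = pearson([v + noise for v in local],
                     [v + noise for v in mean_model])
```

Because ρ is invariant to constant shifts of either series, `rho_masked` equals `rho` up to floating-point error, so the publisher learns the correlation without seeing the unmasked values.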
the task publisher sends ρ_i to the committee, and the committee calculates the quality parameter of the t-th iteration of participant i: q_t^i = ρ_i if ρ_i > 0, otherwise q_t^i = 0;
if q_t^i = 0, the local model gradient quality of participant i fails the detection, and the local model of participant i in the t-th iteration is discarded and does not participate in the global model aggregation;
the committee aggregates the local models passing the gradient quality detection: c_t = Π_{i=1}^{N1} Enc_{pk_c}(g_t^i) = Enc_{pk_c}(Σ_{i=1}^{N1} g_t^i),
wherein c_t is the aggregated (encrypted) global model update, and N1 is the number of participants i whose local models passed the gradient quality detection;
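The aggregation in step 3 exploits Paillier's additive homomorphism: the product of ciphertexts decrypts to the sum of plaintexts, so the committee can total the passing local models without decrypting them. A toy sketch (illustrative values; division by N1 is deferred to the publisher after decryption, since Paillier has no native plaintext division):

```python
import math
import random

p, q = 1789, 1861                      # toy primes for illustration only
n = p * q
n_sq = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(n + 1, lam, n_sq) - 1) // n, -1, n)

def enc(m):
    r = random.randrange(1, n)
    return pow(n + 1, m, n_sq) * pow(r, n, n_sq) % n_sq

def dec(c):
    return (pow(c, lam, n_sq) - 1) // n * mu % n

# Each passing participant submits an encrypted (integer-encoded) parameter.
params = [4200, 3900, 4100]            # N1 = 3 models that passed detection
cts = [enc(v) for v in params]

agg = 1
for c in cts:                          # ciphertext product == plaintext sum
    agg = agg * c % n_sq

total = dec(agg)                       # publisher decrypts, then divides by N1
```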
6. The block chain-based federal learning privacy protection method as claimed in claim 5, wherein the specific steps of step 4 comprise:
after receiving the aggregated global model c_t, the task publisher decrypts it with the homomorphic private key sk_c, obtaining Σ_{i=1}^{N1} g_t^i; it calculates w_t = w_{t−1} − η · (1/N1) · Σ_{i=1}^{N1} g_t^i, where η is the learning rate, obtaining the global model w_t of the t-th iteration;
the task publisher detects whether the global model w_t meets the convergence requirement; if not, it updates the global model to w_t and the iteration number to t + 1, and repeatedly executes step 2 to step 3 until the global model meets the convergence requirement or until t = Max_t; if the convergence requirement is met, it broadcasts the end of the federal learning task and executes step 5.
7. The block chain-based federal learning privacy protection method as claimed in claim 6, wherein the specific steps of step 5 comprise: after the federal learning task is completed, the intelligent contract is automatically triggered to execute the reward distribution, and the reward acquired by participant i is: rew_i = ε · money · (Σ_{t=1}^{T} q_t^i) / (Σ_{k=1}^{N} Σ_{t=1}^{T} q_t^k)
wherein rew_i is the reward acquired by participant i; q_t^i is the quality parameter of participant i in the t-th iteration, and T is the total number of iterations;
the reward acquired by each committee member is: rew = (1 − ε) · money / m
wherein rew is the reward acquired by each member of the committee.
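One natural reading of the split: participants share ε·money in proportion to their accumulated quality parameters, and committee members split (1 − ε)·money equally, so the payouts exhaust the prepaid amount. A sketch under that assumption (function and variable names are illustrative, not from the patent):

```python
def distribute_rewards(money, eps, quality, m_committee):
    """quality[i][t] = quality parameter of participant i at iteration t;
    participants share eps*money proportionally to total quality, and each
    of the m committee members receives an equal slice of the remainder."""
    totals = [sum(q_i) for q_i in quality]
    grand = sum(totals)
    rew_participants = [eps * money * q / grand for q in totals]
    rew_member = (1 - eps) * money / m_committee
    return rew_participants, rew_member

rew_p, rew_c = distribute_rewards(
    money=1000.0, eps=0.6,
    quality=[[0.9, 0.8], [0.5, 0.6], [0.0, 0.2]],  # 3 participants, T = 2
    m_committee=4,
)
```

Participants with higher accumulated quality earn strictly more, which matches the stated goal of rewarding honest, high-quality contributions.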
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310052917.5A CN115795518B (en) | 2023-02-03 | 2023-02-03 | Block chain-based federal learning privacy protection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115795518A true CN115795518A (en) | 2023-03-14 |
CN115795518B CN115795518B (en) | 2023-04-18 |
Family
ID=85429608
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200193292A1 (en) * | 2018-12-04 | 2020-06-18 | Jinan University | Auditable privacy protection deep learning platform construction method based on block chain incentive mechanism |
CN112434280A (en) * | 2020-12-17 | 2021-03-02 | 浙江工业大学 | Block chain-based federal learning defense method |
US20210314140A1 (en) * | 2020-04-02 | 2021-10-07 | Epidaurus Health, Inc. | Methods and systems for a synchronized distributed data structure for federated machine learning |
CN113657608A (en) * | 2021-08-05 | 2021-11-16 | 浙江大学 | Excitation-driven block chain federal learning method |
CN114491616A (en) * | 2021-12-08 | 2022-05-13 | 杭州趣链科技有限公司 | Block chain and homomorphic encryption-based federated learning method and application |
CN114897190A (en) * | 2022-05-18 | 2022-08-12 | 中国农业银行股份有限公司 | Method, device, medium and equipment for constructing federated learning framework |
CN115037477A (en) * | 2022-05-30 | 2022-09-09 | 南通大学 | Block chain-based federated learning privacy protection method |
CN115292413A (en) * | 2022-08-09 | 2022-11-04 | 湘潭大学 | Crowd sensing excitation method based on block chain and federal learning |
CN115510494A (en) * | 2022-10-13 | 2022-12-23 | 贵州大学 | Multi-party safety data sharing method based on block chain and federal learning |
CN115549888A (en) * | 2022-09-29 | 2022-12-30 | 南京邮电大学 | Block chain and homomorphic encryption-based federated learning privacy protection method |
Non-Patent Citations (3)
Title |
---|
"A Blockchain-Based Decentralized Federated Learning Framework with Committee Consensus" * |
ZHOU Wei et al.: "Blockchain-based privacy-preserving decentralized federated learning model" *
XIONG Ling et al.: "Conditional privacy-preserving message authentication scheme based on blockchain in the Internet of Vehicles environment" *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116016610A (en) * | 2023-03-21 | 2023-04-25 | 杭州海康威视数字技术股份有限公司 | Block chain-based Internet of vehicles data secure sharing method, device and equipment |
CN116016610B (en) * | 2023-03-21 | 2024-01-09 | 杭州海康威视数字技术股份有限公司 | Block chain-based Internet of vehicles data secure sharing method, device and equipment |
CN117473559A (en) * | 2023-12-27 | 2024-01-30 | 烟台大学 | Two-party privacy protection method and system based on federal learning and edge calculation |
CN117473559B (en) * | 2023-12-27 | 2024-05-03 | 烟台大学 | Two-party privacy protection method and system based on federal learning and edge calculation |
Also Published As
Publication number | Publication date |
---|---|
CN115795518B (en) | 2023-04-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||