CN116187471A - Identity anonymity and accountability privacy protection federal learning method based on blockchain - Google Patents


Info

Publication number
CN116187471A
CN116187471A (application CN202310078935.0A)
Authority
CN
China
Prior art keywords
client
local
aggregator
malicious
update
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310078935.0A
Other languages
Chinese (zh)
Inventor
高莹
陈晓峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202310078935.0A priority Critical patent/CN116187471A/en
Publication of CN116187471A publication Critical patent/CN116187471A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • G06F21/6254Protecting personal data, e.g. for financial or medical purposes by anonymising data, e.g. decorrelating personal data from the owner's identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/64Protecting data integrity, e.g. using checksums, certificates or signatures
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/50Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bioethics (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Storage Device Security (AREA)

Abstract

The application discloses a blockchain-based identity anonymity and accountability privacy protection federal learning method, comprising the processes of generating public-private key pairs, paying deposits, signing updates, broadcasting and verification, reconstructing the tracing key, global model aggregation, and decrypting and holding malicious identities accountable. In each client cluster, identity anonymity of the clients is achieved using a threshold-based accountable ring signature, and robust aggregation is performed by an aggregator according to cosine similarity and the Multi-Krum algorithm. Global aggregation is then achieved by selecting a leader from the aggregators through a verifiable random function. After a malicious update is determined, the tracer reconstructs the tracing key using Pedersen verifiable secret sharing, thereby de-anonymizing the malicious client. An adaptive reputation incentive mechanism is designed to hold malicious clients and aggregators accountable and to reward the other honest nodes.

Description

Identity anonymity and accountability privacy protection federal learning method based on blockchain
Technical Field
The application relates to the technical field of information security, in particular to a blockchain-based identity anonymity and accountability privacy protection federal learning method.
Background
Machine learning is a core technology of artificial intelligence and data science that extracts valuable information or knowledge by training models on centralized data, helping users make better decisions or predictions. However, large-scale data collection is not only inefficient but also exposes personal privacy to threats and security issues: an attacker may infer sensitive information from the shared data. On the other hand, the data-islanding problem prevents multiple users from collaborating efficiently on data, making it difficult to exploit the data's potential value. Data heterogeneity makes the client training datasets non-independently and identically distributed (non-IID), so that the optimal solution of the global model cannot fit the datasets of all clients at the same time.
Federal learning (Federated Learning) is a new branch of machine learning that enables efficient joint modeling and model training among multiple users without moving local private data, thereby protecting local data privacy. Specifically, each client first downloads an initialized global model from the server and trains the model using its local dataset. Only the model parameters are then sent to the server for aggregation, and the aggregated global model parameters are returned to each client. At present, federal learning has been widely applied in fields such as keyboard prediction, signal recognition, and security detection. Although federal learning can solve the privacy problem of local data to some extent, the shared model parameters may still leak private information. Research shows that an attacker can infer the original data from model parameters or launch a poisoning attack to degrade model performance. Providing privacy protection for federal learning without reducing system robustness is therefore an important research issue. Currently, privacy-preserving federal learning largely falls into two classes of methods: cryptography and perturbation.
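The download-train-aggregate loop described above can be sketched as follows; a toy one-parameter linear model stands in for a real network, and all names and values are illustrative, not part of the application:

```python
def local_train(global_model, dataset, lr=0.1):
    """One round of local training: a single gradient step on squared error
    for a 1-D linear model y = w * x (illustrative stand-in for a real model)."""
    w = global_model
    grad = sum(2 * (w * x - y) * x for x, y in dataset) / len(dataset)
    return w - lr * grad


def aggregate(local_models):
    """Server-side federated averaging: plain average of client parameters."""
    return sum(local_models) / len(local_models)


# Three clients whose private datasets all lie on the true line y = 2x.
datasets = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(0.5, 1.0), (1.5, 3.0)],
    [(1.0, 2.0), (3.0, 6.0)],
]

w = 0.0  # initial global model downloaded by every client
for _ in range(50):  # training rounds: train locally, upload, aggregate
    w = aggregate([local_train(w, d) for d in datasets])
```

After 50 rounds the averaged global model converges to the shared optimum w = 2 even though no client ever reveals its raw data, only its updated parameter.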
Cryptographic methods encode the data plaintext into ciphertext that can be decrypted only by specific users, keeping sensitive data confidential while processing the ciphertext directly according to a security protocol; examples include homomorphic encryption, secret sharing, and garbled circuits. Homomorphic encryption provides strong privacy protection by allowing parameters to be added or multiplied in the ciphertext state; encoding a batch of gradients into one long integer before homomorphic encryption reduces the computational cost and greatly improves efficiency. ShieldFL is a privacy-preserving federal learning defense strategy based on double-trapdoor homomorphic encryption that resists poisoning of encrypted models, with a robust Byzantine fault-tolerant aggregation mechanism designed for heterogeneous data scenarios. Bonawitz et al. first applied secret sharing, key agreement, and signature schemes to the privacy protection of model parameters in machine learning. Meanwhile, to resist the instability caused by users dropping out at random, a privacy-preserving federal learning scheme (VerifyNet) based on (t, n)-threshold secret sharing was adopted. However, cryptographic methods introduce additional computation and communication overhead for the system.
Perturbation methods blur the real result by adding random noise so that an attacker cannot infer sensitive information from differences in the output; examples include differential privacy, local differential privacy, and global differential privacy. While differential privacy requires no additional computational overhead, the randomized noise reduces data utility and model accuracy, or introduces longer model convergence delays. To balance accuracy and communication overhead, Truex et al. used secure multiparty computation and differential privacy techniques to generate a high-accuracy model resistant to inference attacks, maintaining a predefined trust rate without sacrificing privacy.
Blockchain is a decentralized, tamper-resistant, and traceable distributed ledger technology; it can solve the single-point-of-failure problem of the central federal learning server and has become an effective means of eliminating that failure mode while enhancing privacy protection. In traditional blockchain-based federal learning studies, blockchains are introduced, on the one hand, to build a decentralized training process that enhances reliability and trust between nodes; on the other hand, blockchain incentive mechanisms promote the participation enthusiasm and fairness of the nodes. Independent of a centralized coordination server, the Biscotti scheme stores the aggregated updates and intermediate parameters in the blockchain during each round of training. BlockFL provides rewards proportional to sample size to facilitate joint training between devices, but may be inefficient due to its proof-of-work consensus. TrustFed is a public and trusted cross-device federal learning framework that maintains the reputation of participating devices using blockchain and smart contracts; by tying abnormal device behavior to the incentive mechanism, it prevents malicious attackers from damaging the system and encourages nodes to make positive model contributions. However, data processing on the blockchain still incurs expensive computational overhead.
Disclosure of Invention
The present application provides a blockchain-based identity anonymity and accountability privacy protection federal learning method and system, which realize privacy protection in federal learning based on accountable ring signatures and verifiable secret sharing, without using complex cryptographic techniques to encrypt the updates and without reducing model accuracy.
An embodiment of the first aspect of the present application provides a blockchain-based identity anonymity and accountability privacy protection federal learning method, including the steps of: downloading an initial global model for m clients that have paid a deposit, and training the global model using the local private datasets of the m clients to obtain m local models; encrypting each client's own public key using a random number and the public key of the aggregator to which the client belongs, generating a knowledge signature of the local model using the public key set of the other clients and the client's own private key, and sending a message list formed by the client's reputation value, the local model, and the corresponding knowledge signature to the corresponding aggregator; verifying the validity of each knowledge signature in the message list, detecting the local models in the verified message list to identify malicious updates and honest updates, aggregating the honest updates according to local aggregation weights to obtain n local aggregation models, transmitting the n local aggregation models to a leader, and broadcasting the malicious updates and the corresponding knowledge signatures, where n is the number of client clusters; calculating the update quality of the n local aggregation models, assigning global aggregation weights to the n local aggregation models according to the update quality, and aggregating the n local aggregation models according to the global aggregation weights to obtain an updated global model; reconstructing a tracing key from the private key shares contributed by the aggregators, decrypting the identity of the malicious client corresponding to a malicious update according to the tracing key, and rewarding and punishing the clients and aggregators in the update process.
Optionally, in one embodiment of the present application, before downloading the initial global model for the m clients that have paid the deposit, the method further includes: clustering the m clients according to the local private data similarity to obtain n clusters of clients; selecting a client with the highest reputation value from the clients in each cluster as an aggregator of the cluster to obtain n aggregators; and selecting a global aggregator from the n aggregators by utilizing a consistent hash algorithm and a verifiable random function to obtain a leader.
Optionally, in an embodiment of the present application, the deposit paid by the client is determined by the reputation value of the client, wherein the deposit V_i to be paid by client c_i is:
[deposit formula, rendered as an image in the source]
where d is the fixed unit price of the model, r_i is client c_i's reputation value, and threshold is the lowest reputation value that allows a client to participate in model training.
Optionally, in one embodiment of the present application, before aggregating the honest updates according to the local aggregation weights, the method further includes: calculating the update score of the local model, and marking m-f updates with the lowest update score as honest updates and the other f updates as malicious updates; and taking the update score and the reputation value as evaluation indexes, and calculating the local aggregation weight for m-f honest updates by using an entropy weight method.
Optionally, in an embodiment of the present application, selecting a global aggregator among the n aggregators using a consistent hash algorithm and a verifiable random function to obtain a leader includes: constructing a reputation hash ring and allocating it to the aggregators according to their reputation proportions; feeding an aggregator's private key and a random number into a verifiable random function to compute a hash value and a proof; and mapping the hash value onto the reputation hash ring, the corresponding aggregator becoming the leader.
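The reputation hash ring and VRF-based leader selection above can be sketched as follows. SHA-256 stands in for a real verifiable random function (a real VRF additionally yields a publicly checkable proof), and all names, keys, and reputation values are illustrative assumptions:

```python
import hashlib

RING = 2**32  # size of the reputation hash ring (illustrative)


def build_ring(reputations):
    """Assign each aggregator an arc of the ring proportional to its reputation."""
    total = sum(reputations.values())
    arcs, start = {}, 0
    for name, rep in sorted(reputations.items()):
        width = int(RING * rep / total)
        arcs[name] = (start, start + width)
        start += width
    last = sorted(reputations)[-1]
    arcs[last] = (arcs[last][0], RING)  # last arc absorbs rounding slack
    return arcs


def vrf_stub(secret_key: bytes, rnd: bytes):
    """Stand-in for a real VRF: deterministic hash output plus a 'proof'
    (here just the raw digest; a real VRF proof is publicly verifiable)."""
    h = hashlib.sha256(secret_key + rnd).digest()
    return int.from_bytes(h[:4], "big"), h


def elect_leader(arcs, point):
    """Map the VRF output onto the ring; the arc owner becomes the leader."""
    for name, (lo, hi) in arcs.items():
        if lo <= point < hi:
            return name


reps = {"agg1": 10, "agg2": 30, "agg3": 60}
arcs = build_ring(reps)
point, proof = vrf_stub(b"agg2-secret-key", b"round-7")
leader = elect_leader(arcs, point % RING)
```

Because each arc width is proportional to reputation, a high-reputation aggregator is elected leader more often, while the hash keeps the choice unpredictable in advance.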
Optionally, in an embodiment of the present application, calculating update qualities of the n local aggregation models, and assigning global aggregation weights to the n local aggregation models according to the update qualities includes: and calculating cosine similarity between the local aggregation model of the leader and the local aggregation models sent by other aggregators, and determining global aggregation weights of the local aggregation models according to the cosine similarity.
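The cosine-similarity weighting can be sketched as follows; clipping negative similarities to zero and normalizing the weights to sum to one are illustrative assumptions, not prescriptions of the application:

```python
import math


def cosine(u, v):
    """Cosine similarity between two parameter vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def global_weights(leader_model, local_models):
    """Weight each local aggregation model by its cosine similarity to the
    leader's model; dissimilar (negative) updates get zero weight."""
    sims = [max(0.0, cosine(leader_model, m)) for m in local_models]
    total = sum(sims)
    return [s / total for s in sims]


leader_model = [1.0, 1.0]
local_models = [[1.0, 1.0], [1.0, 0.0], [-1.0, -1.0]]  # last one opposes the leader
weights = global_weights(leader_model, local_models)
```

An update pointing opposite to the leader's model receives weight 0 and is effectively excluded from global aggregation.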
Optionally, in one embodiment of the present application, before reconstructing the tracing key from the private key shares contributed by the aggregators, the method further includes: verifying the validity of the signature and commitment of the malicious update, verifying whether the cosine distance between the malicious update and the mean of the n local aggregation models is smaller than 0, and, when the verification passes, contributing the aggregator's private key share to the tracer.
Optionally, in one embodiment of the present application, rewarding and punishing clients and aggregators in the update process includes: deducting the deposit of the malicious client, reducing its reputation value, and increasing the reputation values of the other clients and giving them a fixed reward; and reducing the reputation values of aggregators that do not contribute their private key shares and of the aggregator that aggregated the malicious client's local model, while increasing the reputation values of aggregators that do contribute their private key shares.
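The reward-and-punishment step can be sketched as follows; the adjustment directions follow the scheme above, while the magnitudes (reputation ±1, a fixed reward d) are illustrative assumptions:

```python
def settle_round(reputations, deposits, malicious, d=1.0):
    """Apply one round of the incentive mechanism to the clients:
    - malicious clients forfeit their deposit and lose reputation,
    - all other clients gain reputation and receive a fixed reward d.
    Magnitudes here are illustrative, not specified by the scheme."""
    payouts = {}
    for cid in reputations:
        if cid in malicious:
            payouts[cid] = -deposits[cid]  # deposit is slashed
            reputations[cid] -= 1          # reputation penalty
        else:
            payouts[cid] = d               # fixed reward for honest clients
            reputations[cid] += 1          # reputation gain
    return payouts


reps = {"c1": 5, "c2": 0, "c3": 3}
deps = {"c1": 1.0, "c2": 2.0, "c3": 1.2}
payouts = settle_round(reps, deps, malicious={"c2"})
```

Since the next round's deposit grows as reputation falls, a punished client pays more to rejoin, which is the economic deterrent the mechanism relies on.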
Embodiments of the second aspect of the present application provide a blockchain-based identity anonymity and accountability privacy protection federal learning system, comprising: m clients, configured to pay deposits to the smart contract, download the initial global model, and train the global model on their local private datasets to obtain m local models; n aggregators, configured to aggregate the m local models of the clients to obtain n local aggregation models, and to globally aggregate the n local aggregation models through a leader among the aggregators to obtain an updated global model; and a tracer, configured to reconstruct a tracing key from the private key shares contributed by the aggregators, decrypt the identity of the malicious client corresponding to a malicious update according to the tracing key, and reward and punish the clients and aggregators in the update process.
Optionally, in one embodiment of the present application, further includes: and the key generation center is used for generating and distributing public and private key pairs for the client and the aggregator, and carrying out system initialization to generate public parameters.
The blockchain-based identity anonymity and accountability privacy protection federal learning of the present application has the following beneficial effects:
1) Combining the accountable ring signature with verifiable secret sharing, a threshold-based accountable ring signature is designed to achieve unlinkability between client identities and model updates, providing privacy protection without using complex cryptographic techniques to encrypt the updates and without reducing model accuracy.
2) Good robustness: the Multi-Krum algorithm and cosine similarity are used to detect the model updates uploaded by clients, assign them weights, and aggregate them, which can effectively improve model accuracy.
3) A novel reputation incentive mechanism is designed to penalize malicious clients by increasing the cost of downloading global models and reducing reputation, while reducing the cost of downloading global models for honest clients without exposing identities. Only the identities of part of malicious clients are disclosed, the identity privacy of honest clients is protected, and the trade-off between anonymity and accountability is realized.
4) A leader selected by a verifiable random function performs decentralized global model aggregation, effectively preventing the single-point-of-failure problem.
5) Based on a blockchain implementation: blockchain technology makes the global model and intermediate parameter information traceable and tamper-proof, and smart contracts automatically execute the adaptive incentive mechanism, ensuring the trustworthiness of the operation flow.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of a blockchain-based identity anonymity and accountability privacy protection federation learning method provided in accordance with embodiments of the present application;
FIG. 2 is a schematic diagram of an execution flow of each principal of a blockchain-based identity anonymity and accountability privacy protection federation learning method provided in accordance with embodiments of the present application;
FIG. 3 is a flow chart of threshold-based accountability ring signatures provided in accordance with an embodiment of the present application;
FIG. 4 is a flow chart of a reputation incentive mechanism provided in accordance with an embodiment of the present application;
FIG. 5 is an example diagram of a blockchain-based identity anonymity and accountability privacy protection federal learning system in accordance with embodiments of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present application and are not to be construed as limiting the present application.
The application provides a blockchain-based identity anonymity and accountability privacy protection federal learning method involving four participating subjects: clients, aggregators, a tracer, and a key generation center. Specifically, all clients are divided into n clusters according to data similarity, so that the data within each cluster is closer to uniformly distributed, improving the accuracy of the jointly constructed group model. In each cluster, identity anonymity of the clients is achieved using an accountable ring signature, and robust aggregation is performed by the aggregator according to cosine similarity and the Multi-Krum algorithm. Global aggregation is then achieved by selecting a leader from the n aggregators through a verifiable random function (Verifiable Random Function, VRF). After a malicious update is determined, the tracer reconstructs the tracing key using Pedersen verifiable secret sharing (Verifiable Secret Sharing, VSS), thereby de-anonymizing the malicious client. Finally, an adaptive reputation incentive mechanism is designed to hold malicious clients and aggregators accountable and reward the other honest nodes.
The detailed description of the four entity roles is as follows:
1) Key generation center: the key generation center is a trusted authority that generates and distributes public-private key pairs for the clients and aggregators and is responsible for initializing the system to generate the public parameters.
2) Client: the clients are the main participants, computing model updates from their local private datasets. Each client wishes to keep its private data confidential and its identity anonymous when sending a model update, and holds a public-private key pair (pk_i, sk_i) provided by the key generation center.
3) Aggregator: an aggregator is a high-reputation client selected from each cluster; aggregators do not participate in model training. Each aggregator holds the public key PK and a private key share SK_i provided by the key generation center.
4) Tracer: the tracer is a high-reputation aggregator selected from among all aggregators and is responsible for revoking the anonymity of malicious clients. Upon receiving no fewer than k private key shares SK_i from the aggregators, the tracer reconstructs the private key (tracing key) SK.
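The tracer's reconstruction of SK from any k shares is, at its core, Lagrange interpolation at x = 0, as in Shamir secret sharing (the commitment checks of Pedersen VSS are omitted here). A minimal sketch with illustrative parameters:

```python
P = 2**61 - 1  # Mersenne prime used as the field modulus (illustrative)


def eval_poly(coeffs, x):
    """Evaluate a polynomial with coefficients [a0, a1, ...] at x, mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc


def reconstruct(shares):
    """Lagrange interpolation at x = 0: recover the secret F(0) = SK
    from any k shares (i, SK_i)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret


SK = 123456789                     # the tracing key, held by no single party
coeffs = [SK, 42, 77]              # degree k-1 = 2, so the threshold is k = 3
shares = [(i, eval_poly(coeffs, i)) for i in range(1, 6)]  # shares for 5 aggregators
recovered = reconstruct(shares[:3])  # any 3 shares suffice
```

Fewer than k shares reveal nothing about SK, which is why at least k aggregators must agree that an update was malicious before anonymity is revoked.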
The symbols used are shown in Table 1.
Table 1. Symbols used (the table content is rendered as images in the source)
The blockchain-based identity anonymity and accountability privacy protection federal learning method of embodiments of the present application is described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a blockchain-based identity anonymity and accountability privacy protection federation learning method according to an embodiment of the present application.
As shown in fig. 1, the blockchain-based identity anonymity and accountability privacy protection federation learning method comprises the following steps:
in step S101, an initial global model is downloaded for m clients that have paid a deposit, and the global model is trained using local private datasets of the m clients to obtain m local models.
Optionally, in one embodiment of the present application, before downloading the initial global model for the m clients that have paid the deposit, the method further includes: clustering m clients according to the local private data similarity to obtain n clusters of clients; selecting a client with the highest reputation value from the clients in each cluster as an aggregator of the cluster to obtain n aggregators; and selecting a global aggregator from the n aggregators by utilizing a consistent hash algorithm and a verifiable random function to obtain a leader.
In the system initialization stage, all clients are clustered according to data similarity, and the key generation center generates public parameters for each cluster. The key generation center generates public-private key pairs for the clients and the aggregators; the aggregators' private key shares are distributed by the key generation center through secret sharing, and all aggregators share the same public key. To ensure fairness, each client needs to pay a deposit V_i to the smart contract before training begins and can then obtain the previous round's global model. The client then uses its local dataset and the global model to compute a local model update.
The deposit paid by a client is determined by its reputation value; the deposit V_i to be paid by client c_i is:
[deposit formula, rendered as an image in the source]
where d is the fixed unit price of the model, r_i is client c_i's reputation value, and threshold is the lowest reputation value that allows a client to participate in model training.
Specifically, the key generation center (Key Generation Center, KGC) generates the public parameters pp := (gk, pp_SoK, crs) for the system, where pp_SoK ← SoKSetup(gk) and crs ← CRSGen(gk). Each client generates a public-private key pair (pk_i, sk_i) through the PKEGen(gk) algorithm for signing and verification. In addition, the KGC generates a public-private key pair (PK, SK) ← PKEGen(gk) using a variant of the ElGamal encryption algorithm, where PK = e(SK, g_2). The KGC distributes SK as the secret value to the n aggregators through the Share(SK, k, n) algorithm, as shown in FIG. 2; the specific operation is as follows:
step 1.KGC random selection
Figure BDA0004066912150000078
And issues a commitment C 0 =Com(SK,t);
Step 2.Kgc optionally two k-1 th order polynomials F,
Figure BDA0004066912150000079
satisfy F (0) =sk, and calculate SK i =F(i)
F(x)=SK+F 1 x+…+F k-1 x k-1 (1)
Step 3.KGC random selection
Figure BDA00040669121500000710
When commitment F i I=1, …, k-1 with G i And (3) representing. At the same time KGC calculates and broadcasts the promise +.>
Figure BDA00040669121500000711
Step 4.Kgc construction polynomial G (x) =t+g 1 x+…+G k-1 x k-1 And let t i G (i), i=1, 2, …, n, then KGC transmit share (SK i ,t i ) To all aggregators. When each aggregator receives its own share, it verifies the following to authenticate the authenticity of the share:
Figure BDA0004066912150000071
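The share distribution and verification steps above follow Pedersen's verifiable secret sharing. A toy sketch over a small prime-order subgroup is given below; the parameters are purely illustrative, and a real deployment uses a large group in which the discrete logarithm of h with respect to g is unknown:

```python
# Toy Pedersen VSS over the order-q subgroup of Z_p* (p = 2q + 1 a safe prime).
p = 2039          # safe prime; illustrative only
q = 1019          # prime subgroup order
g = pow(2, 2, p)  # squares generate the order-q subgroup
h = pow(3, 2, p)  # second generator (in a real system, dlog_g(h) is unknown)


def com(a, b):
    """Pedersen commitment Com(a, b) = g^a * h^b mod p."""
    return pow(g, a % q, p) * pow(h, b % q, p) % p


def poly_eval(coeffs, x):
    """Evaluate a polynomial with coefficients [a0, a1, ...] at x, mod q."""
    return sum(c * pow(x, j) for j, c in enumerate(coeffs)) % q


SK, t, k = 123, 45, 3
F = [SK, 7, 11]  # F(0) = SK, degree k-1 (coefficients illustrative)
G = [t, 5, 9]    # G(0) = t
C = [com(F[j], G[j]) for j in range(k)]  # broadcast commitments C_0..C_{k-1}


def share(i):
    """Dealer's share for aggregator i: (SK_i, t_i) = (F(i), G(i))."""
    return poly_eval(F, i), poly_eval(G, i)


def verify(i, SK_i, t_i):
    """Equation (2): check Com(SK_i, t_i) == prod_j C_j^(i^j) mod p."""
    rhs = 1
    for j in range(k):
        rhs = rhs * pow(C[j], pow(i, j, q), p) % p
    return com(SK_i, t_i) == rhs
```

Any tampered share fails the check, so each aggregator can confirm its share is consistent with the broadcast commitments without learning SK.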
To ensure fairness of training, each client needs to pay a deposit V_i to the smart contract before the j-th iteration of training begins:
[formula (3), rendered as an image in the source]
where d is the set fixed unit price of the model and r_i is client c_i's reputation value (which affects the cost of subsequent payments and the trust placed in the client). The constant threshold (threshold < 0) is the lowest reputation value below which a client is removed from the model training set, i.e., r_i > threshold is required. The reputation value r_i of a newly joined client is set to 0, ensuring that new clients and low-reputation clients must provide a larger deposit; the higher the reputation, the smaller the deposit to be paid.
Client c_i then downloads the previous round's global model w^{j-1} from the blockchain. Starting from its local model w_i^{j-1} of the previous round, the client trains on its local dataset D_i, i ∈ [1, m], with the aim of minimizing the loss function L(w; D_i). The local model is thus updated as:
w_i^j = w_i^{j-1} - γ·∇L(w_i^{j-1}; D_i)    (4)
where γ is the learning rate and ∇L(w_i^{j-1}; D_i) is the derivative of the loss function. When the client's objective function converges, the whole training process ends; otherwise, the client starts the next iteration.
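The local gradient-descent update above can be sketched as follows, using a squared-error loss on a one-parameter linear model; all values are illustrative:

```python
def loss(w, dataset):
    """Squared-error loss L(w; D_i) for an illustrative 1-D linear model y = w*x."""
    return sum((w * x - y) ** 2 for x, y in dataset) / len(dataset)


def grad(w, dataset):
    """Derivative of the loss with respect to w."""
    return sum(2 * (w * x - y) * x for x, y in dataset) / len(dataset)


def local_update(w_prev, dataset, gamma=0.05):
    """One local step: w^j = w^{j-1} - gamma * dL/dw."""
    return w_prev - gamma * grad(w_prev, dataset)


D_i = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # local private samples, roughly y = 2x
w = 0.0
for _ in range(200):  # iterate until (numerical) convergence
    w = local_update(w, D_i)
```

Repeating the step drives w to the least-squares optimum of the local dataset, at which point the client's objective has converged and training stops.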
In step S102, the client's own public key is encrypted using a random number and the public key of the aggregator to which the client belongs; a knowledge signature of the local model is generated using the public key set of the other clients and the client's own private key; and the client's reputation value, the local model, and the corresponding knowledge signature are assembled into a message list and sent to the corresponding aggregator.
In the signature generation stage, each client in the cluster encrypts the own public key by using the public key of the aggregator and the random number, and generates a knowledge signature updated by the model by using the public key set of other clients and the own private key. All clients send signature and model updates to the aggregator.
As shown in fig. 3, the signature generation phase specifically includes the following steps:
step 1, client c i Selecting random numbers
Figure BDA0004066912150000077
Calculation of ciphertext c≡enc (PK, PK) i The method comprises the steps of carrying out a first treatment on the surface of the Rnd) and uses the public key set r= { pk 1 ,pk 2 ,…,pk n The local model update w is obtained by the method that n is not equal to i i Is to be added to the knowledge signature of (a):
σ SoK =SoKSign(pp SoK ,(PK,R,c),(sk i ,Rnd),w i ) (5)
c i the final signature of the local model at the jth round of iteration is sigma i :=(c,σ SoK )。
Step 2. The m clients perform the same signing operation to obtain the final signatures {σ_1, …, σ_m} on their local models. Each client sends its signature σ_i, model update w_i, and reputation value r_i to the aggregator. After a waiting time τ, the aggregator receives an unordered message list L = {(σ_i, w_i, r_i) | i = 1, …, m}.
Step 3. After the aggregator receives the message list L, it uses the SoKVerify(pp_SoK, (PK, R, c), σ_SoK, w_i) algorithm to verify the validity of the m signatures σ_SoK in turn. A valid signature indicates that the model update was issued by a member of the cluster, but the owner of the signature remains unknown. Since the signature σ_i anonymizes the identity address (i.e., the public key pk_i), unlinkability between the local model and the identity address is guaranteed: even after receiving w_i, the aggregator cannot infer the client's identity information.
In step S103, the validity of each knowledge signature in the message list is verified, the local models in the verified message lists are inspected to distinguish malicious updates from honest updates, the honest updates are aggregated according to local aggregation weights to obtain n local aggregation models, the n local aggregation models are transmitted to the leader, and the malicious updates and their corresponding knowledge signatures are broadcast, where n is the number of client clusters.
In the local aggregation stage, after the aggregator receives the message list sent by the clients, it verifies the validity of each signature in turn. The models whose signatures pass verification undergo robustness detection, distinguishing honest updates from malicious ones. The aggregator assigns weights according to the quality of the honest updates and then performs local aggregation, while the malicious updates and their corresponding signatures are broadcast to the blockchain network.
Optionally, in one embodiment of the present application, before aggregating the honest updates according to the local aggregation weights, the method further includes: calculating the update score of the local model, and marking m-f updates with the lowest update score as honest updates and marking the other f updates as malicious updates; and taking the update score and the reputation value as evaluation indexes, and calculating local aggregation weights for m-f honest updates by using an entropy weight method.
To improve the accuracy of the aggregated model and the enthusiasm of the participants, model aggregation weights are set according to the model quality evaluation result and the participants' reputations. Model quality evaluation relies on Multi-Krum to pick out honest updates. Specifically:
Step 1. The aggregator computes an update score s_i for each client c_i: the sum of squared Euclidean distances from c_i's update to its m − f − 2 closest updates, where m is the number of updates in the received message list L and f is the number of tolerated Byzantine clients.

s_i = Σ_{i→k} ||w_i − w_k||²  (6)

For any i ≠ k, i → k denotes that w_k belongs to the m − f − 2 updates closest to w_i.
Step 2. The m − f lowest-scoring updates are selected and marked as honest; the remaining f updates are marked as malicious. These malicious updates, together with their scores s_i, are recorded on the blockchain.
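Steps 1 and 2 above can be sketched as follows. This is a minimal illustration of the Multi-Krum scoring just described; the function names and the NumPy layout are our own assumptions, and a real deployment would operate on flattened model-parameter vectors:

```python
import numpy as np

def multikrum_scores(updates, f):
    """Score each update as the sum of squared Euclidean distances
    to its m - f - 2 closest other updates (equation (6))."""
    m = len(updates)
    W = np.stack(updates)                       # (m, d) matrix of updates
    diffs = W[:, None, :] - W[None, :, :]
    d2 = np.sum(diffs ** 2, axis=-1)            # pairwise squared distances
    scores = []
    for i in range(m):
        others = np.delete(d2[i], i)            # distances to the other m-1 updates
        closest = np.sort(others)[: m - f - 2]  # the m - f - 2 nearest
        scores.append(float(closest.sum()))
    return np.array(scores)

def split_honest_malicious(updates, f):
    """Mark the m - f lowest-scoring updates as honest, the rest as malicious."""
    scores = multikrum_scores(updates, f)
    order = np.argsort(scores)
    m = len(updates)
    honest = sorted(order[: m - f].tolist())
    malicious = sorted(order[m - f:].tolist())
    return scores, honest, malicious
```

With five clustered updates, one far outlier, and f = 1, the outlier receives the largest score and is marked malicious.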
Step 3. To bias the global model towards high-quality updates uploaded by high-reputation clients, the m − f honest updates are weighted using the entropy weight method, with the update score s_i and the reputation value r_i as the two evaluation indexes. The lower the update score, the greater the weight assigned; the lower the reputation value, the smaller the weight assigned. The update scores and reputation values are first normalized:
s̃_i = (s_max − s_i) / (s_max − s_min)

r̃_i = (r_i − r_min) / (r_max − r_min)
Step 4. Compute the proportion p_i of the update scores and the proportion q_i of the reputation values,

p_i = s̃_i / Σ_{j=1}^{m−f} s̃_j,  q_i = r̃_i / Σ_{j=1}^{m−f} r̃_j,

as well as the information entropy of the two evaluation indexes, namely:

E_s = −(1 / ln(m−f)) Σ_{i=1}^{m−f} p_i ln p_i

E_r = −(1 / ln(m−f)) Σ_{i=1}^{m−f} q_i ln q_i

If p_i or q_i equals 0, the corresponding term p_i ln p_i (or q_i ln q_i) is defined to be 0.
Step 5. Compute the weights of the m − f honest updates:

a_i = α p_i + (1 − α) q_i,

where α = (1 − E_s) / ((1 − E_s) + (1 − E_r)).
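The entropy-weight computation of steps 3–5 can be sketched as follows. Because the patent's equations for the normalization, proportions, entropies, and final weights are reproduced here only as images, the formulas below follow the standard entropy weight method and should be read as an assumption, not as the patented formulas:

```python
import numpy as np

def entropy_weights(scores, reputations, eps=1e-12):
    """Entropy-weight aggregation weights for the m - f honest updates:
    lower update score -> larger weight; higher reputation -> larger weight."""
    s = np.asarray(scores, dtype=float)
    r = np.asarray(reputations, dtype=float)
    n = len(s)
    # min-max normalization: score is a negative indicator, reputation a positive one
    s_n = (s.max() - s) / (s.max() - s.min() + eps)
    r_n = (r - r.min()) / (r.max() - r.min() + eps)
    # proportions p_i and q_i of each evaluation index
    p = s_n / (s_n.sum() + eps)
    q = r_n / (r_n.sum() + eps)
    # information entropy of each index (0 * ln 0 is defined as 0)
    def entropy(x):
        x = np.where(x > 0, x, 1.0)  # ln(1) = 0 contributes nothing
        return -np.sum(x * np.log(x)) / np.log(n)
    E_s, E_r = entropy(p), entropy(q)
    # combine the two indexes via their entropy redundancy
    alpha = (1 - E_s) / ((1 - E_s) + (1 - E_r) + eps)
    a = alpha * p + (1 - alpha) * q
    return a / a.sum()
```

An update with both the lowest score and the highest reputation receives the largest weight, and the weights sum to 1.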
The aggregator locally aggregates the m − f honest updates,

W = Σ_{i=1}^{m−f} a_i w_i,

and commits to W as Com(W). The aggregator then broadcasts the following messages to the whole network, where they are stored on the blockchain: 1) the message list L′ = ((σ_1, w_1, s_1), …, (σ_f, w_f, s_f)) of the f malicious updates, together with the malicious client list L_m; 2) the local aggregate update W, the commitment Com(W), and the total update number m.
In step S104, the update quality of the n local aggregation models is calculated, global aggregation weights are allocated to the n local aggregation models according to the update quality, and the n local aggregation models are aggregated according to the global aggregation weights to obtain updated global models.
In the global aggregation stage, global aggregation is performed by a leader, an aggregator selected from all aggregators via a verifiable random function. The leader receives the local updates of all aggregators, computes the quality of these updates, assigns corresponding weights, and performs global aggregation. The leader then commits to the result and stores the commitment and the global model on the blockchain.
Optionally, in one embodiment of the present application, selecting a global aggregator among the n aggregators using a consistent hashing algorithm and a verifiable random function, obtaining the leader includes: constructing a reputation hash ring, and distributing the reputation hash ring to the aggregator according to the reputation specific gravity of the aggregator; transmitting the private key and the random number of the aggregator to a verifiable random function to calculate a hash value and a proof; mapping the hash value to a reputation hash ring, and taking the obtained corresponding aggregator as a leader.
Optionally, in one embodiment of the present application, calculating update qualities of the n local aggregation models, and assigning global aggregation weights to the n local aggregation models according to the update qualities includes: and calculating cosine similarity between the local aggregation model of the leader and the local aggregation models sent by other aggregators, and determining global aggregation weights of the local aggregation models according to the cosine similarity.
To prevent the single point of failure and malicious behavior of centralized aggregation, a decentralized global aggregation is designed. A leader is selected from all aggregators for global aggregation by means of consistent hashing (SHA-256) and a verifiable random function (VRF). Specifically:
Step 1. Construct a reputation hash ring and assign ring segments to the aggregators according to their reputation proportions q_i.
Step 2. The tracer passes its private key SK and a random number Rnd ∈ Z_q* to the VRF to compute the hash value h_SK and a proof π.
Step 3. Map the hash value h_SK onto the reputation hash ring; the corresponding aggregator is selected as the leader.
Step 4. The other n − 1 aggregators verify the authenticity of the leader's identity via the proof π. If verification passes, each aggregator sends its local aggregate update W_i and commitment Com(W_i), i = 1, …, n, to the leader.
Step 5. The leader receives and verifies the commitments Com(W_i), i = 1, …, n. It then computes the cosine similarity cos(W_i, W_L) between its own aggregate update W_L and each aggregate update W_i; the aggregate updates W_i with cos(W_i, W_L) ≤ 0 are sent to the smart contract. For the aggregate updates with cos(W_i, W_L) > 0, the leader computes the weights:

β_i = cos(W_i, W_L) / Σ_{j: cos(W_j, W_L) > 0} cos(W_j, W_L)
Step 6. Finally, the leader aggregates the weighted updates into the global model:

W_G = Σ_{i: cos(W_i, W_L) > 0} β_i W_i
When a new leader is to be selected in the next round, the current round's hash value h_SK is hashed again, h(h_SK), and the above steps are repeated.
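The leader election of steps 1–3, including the next-round re-hash, can be sketched as follows. SHA-256 stands in for the VRF output h_SK; a real implementation would use an actual VRF so that the proof π can be verified, and the ring layout shown here is an illustrative assumption:

```python
import hashlib
from bisect import bisect_left

def build_reputation_ring(reputations):
    """Assign each aggregator an arc of the hash ring proportional to
    its reputation share q_i (cumulative boundaries on [0, 1))."""
    total = sum(reputations.values())
    ring, acc = [], 0.0
    for agg_id, rep in sorted(reputations.items()):
        acc += rep / total
        ring.append((acc, agg_id))
    return ring

def select_leader(ring, round_seed):
    """Map a hash of the round seed onto the ring; the aggregator owning
    the arc the point falls into becomes the leader. SHA-256 is only a
    stand-in for the VRF value h_SK in this sketch."""
    h = int.from_bytes(hashlib.sha256(round_seed).digest(), "big")
    point = h / float(1 << 256)            # uniform point on [0, 1)
    boundaries = [b for b, _ in ring]
    idx = bisect_left(boundaries, point)
    return ring[min(idx, len(ring) - 1)][1]
```

Per the re-hash rule above, the next round's seed is simply the hash of the current one: `next_seed = hashlib.sha256(seed).digest()`. Selection is deterministic for a given seed, so all aggregators agree on the leader.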
The leader of the global aggregation stores the global model W_G and its commitment Com(W_G) together on the blockchain. Other aggregators can verify whether their own local model was used by checking the published commitment against their committed updates Com(W_i) and the weights β_i.
Before the key SK is reconstructed, an adversary cannot predict the output of the consistent hash and therefore cannot strategically mount an attack. The pseudo-randomness of the VRF ensures that aggregators with higher reputation values are selected only with correspondingly higher probability. When the aggregate model converges or the maximum number of iterations is reached, the whole training process ends; otherwise, the next iteration begins.
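Steps 5 and 6 of the global aggregation can be sketched as follows; normalizing the weights over the positive-similarity updates is our assumption, since the patent's weight formula is reproduced only as an image:

```python
import numpy as np

def cosine_weighted_global_aggregate(W_leader, local_aggregates):
    """Leader-side global aggregation: updates with cos(W_i, W_L) <= 0
    are excluded (and reported to the smart contract); the remaining
    updates are weighted by normalized cosine similarity to W_L."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = {i: cos(W, W_leader) for i, W in local_aggregates.items()}
    kept = {i: s for i, s in sims.items() if s > 0}
    rejected = sorted(i for i, s in sims.items() if s <= 0)
    total = sum(kept.values())
    weights = {i: s / total for i, s in kept.items()}
    W_global = sum(weights[i] * local_aggregates[i] for i in kept)
    return W_global, weights, rejected
```

An update pointing opposite to the leader's aggregate is rejected, and the remaining weights sum to 1.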
In step S105, the tracing key is reconstructed from the private key shares contributed by the aggregators, the malicious updates are decrypted with the tracing key to reveal the corresponding malicious clients, and the clients and aggregators involved in the update process are rewarded or punished.
In the tracing and accountability stage, after the other aggregators obtain the malicious updates from the network, they verify whether these updates deviate significantly from the local model aggregation. If verification passes, each aggregator discloses its own private key share to the tracer. After receiving a majority of the shares, the tracer reconstructs the tracing key and decrypts the identity of the sender of each malicious update. The smart contract punishes the malicious clients and aggregators and rewards the other honest clients and aggregators.
Optionally, in one embodiment of the present application, before reconstructing the tracing key from the private key shares contributed by the aggregators, the method further includes: verifying the validity of the signature and commitment of the malicious update, verifying whether the cosine distance between the malicious update and the average of the n local aggregation models is smaller than 0, and contributing the aggregator's private key share to the tracer when verification passes.
The method traces only the identities behind malicious updates; honest updates remain anonymous. After receiving the message list L′, the other aggregators, using the local aggregate update W and the commitment value Com(W), perform the following verifications in turn:
1) Verify the validity of the f signatures via the Verify(PK, w_f, R, σ_f) algorithm;
2) Verifying validity of the aggregate value W with a commitment Com (W);
3) Verify whether the cosine distance between the average of the malicious updates and the aggregate update W is less than 0, i.e., judge whether

cos((1/f) Σ_{j=1}^{f} w_j, W) < 0

holds.
A majority of the aggregators is required to verify that the above inequality holds, to prevent an aggregator that is a hidden adversary from marking honest model updates as malicious. After verification passes, each aggregator discloses its secret share SK_i and sends it to the tracer. The tracer verifies (SK_i, t_i) via formula (2); otherwise the aggregator is required to submit the correct secret share. After the tracer receives no fewer than k secret shares SK_i, the Recon({SK_i}) algorithm reconstructs the private key SK as follows:
SK = Σ_{i∈S} SK_i · Π_{j∈S, j≠i} x_j / (x_j − x_i) mod q,  |S| = k
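The Recon algorithm is standard Shamir secret reconstruction by Lagrange interpolation at x = 0. A sketch over a small prime field follows; the actual group order and share indexing used by the scheme are assumptions:

```python
def reconstruct_secret(shares, prime):
    """Reconstruct SK from k shares (x_i, SK_i) of a (k, n) Shamir
    sharing by Lagrange interpolation at x = 0 over GF(prime)."""
    secret = 0
    xs = [x for x, _ in shares]
    for x_i, y_i in shares:
        num, den = 1, 1
        for x_j in xs:
            if x_j == x_i:
                continue
            num = (num * (-x_j)) % prime         # product of (0 - x_j)
            den = (den * (x_i - x_j)) % prime    # product of (x_i - x_j)
        lam = num * pow(den, -1, prime) % prime  # Lagrange coefficient at 0
        secret = (secret + y_i * lam) % prime
    return secret
```

Any k valid shares yield the same key, so the tracer needs only a threshold of cooperating aggregators, matching the "no fewer than k secret shares" condition above.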
The tracer traces the f malicious updates using the Trace(w_f, R, σ_f, SK) algorithm: 1) run the decryption algorithm Dec(SK, c) to obtain the public key pk_i′, i.e., the identity address; 2) run the Prove(crs, (PK, c, pk_i′), SK) algorithm to obtain a proof ψ that pk_i′ is the owner. The tracer sends the tracing result pk_i′ and ψ to the blockchain, and all aggregators can verify the correctness of the proof ψ via the VerTrace(PK, w_f, R, σ_f, pk_i′, ψ) algorithm.
Optionally, in one embodiment of the present application, rewarding and punishing clients and aggregators in the update process includes: deducting the malicious client's deposit, reducing the malicious client's reputation value, increasing the reputation values of clients other than the malicious client, and giving them a fixed reward; and reducing the reputation values of aggregators that do not contribute their private key shares and of aggregators that aggregated the malicious client's local model, while increasing the reputation values of aggregators that contribute their private key shares and of aggregators that did not aggregate the malicious client's local model.
The training goal of most participants in federal learning is to achieve better model quality at minimum cost. A few clients may be malicious and an aggregator may be a hidden adversary. In this application, the smart contract will automatically perform reputation penalties for both entities, as shown in FIG. 4, as follows:
1) Clients. Punishment of a malicious client includes withholding its deposit V_i and reducing its reputation value; the malicious client's reputation is reset to a lower value determined by the proportion f/m of malicious clients in the cluster and by the malicious client's update score s_i. If r_i′ < threshold, the client is removed from the model training set. Because the invention achieves identity anonymity for honest clients, rewards cannot be tied to individual contributions; therefore the reputation values of all clients other than the malicious ones are increased, and each receives a fixed reward.
2) Aggregators. To penalize the n − k aggregators that did not contribute their SK_i secret shares, the smart contract synchronously reduces their reputation values. Mixing malicious gradients into honest gradients during local model aggregation is a serious protocol violation; based on the deviation of W_i detected by the leader, the smart contract updates the corresponding aggregator's reputation value as:

r_i′ = r_i (1 − |cos(W_i, W_L)|)  (18)
where |cos(W_i, W_L)| measures the aggregator's degree of deviation. In addition, the reputation values of the aggregators that honestly executed the protocol are increased, and the reputation values of the k aggregators that participated in secret sharing are increased by k/n. The reputation values of all participants are updated and maintained by the smart contracts.
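Equation (18) can be implemented directly as printed; note that the source treats |cos(W_i, W_L)| as the aggregator's degree of deviation, so this sketch follows the formula verbatim:

```python
import numpy as np

def penalized_reputation(r_i, W_i, W_L):
    """Reputation update of equation (18): r_i' = r_i * (1 - |cos(W_i, W_L)|)."""
    c = float(np.dot(W_i, W_L) / (np.linalg.norm(W_i) * np.linalg.norm(W_L)))
    return r_i * (1.0 - abs(c))
```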
The blockchain-based identity anonymity and accountability privacy protection federal learning provided by the embodiments of the present application not only achieves client identity anonymity but also effectively encourages client contribution. The deposit paid in the initial stage decreases as the reputation value grows, and honest clients obtain equal rewards in the incentive stage, so clients with higher final reputation values accumulate higher rewards, achieving accurate incentives for honest clients whose identities are never exposed. On the other hand, the reputation value also affects the model aggregation weight: the higher the reputation value, the higher the client's weight may be, and thus the higher the final model accuracy.
Within each client cluster, the model updates sent by the clients are anonymous: no other participant can learn the identity of the sender of an update, achieving privacy protection. To ensure system robustness, the model updates undergo robustness detection and are aggregated with weights assigned according to model quality, which reduces the influence of malicious updates and improves the overall model accuracy. A newly designed incentive mechanism enables targeted incentives under anonymity, encouraging client contribution. Finally, decentralized global aggregation prevents the single-point-of-failure problem, with results stored on the blockchain and executed automatically by smart contracts.
Next, a blockchain-based identity anonymity and accountability privacy protection federal learning system according to embodiments of the present application is described with reference to the accompanying drawings.
FIG. 5 is an example diagram of a blockchain-based identity anonymity and accountability privacy protection federal learning system in accordance with embodiments of the present application.
As shown in fig. 5, the blockchain-based identity anonymity and accountability privacy-preserving federal learning system 10 includes: m clients 100, n aggregators 200, and a tracer 300.
The m clients 100 are configured to pay a deposit to the smart contract, download the initial global model, and train it on their local private data sets to obtain m local models; the n aggregators 200 are configured to aggregate the clients' m local models into n local aggregate models and to globally aggregate these via a leader among the aggregators to obtain the updated global model; and the tracer 300 is configured to reconstruct the tracing key from the private key shares contributed by the aggregators, decrypt the malicious updates with the tracing key to reveal the corresponding malicious clients, and reward or punish the clients and aggregators involved in the update process.
Optionally, in one embodiment of the present application, the blockchain-based identity anonymization and accountability privacy protection federal learning system further comprises: the key generation center is used for generating and distributing public and private key pairs for the client and the aggregator, carrying out system initialization and generating public parameters.
It should be noted that the foregoing explanation of the embodiment of the blockchain-based identity anonymity and accountability privacy protection federation learning method is also applicable to the blockchain-based identity anonymity and accountability privacy protection federation learning system of the embodiment, and is not repeated herein.
According to the blockchain-based identity anonymity and accountability privacy protection federal learning system provided by the embodiments of the present application, the client updates its local model, signs it locally, and sends the signed message list to the aggregator. The aggregator screens out malicious updates, which are verified by the other aggregators. Finally, the tracer reconstructs the tracing key to trace the malicious identities. The tracer rewards and holds accountable the clients and aggregators: malicious clients have their deposits deducted and reputations reduced, dishonest aggregators have their reputations reduced, and the other honest participants have their reputation values increased.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or N embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "N" is at least two, such as two, three, etc., unless explicitly defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more N executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.

Claims (10)

1. The identity anonymity and accountability privacy protection federation learning method based on the blockchain is characterized by comprising the following steps of:
downloading an initial global model for m clients which have paid deposit, and training the global model by using local private data sets of the m clients to obtain m local models;
encrypting the self public key of the client by utilizing a random number and the public key of an aggregator to which the client belongs, generating a knowledge signature of the local model by utilizing the public key set of other clients and the self private key of the client, and transmitting a message list formed by the reputation value of the client, the local model and the corresponding knowledge signature to the corresponding aggregator;
verifying the validity of each knowledge signature in the message list, detecting a local model in the verified message list, identifying malicious update and honest update of the local model, aggregating honest update according to local aggregation weight to obtain n local aggregation models, transmitting the n local aggregation models to a leader, and broadcasting the malicious update and the corresponding knowledge signature, wherein n is the cluster number of the client;
calculating the update quality of the n local aggregation models, distributing global aggregation weights for the n local aggregation models according to the update quality, and aggregating the n local aggregation models according to the global aggregation weights to obtain updated global models;
reconstructing a tracing key according to the private key shares contributed by the aggregators, decrypting the malicious update with the tracing key to identify the corresponding malicious client, and rewarding and punishing the clients and aggregators in the update process.
2. The method of claim 1, further comprising, prior to downloading the initial global model for the m clients that have paid the deposit:
clustering the m clients according to the local private data similarity to obtain n clusters of clients;
selecting a client with the highest reputation value from the clients in each cluster as an aggregator of the cluster to obtain n aggregators;
and selecting a global aggregator from the n aggregators by utilizing a consistent hash algorithm and a verifiable random function to obtain a leader.
3. The method of claim 1, wherein the deposit paid by the client is determined by the reputation value of the client, and the deposit V_i to be paid by client c_i decreases as its reputation value increases, wherein d is the unit price of a fixed model, r_i is client c_i's reputation value, and Thh is the lowest reputation value that allows a client to participate in the model training set.
4. The method of claim 1, further comprising, prior to aggregating the honest updates according to the local aggregation weights:
calculating the update score of the local model, and marking m-f updates with the lowest update score as honest updates and the other f updates as malicious updates;
and taking the update score and the reputation value as evaluation indexes, and calculating the local aggregation weight for m-f honest updates by using an entropy weight method.
5. The method of claim 2, wherein selecting a global aggregator among the n aggregators using a consistent hashing algorithm and a verifiable random function to obtain a leader comprises:
constructing a reputation hash ring, and distributing the reputation hash ring to the aggregator according to the reputation specific gravity of the aggregator;
transmitting the private key and the random number of the aggregator to a verifiable random function to calculate a hash value and a proof;
mapping the hash value to the reputation hash ring, and obtaining a corresponding aggregator as a leader.
6. The method of claim 1, wherein computing update qualities of the n local aggregate models, assigning global aggregate weights to the n local aggregate models based on the update qualities, comprises:
and calculating cosine similarity between the local aggregation model of the leader and the local aggregation models sent by other aggregators, and determining global aggregation weights of the local aggregation models according to the cosine similarity.
7. The method of claim 1, further comprising, prior to reconstructing the tracing key from the private key shares contributed by the aggregators:
verifying the validity of the signature and commitment of the malicious update, verifying whether the cosine distance between the malicious update and the average value of the n local aggregation models is smaller than 0, and contributing the aggregator's private key share to the tracer when the verification passes.
8. The method of claim 1, wherein rewarding clients and aggregators in the update process comprises:
deducting the deposit of the malicious client, reducing the reputation value of the malicious client, improving the reputation value of other clients except the malicious client, and giving fixed rewards;
and reducing the reputation values of aggregators that do not contribute their private key shares and of aggregators that aggregated the malicious client's local model, and increasing the reputation values of aggregators that contribute their private key shares and of aggregators that did not aggregate the malicious client's local model.
9. A blockchain-based identity anonymity and accountability privacy protection federation learning system, comprising:
m clients for paying deposit to the intelligent contract, downloading the initial global model, and training the global model according to the local private data set to obtain m local models;
n aggregators, configured to aggregate the m local models of the client to obtain n local aggregation models, and globally aggregate the n local aggregation models by using a leader in the aggregators to obtain an updated global model;
and the tracer, configured to reconstruct the tracing key from the private key shares contributed by the aggregators, decrypt the malicious updates with the tracing key to reveal the corresponding malicious clients, and reward or punish the clients and aggregators in the update process.
10. The system of claim 9, further comprising:
and the key generation center is used for generating and distributing public and private key pairs for the client and the aggregator, and carrying out system initialization to generate public parameters.
CN202310078935.0A 2023-01-17 2023-01-17 Identity anonymity and accountability privacy protection federal learning method based on blockchain Pending CN116187471A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310078935.0A CN116187471A (en) 2023-01-17 2023-01-17 Identity anonymity and accountability privacy protection federal learning method based on blockchain

Publications (1)

Publication Number Publication Date
CN116187471A true CN116187471A (en) 2023-05-30

Family

ID=86433889

Country Status (1)

Country Link
CN (1) CN116187471A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117150255A (en) * 2023-10-26 2023-12-01 合肥工业大学 Clustering effect verification method, terminal and storage medium in cluster federation learning
CN117150255B (en) * 2023-10-26 2024-02-02 合肥工业大学 Clustering effect verification method, terminal and storage medium in cluster federation learning
CN117290887A (en) * 2023-11-16 2023-12-26 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Account blockchain-based accountability privacy protection intelligent contract implementation method
CN117290887B (en) * 2023-11-16 2024-04-23 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Account blockchain-based accountability privacy protection intelligent contract implementation method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination