CN117395067B - User data privacy protection system and method for Byzantine-robust federated learning - Google Patents

User data privacy protection system and method for Byzantine-robust federated learning

Info

Publication number: CN117395067B
Authority: CN (China)
Application number: CN202311482298.XA
Legal status: Active
Prior art keywords: matrix, protocol, server, column, parameters
Other languages: Chinese (zh)
Other versions: CN117395067A
Inventors: 程珂, 穆旭彤, 向凤凯, 刘奕婷, 李佳雯, 王建东, 祝幸辉, 沈玉龙
Current Assignee: Xidian University
Original Assignee: Xidian University
Application filed by Xidian University
Landscapes: Storage Device Security (AREA)

Abstract

The invention provides a user data privacy protection system and method for Byzantine-robust federated learning, in which each server performs secure dimension reduction on the encrypted updated local parameters using an RPCA method improved by the SecQR and SecEigen protocols to obtain a dimension-reduction result, and clusters that result with a clustering algorithm to obtain a clustering result, which the servers use to cooperatively update the global model. The invention emphasizes protecting client privacy: by applying the SecQR and SecEigen protocols to encrypted data processing, it safeguards clients' sensitive information and provides good robustness against potential threats. Even when malicious clients mount various Byzantine attacks, malicious model parameters can be removed as far as possible before model aggregation, ensuring the robustness of the global model. The invention also achieves efficient malicious-client detection and local model aggregation while guaranteeing the robustness and security of the system.

Description

User data privacy protection system and method for Byzantine-robust federated learning
Technical Field
The invention belongs to the technical field of network security, and particularly relates to a user data privacy protection system and method for Byzantine-robust federated learning.
Background
Federated Learning (FL) is an emerging distributed machine learning mechanism that is widely used in fields such as medical image diagnosis, risk identification for banking transactions, and autonomous-driving model training. Unlike the traditional centralized machine learning paradigm, federated learning allows multiple user devices to complete model training locally, without exchanging raw data with a central server, sharing only model parameters to generate a global model. This mechanism therefore both reduces the uploading of user data and makes full use of client-side computing capacity, lowering the computational demands on the central server. Although federated learning achieves positive effects in both privacy protection and computational efficiency, it still faces security threats such as Byzantine attacks and user privacy leakage.
The Byzantine attack is a typical attack in federated learning. Its aim is to tamper with the model update parameters submitted by participants so that the actual convergence process of the model parameters deviates from the expected path, thereby negatively affecting the accuracy and convergence of the global model. Besides Byzantine attacks, the federated learning training process also suffers from user privacy leakage. Although participants train locally and do not share data directly, an honest-but-curious server may still derive some of a user's information through in-depth analysis of the local model.
Currently, referring to fig. 1, the prior art has proposed some solutions to the problems of Byzantine attacks and privacy leakage; however, most studies treat the two problems separately and ignore the inherent links between them. In essence, Byzantine attacks and privacy leakage constitute an interacting, compound problem. On the one hand, Byzantine nodes inject tampered data or model parameters into the system; if these are not properly identified and handled, the model's output may deviate from expectations, and from the deviated model the Byzantine nodes can infer the original data or specific private information of the other participants. On the other hand, participants' private data may be leaked, and the leaked information provides prior knowledge for a Byzantine attack. With such prior knowledge, an attacker can design more accurate, targeted attack strategies, exposing the other nodes to more serious threats. Therefore, there is an urgent need for a federated learning framework that achieves both privacy protection and Byzantine robustness.
Achieving privacy-preserving federated learning that resists Byzantine attacks presents many challenges. A straightforward solution is to combine general cryptographic techniques, such as Secure Multi-party Computation (SMC) and Homomorphic Encryption (HE), with existing Byzantine-robust federated learning algorithms, and some efforts have tried to address Byzantine attacks and privacy protection together; examples are as follows:
(1) DPBFL combines differential privacy and robust random aggregation rules to defend against Byzantine attacks while preserving privacy, but the introduced noise may degrade global model accuracy. (2) Flag eliminates the impact of malicious clients by adding sufficient noise; however, this leads to a significant drop in the performance of honest models. (3) PPRAgg combines homomorphic encryption and random noise obfuscation to protect model training, and evaluates Byzantine behavior using cosine similarity as a reputation score. (4) PEFL employs homomorphic encryption as the underlying technique, providing a way to penalize poisoning contributors through efficient gradient data extraction, but this requires complex computation. (5) SecureFL customizes a series of homomorphic cryptographic components based on FLTrust, and FLOD improves the privacy of FLTrust using homomorphic encryption and two-party computation; but both require homomorphic encryption operations on the participants, incurring large local computation overhead on clients, and the robustness of both methods relies on a clean dataset held by the server, which limits their application scenarios. (6) In the secure multi-party computation field, LSFL and BREA provide innovative approaches. (7) LSFL combines privacy protection with Byzantine robustness, designs a lightweight double-server secure aggregation protocol, and applies a K-nearest-neighbor algorithm to improve the robustness of federated learning, but it is easily broken through by malicious clients. (8) BREA is a secure aggregation framework against Byzantine attacks, but it requires multiple rounds of communication between the server and the participants, resulting in large communication overhead. Thus, although the above studies apply different techniques to the privacy protection and malicious-node attack problems, most cannot ensure the Byzantine robustness of the federated learning system while protecting privacy.
These problems are further amplified in large-scale and real-time federated learning environments.
To sum up, the prior art has the following disadvantages:
1. Combining common cryptographic techniques such as Secure Multi-party Computation (SMC) and Homomorphic Encryption (HE) with existing Byzantine-robust federated learning algorithms requires a large number of time-consuming ciphertext-domain operations. For example, large-scale matrix multiplication on ciphertext is needed to measure the quality of the parties' gradients, and a series of secure nonlinear functions are needed to eliminate abnormal gradients.
2. Some attempts consider Byzantine attacks and privacy protection together, but make partial compromises to guarantee the execution efficiency of the ciphertext protocols. For example, a scheme may reveal some intermediate values during model-parameter detection, or rely on simple aggregation rules that defend weakly against Byzantine attacks.
In summary, existing privacy protection schemes for Byzantine-robust federated learning struggle to balance computational efficiency, data privacy, and model robustness.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a user data privacy protection system and method for Byzantine-robust federated learning. The technical problems to be solved by the invention are realized by the following technical scheme:
In a first aspect, the present invention provides a user data privacy protection system for Byzantine-robust federated learning, comprising: at least two servers and a plurality of clients, each client having a local dataset;
S100, each server transmits the parameters of the global model to each client using an additive secret sharing protocol;
S200, each client recovers the local parameters needing local training from its own share parameters, and trains them on its local dataset to obtain updated local parameters; it then shares the updated local parameters using the additive secret sharing protocol and uploads them to the corresponding server;
S300, each server receives its corresponding shares of the encrypted updated local parameters, performs secure dimension reduction on them using the RPCA method improved by the SecQR and SecEigen protocols to obtain a dimension-reduction result, and clusters that result using a clustering algorithm to obtain a clustering result; according to the clustering result, the global model is updated in cooperation with the other servers.
In a second aspect, the invention provides a user data privacy protection method for Byzantine-robust federated learning, applied to a user data privacy protection system for Byzantine-robust federated learning, wherein the privacy protection system comprises at least two servers and a plurality of clients, each client having a local dataset; the user data privacy protection method for Byzantine-robust federated learning comprises the following steps:
S100, each server transmits the parameters of the global model to each client using an additive secret sharing protocol;
S200, each client recovers the local parameters needing local training from its own share parameters, and trains them on its local dataset to obtain updated local parameters; it then shares the updated local parameters using the additive secret sharing protocol and uploads them to the corresponding server;
S300, each server receives its corresponding shares of the encrypted updated local parameters, performs secure dimension reduction on them using the RPCA method improved by the SecQR and SecEigen protocols to obtain a dimension-reduction result, and clusters that result using a clustering algorithm to obtain a clustering result; according to the clustering result, the global model is updated in cooperation with the other servers.
In a third aspect, the present invention provides a server for performing the steps of the server in the user data privacy protection system for Byzantine-robust federated learning of the first aspect.
In a fourth aspect, the present invention provides a client for performing the steps of the client in the user data privacy protection system for Byzantine-robust federated learning of the first aspect.
The beneficial effects are that:
The invention provides a user data privacy protection system and method for Byzantine-robust federated learning, in which each server receives its corresponding shares of the encrypted updated local parameters, performs secure dimension reduction on them using the RPCA method improved by the SecQR and SecEigen protocols to obtain a dimension-reduction result, and clusters that result using a clustering algorithm to obtain a clustering result; according to the clustering result, the global model is updated in cooperation with the other servers. By applying the SecQR and SecEigen protocols to encrypted data processing, the invention can efficiently and accurately identify Byzantine attack nodes while protecting user privacy, and has good robustness. Compared with the prior art, the invention emphasizes protecting client privacy and guaranteeing the security of clients' sensitive information, and is robust against potential threats: even when malicious clients mount various Byzantine attacks, malicious model parameters can be removed as far as possible before model aggregation, ensuring the robustness of the global model. The invention also achieves efficient malicious-client detection and local model aggregation while guaranteeing system robustness and security.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
FIG. 1 is a schematic illustration of the security problems present in federated learning systems, as provided by the present invention;
FIG. 2 is a schematic diagram of the SECFEDDMC system model provided by the present invention;
FIG. 3 is a schematic process diagram of the user data privacy protection system for Byzantine-robust federated learning provided by the invention;
FIG. 4 is an RPCA workflow diagram provided by the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but embodiments of the present invention are not limited thereto.
The invention provides a user data privacy protection system for Byzantine-robust federated learning; the federated learning framework SECFEDDMC of the invention aims to realize efficient Byzantine attack detection and defense while protecting user data privacy.
The SECFEDDMC scheme focuses on data privacy, model robustness, and computational efficiency:
Data privacy: the SECFEDDMC scheme ensures the privacy of each client's local model parameters and prevents the leakage of sensitive information. There may be honest-but-curious servers, or other malicious participants, who attempt to obtain clients' private information, for example by inference from shared gradients. The SECFEDDMC scheme therefore emphasizes protecting client privacy and guaranteeing the security of clients' sensitive information.
Model robustness: because participants may submit malicious model parameters that compromise model accuracy, the SECFEDDMC scheme must be robust enough to cope with these potential threats. Even when malicious clients mount various Byzantine attacks, malicious model parameters must be removed as far as possible before model aggregation, ensuring the accuracy of the global model.
Computational efficiency: the SECFEDDMC scheme is based on a dual-server model and aims to reduce the computation and communication burden of the two servers. Even in the presence of malicious participants, the scheme prevents the overall training time from increasing substantially, so task-completion efficiency remains high while robustness and security are guaranteed.
The SECFEDDMC scheme mainly addresses the following two threats:
1) Honest-but-curious servers: servers S0 and S1 may attempt to obtain extra private information while correctly executing the protocol. Although S0 and S1 do not disrupt the federated learning process, they may, for commercial benefit, attempt to infer users' private information.
2) Malicious clients: some clients may launch Byzantine attacks in the local training phase, tampering with their private data or sending arbitrary local model parameters, thus breaking the global model or stealing other clients' private information.
In the present invention, the number of malicious clients is assumed to be limited: the number of malicious clients M does not exceed half of the total number n of clients, i.e. M < n/2. Notably, some clients may attempt to infer other users' private information from the global model; this attack pattern is orthogonal to the threats of interest here, so the SECFEDDMC scheme does not provide a defense against it.
Referring to fig. 2, in the SECFEDDMC framework, a set of clients C1, C2, …, Cn and two servers perform secure federated learning. Each client Ci has its own local dataset Di, where i ∈ [n]; the training dataset of all clients can be represented as D = ∪_{i∈[n]} Di. The common goal of the clients is to jointly train the global model by solving the optimization problem min_w E_D[L(D, w)], where w is the weight parameter of the global model and L is the loss function, e.g. the cross-entropy loss.
In connection with fig. 2 and 3, the system of the present invention comprises at least two servers and a plurality of clients, each client having a local data set.
S100, each server transmits the parameters of the global model to each client using an additive secret sharing protocol;
S200, each client recovers the local parameters needing local training from its own share parameters, and trains them on its local dataset to obtain updated local parameters; it then shares the updated local parameters using the additive secret sharing protocol and uploads them to the corresponding server;
S300, each server receives its corresponding shares of the encrypted updated local parameters, performs secure dimension reduction on them using the RPCA method improved by the SecQR and SecEigen protocols to obtain a dimension-reduction result, and clusters that result using a clustering algorithm to obtain a clustering result; according to the clustering result, the global model is updated in cooperation with the other servers.
As shown in fig. 2, in the system model SECFEDDMC, each iteration of the present invention is mainly divided into the following four steps:
① Servers S0 and S1 send the latest global model in ⟨·⟩-shared form, ⟨w^t⟩_0 and ⟨w^t⟩_1, to each client, where t denotes the current training round; when t = 0, the two servers initialize the global model structure.
② In local training, each client Ci receives the global model shares issued by the two servers and recovers w^t = ⟨w^t⟩_0 + ⟨w^t⟩_1. The client updates the model parameters w^t on its local dataset Di using the Stochastic Gradient Descent (SGD) algorithm, w_i^{t+1} = w^t − η·∇L(Di, w^t), where η is the learning rate and ∇L(Di, w^t) is the gradient of the loss on Di with respect to the current global model weights w^t. The client then ⟨·⟩-shares its updated local model parameters w_i^{t+1}, sending ⟨w_i^{t+1}⟩_0 and ⟨w_i^{t+1}⟩_1 to servers S0 and S1 respectively.
③ The parameters uploaded by the n clients, ⟨w_i^{t+1}⟩_0 and ⟨w_i^{t+1}⟩_1, are collected as matrices ⟨W^{t+1}⟩_0 and ⟨W^{t+1}⟩_1. Servers S0 and S1 jointly compute on ⟨W^{t+1}⟩_0 and ⟨W^{t+1}⟩_1 to judge and filter out malicious model parameters.
④ After the M malicious local model parameter vectors are identified and filtered out, S0 and S1 aggregate the model parameters of all honest clients and update the global model. Assuming a total of n − M honest clients, the update of the global model share on server Sj, j ∈ {0,1}, is: ⟨w^{t+1}⟩_j = (1/(n−M)) · Σ_{i∈honest} ⟨w_i^{t+1}⟩_j.
The clustering algorithm of the invention is the K-means clustering method. On the dimension-reduced data, the method classifies the clients by K-means clustering. Specifically, the invention performs K-means on the dimension-reduced model parameters Ŵ^{t+1}, denoted S^t = KMeans(Ŵ^{t+1}), where S^t is a binary vector (with entries 0 and 1) giving the clustering result of the t-th round. The invention sets the K value to 2, grouping the clients into two categories representing presumed honest clients (marked 1) and malicious clients (marked 0). In the clustering result, the larger category is regarded as the honest clients and the smaller category as the malicious clients. This setting is based on the reasonable assumption that in most cases the number of honest clients is greater than the number of malicious clients.
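As a plaintext illustration (in the real scheme this computation runs on secret-shared data across the two servers), the K = 2 clustering and majority-vote filtering described above can be sketched in numpy. The function names `kmeans2` and `detect_and_aggregate` and the deterministic center initialization are illustrative assumptions, not part of the patent:

```python
import numpy as np

def kmeans2(points, iters=50):
    """Plain K-means with K=2; deterministic init from the points nearest to
    and farthest from the data mean (an illustrative choice)."""
    dist = np.linalg.norm(points - points.mean(axis=0), axis=1)
    centers = points[[np.argmin(dist), np.argmax(dist)]].astype(float)
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(2)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

def detect_and_aggregate(reduced, params):
    """Cluster the dimension-reduced updates, treat the larger cluster as
    honest (majority assumption M < n/2), and average its parameters."""
    labels = kmeans2(reduced)
    honest_label = np.argmax(np.bincount(labels, minlength=2))
    honest = labels == honest_label
    return honest, params[honest].mean(axis=0)
```

With well-separated honest and malicious updates, the mask marks the majority cluster as honest and the aggregate is the mean of only those clients' parameters.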
The invention can efficiently and accurately identify Byzantine attack nodes while protecting user privacy, and has good robustness. Compared with the prior art, the scheme emphasizes protecting client privacy and guaranteeing the security of clients' sensitive information, and is robust against potential threats: even when malicious clients mount various Byzantine attacks, malicious model parameters can be removed as far as possible before model aggregation, ensuring the robustness of the global model. The scheme also achieves efficient malicious-client detection and local model aggregation while guaranteeing system robustness and security.
It is noteworthy that the secure RPCA algorithm needs to solve two main problems: 1) how to implement secure QR decomposition; 2) how to implement secure eigendecomposition. To this end, the present invention proposes a secure orthogonal-triangular decomposition (SecQR) protocol and a secure eigendecomposition (SecEigen) protocol based on iASS.
Referring to fig. 4, the method of performing secure dimension reduction on the encrypted updated local parameters using the RPCA method improved by the SecQR and SecEigen protocols, to obtain a dimension-reduction result, includes:
S310, taking as inputs an input matrix M1 formed by the encrypted uploaded updated local parameters and a random projection matrix Q1 ∈ R^{n×ρ}, where ρ = k + α, k is the desired number of principal components, and α is the oversampling parameter;
S320, mean-centering each column of the input matrix M1 to eliminate offsets between columns, obtaining the centered input matrix M2;
S330, multiplying the covariance matrix M2·M2^T by the projection matrix Q1 to obtain a product matrix, orthogonalizing the product matrix using the SecQR protocol, and taking the orthogonalization result as the next projection matrix; repeating the multiply-and-orthogonalize step P times yields the recursively optimized projection matrix Q2;
S340, multiplying the optimized projection matrix Q2 with the centered input matrix M2 to project the input matrix into a lower-dimensional space, obtaining the low-dimensional projection matrix Q3;
S350, computing a small symmetric matrix B from the low-dimensional projection matrix Q3 and the centered input matrix M2; this matrix represents the feature variance in the low-dimensional space and is obtained by multiplying the covariance matrix M2·M2^T by Q3 and Q3^T on its two sides;
S360, computing the secret shares of the eigenvectors Ŵ of B through the SecEigen protocol;
S370, using the eigenvectors Ŵ to reconstruct the eigenvectors W in the original space;
S380, projecting the centered input matrix M2 onto the eigenvectors W to obtain the dimension-reduction result of the input matrix M1.
In this process, the RPCA algorithm reduces the original decomposition problem of M ∈ R^{n×d} to the decomposition of a constant-size matrix B ∈ R^{ρ×ρ} with ρ ≪ d, so the computation speed is greatly improved.
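Under the assumption that steps S310 to S380 follow the standard randomized-PCA power-iteration scheme, the pipeline can be sketched in plaintext numpy (ignoring the secret sharing; `np.linalg.qr` and `np.linalg.eigh` stand in for the SecQR and SecEigen protocols). The function name `randomized_pca` and the final per-client score scaling are illustrative assumptions:

```python
import numpy as np

def randomized_pca(M1, k, alpha=5, P=2, seed=0):
    """Plaintext sketch of steps S310-S380: randomized PCA via power iteration.
    M1 is n x d (one row of model parameters per client); returns an n x k
    embedding, one row per client.  alpha is the oversampling parameter and
    P the number of power iterations."""
    rng = np.random.default_rng(seed)
    n = M1.shape[0]
    rho = k + alpha
    Q = rng.standard_normal((n, rho))           # S310: random projection matrix Q1
    M2 = M1 - M1.mean(axis=0, keepdims=True)    # S320: mean-center each column
    for _ in range(P):                          # S330: power iteration
        A = (M2 @ M2.T) @ Q                     #   multiply by covariance M2*M2^T
        Q, _ = np.linalg.qr(A)                  #   orthogonalize (SecQR's role)
    Q3 = Q.T @ M2                               # S340: rho x d low-dim projection
    B = Q3 @ Q3.T                               # S350: small rho x rho symmetric matrix
    vals, vecs = np.linalg.eigh(B)              # S360: eigendecomposition (SecEigen's role)
    order = np.argsort(vals)[::-1][:k]          #   keep the top-k eigenpairs
    W = Q @ vecs[:, order]                      # S370: lift eigenvectors back to R^n
    return W * np.sqrt(np.maximum(vals[order], 0.0))  # S380: per-client k-dim scores
```

Each row of the returned matrix is one client's k-dimensional embedding, which is what the subsequent K-means step clusters.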
In the design of the dimension-reduction algorithm, the randomized principal component analysis method (RPCA) is selected to map model parameters from the original high-dimensional space to a lower dimension, retaining the main features as far as possible while reducing computational complexity. RPCA is a dimension-reduction technique for high-dimensional data. Compared with classical Principal Component Analysis (PCA), RPCA reduces computational complexity by using randomization while maintaining an effective perception of the major variation in the data. The basic idea of RPCA is to map the original high-dimensional data into a low-dimensional subspace by a random linear mapping, and then perform PCA analysis in this low-dimensional subspace. RPCA not only accelerates PCA analysis but also reduces its storage requirements, making it better suited to large-scale high-dimensional data processing.
In a given round there are n clients' model parameter matrices W = [w_1, w_2, …, w_n], where each w_i is a d-dimensional vector representing the model parameters of the i-th client. The invention applies RPCA to the parameter matrix W ∈ R^{n×d}, expressed as Ŵ = P_k(W), where P_k denotes the RPCA projection operator and k is the reduced dimension (k ≪ d), whose value can be adjusted according to the specific client data distribution and computational overhead. Ŵ is the dimension-reduced model parameter matrix obtained after RPCA projection.
Referring to FIG. 4, the RPCA algorithm of the present invention employs two secret sharing mechanisms, namely ⟨·⟩-sharing and ⟦·⟧-sharing.
⟨·⟩-sharing is classical Additive Secret Sharing (ASS). A value v is shared between the two servers S0 and S1, with each server Si, i ∈ {0,1}, holding a share ⟨v⟩_i satisfying v = ⟨v⟩_0 + ⟨v⟩_1.
⟦·⟧-sharing is Improved Additive Secret Sharing (iASS), built on ⟨·⟩-sharing. In this mechanism a value v is represented by two quantities satisfying the following conditions: (1) a random number r_v is ⟨·⟩-shared between S0 and S1 as ⟨r_v⟩_0 and ⟨r_v⟩_1; (2) δ_v = v − r_v; (3) δ_v is public to both S0 and S1. The share of server Si is formally expressed as ⟦v⟧_i = (δ_v, ⟨r_v⟩_i), where i ∈ {0,1}.
The basic operational protocols of iASS are as follows:
Sharing protocol: the data owner converts the data v into ⟦·⟧-shared form. First, the data owner generates two ⟨·⟩-shared random numbers ⟨r_v⟩_0 and ⟨r_v⟩_1 and computes δ_v = v − ⟨r_v⟩_0 − ⟨r_v⟩_1. Next, δ_v and ⟨r_v⟩_i are sent to Si, giving the ⟦·⟧-share form of v, i.e. ⟦v⟧_i = (δ_v, ⟨r_v⟩_i).
Reconstruction protocol: to reconstruct v from ⟦v⟧, the two servers exchange ⟨r_v⟩_i and locally compute v = δ_v + ⟨r_v⟩_0 + ⟨r_v⟩_1.
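A minimal plaintext sketch of the sharing and reconstruction protocols above, over floats (the function names are illustrative; a real deployment works over a ring with uniformly random masks):

```python
import random

def iass_share(v, rand=random.random):
    """Data owner converts value v into the (delta, r)-share form of iASS:
    delta_v is public to both servers; r_v is additively shared between them."""
    r0, r1 = rand(), rand()           # <r_v>_0 and <r_v>_1
    delta = v - r0 - r1               # public offset delta_v = v - r_v
    return (delta, r0), (delta, r1)   # [[v]]_0 for S0, [[v]]_1 for S1

def iass_reconstruct(share0, share1):
    """The two servers exchange <r_v>_i and recover v = delta_v + r_v."""
    (delta, r0), (_, r1) = share0, share1
    return delta + r0 + r1
```

Note that either server alone sees only the public δ_v and one random summand of r_v, which reveals nothing about v.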
Polynomial computation protocol: given ⟦a⟧, ⟦b⟧ and public constants c1, c2, servers S0 and S1 can compute ⟦y⟧ with y = c1·a + c2·b. Specifically, Si locally computes δ_y = c1·δ_a + c2·δ_b and ⟨r_y⟩_i = c1·⟨r_a⟩_i + c2·⟨r_b⟩_i to obtain ⟦y⟧_i = (δ_y, ⟨r_y⟩_i).
Multiplication protocol: given the ⟦·⟧-share forms of a and b, i.e. server Si holds ⟦a⟧_i and ⟦b⟧_i, protocol SecMul outputs ⟦y⟧_i, i.e. Si holds ⟦y⟧_i where y = a·b. Offline phase: (1) the two servers Si each randomly generate ⟨r_y⟩_i; (2) through ⟨·⟩-sharing multiplication, ⟨r_ab⟩_i is computed from ⟨r_a⟩_i and ⟨r_b⟩_i. In the online phase, each server locally computes ⟨δ_y⟩_i = i·δ_a·δ_b + δ_a·⟨r_b⟩_i + δ_b·⟨r_a⟩_i + ⟨r_ab⟩_i − ⟨r_y⟩_i and sends the result ⟨δ_y⟩_i to the other server S_{1−i}. Each server Si then computes δ_y = ⟨δ_y⟩_0 + ⟨δ_y⟩_1 to obtain ⟦y⟧_i. This process is formally expressed as ⟦y⟧ = SecMul(⟦a⟧, ⟦b⟧).
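The offline/online structure of the multiplication protocol can be illustrated with a one-process simulation over floats. The `dealer` function is a trusted-dealer stand-in for the offline phase (it reads both servers' shares, which a real offline protocol would never do), and all names are illustrative:

```python
import random

def share(v):
    """iASS-share value v: public delta plus additively shared random r."""
    r0, r1 = random.random(), random.random()
    return (v - r0 - r1, r0), (v - r0 - r1, r1)

def dealer(sh_a, sh_b):
    """Offline-phase stand-in: produce shares of r_a*r_b and a fresh shared r_y."""
    ra = sh_a[0][1] + sh_a[1][1]
    rb = sh_b[0][1] + sh_b[1][1]
    rab0 = random.random()
    return (rab0, ra * rb - rab0), (random.random(), random.random())

def secmul(sh_a, sh_b, offline):
    """Online phase of SecMul: each S_i computes its <delta_y>_i locally, the
    servers exchange them, and the sum is the public delta_y of y = a*b."""
    (da, ra0), (_, ra1) = sh_a
    (db, rb0), (_, rb1) = sh_b
    (rab0, rab1), (ry0, ry1) = offline
    # <delta_y>_i = i*da*db + da*<r_b>_i + db*<r_a>_i + <r_ab>_i - <r_y>_i
    d0 = 0 * da * db + da * rb0 + db * ra0 + rab0 - ry0
    d1 = 1 * da * db + da * rb1 + db * ra1 + rab1 - ry1
    dy = d0 + d1                       # exchanged and summed by both servers
    return (dy, ry0), (dy, ry1)
```

Summing the two local contributions gives δ_y = (δ_a + r_a)(δ_b + r_b) − r_y = a·b − r_y, so the output is a valid ⟦·⟧-share of the product.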
Three-input multiplication protocol: given the ⟦·⟧-share forms of a, b, c, the protocol outputs ⟦y⟧ with y = a·b·c. Offline phase: (1) the two servers Si each randomly generate ⟨r_y⟩_i; (2) the two servers generate ⟨·⟩-sharings of the four terms r_ab, r_bc, r_ac and r_abc. In the online phase, the two servers respectively compute ⟨δ_y⟩_i = i·δ_a·δ_b·δ_c + δ_a·δ_b·⟨r_c⟩_i + δ_a·δ_c·⟨r_b⟩_i + δ_b·δ_c·⟨r_a⟩_i + δ_a·⟨r_bc⟩_i + δ_b·⟨r_ac⟩_i + δ_c·⟨r_ab⟩_i + ⟨r_abc⟩_i − ⟨r_y⟩_i, then exchange ⟨δ_y⟩_i to recover δ_y, thus obtaining ⟦y⟧_i. For ease of representation this process is written, analogously to the multiplication protocol, as ⟦y⟧ = SecMul(⟦a⟧, ⟦b⟧, ⟦c⟧). Similarly, this protocol can be generalized to an n-input multiplication protocol.
Matrix multiplication protocol: given ⟦·⟧-shared matrices A_{n×d} and B_{d×k}, i.e. server Si holds ⟦A⟧_i and ⟦B⟧_i, the goal of protocol SecMatMul is to output ⟦C⟧ with C = A·B. Offline phase: (1) the two servers Si each randomly generate ⟨r_C⟩_i; (2) through ⟨·⟩-sharing multiplication, ⟨r_AB⟩_i is computed from ⟨r_A⟩_i and ⟨r_B⟩_i. Online phase: each party locally computes ⟨δ_C⟩_i = i·δ_A·δ_B + δ_A·⟨r_B⟩_i + ⟨r_A⟩_i·δ_B + ⟨r_AB⟩_i − ⟨r_C⟩_i, then the parties exchange ⟨δ_C⟩_i and obtain δ_C. This process is formally expressed as ⟦C⟧ = SecMatMul(⟦A⟧, ⟦B⟧).
As an alternative embodiment of the present invention, S330 includes:
S331, multiplying the covariance matrix M2·M2^T by the projection matrix Q1 to obtain the product matrix A ∈ R^{n×ρ};
S332, assigning the R matrix in the SecQR protocol to be the product matrix, and initializing the Q matrix as the corresponding identity matrix or an all-zero matrix;
S333, extracting the lower-triangular part of the product matrix column by column, and calculating the correlation values of each column of the lower-triangular part;
S333 of the present invention includes:
S3331, extracting column by column the k-th column x_k of the lower-triangular part of the product matrix;
S3332, computing the 2-norm ‖x_k‖ and the sign bit ρ_k of the k-th column x_k;
S3333, using the 2-norm ‖x_k‖ and the sign bit ρ_k to compute the Householder vector v_k for each column;
S3334, using the Householder vector v_k and the k-th column x_k to construct the k-th column of the transformation matrix;
S3335, using the 2-norm to normalize the Householder vector v_k, obtaining the direction adjustment u_k;
S3336, using the direction adjustment u_k and the sign bit ρ_k to compute the final direction adjustment;
S3337, taking the direction adjustments and the k-th column of the transformation matrix as the correlation values.
S334, updating the R matrix and the Q matrix by using the correlation value of each column of the lower triangular matrix;
S335, taking the updated Q matrix as a projection matrix of the next time;
S336, repeating S331 to S335 for a total of P times yields a recursively optimized projection matrix Q2.
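In the clear, the column-by-column procedure that S3331 to S3337 carry out on shares is classical Householder QR. A plaintext numpy reference sketch follows, with comments marking which secure sub-protocol each nonlinear step corresponds to (the function name and variable names are illustrative):

```python
import numpy as np

def householder_qr(A):
    """Plaintext Householder QR: column by column, build a Householder vector
    from the k-th lower-triangular column using its 2-norm and sign bit, then
    apply the reflection to R and accumulate it into Q."""
    A = A.astype(float)
    m, n = A.shape
    Q = np.eye(m)
    R = A.copy()
    for k in range(min(m, n)):
        x = R[k:, k]                          # k-th column of the lower-triangular part
        norm_x = np.linalg.norm(x)            # 2-norm (role of the secure sqrt/2-norm)
        if norm_x == 0.0:
            continue
        sign = 1.0 if x[0] >= 0 else -1.0     # sign bit (role of SecMSB)
        v = x.copy()
        v[0] += sign * norm_x                 # Householder vector
        v /= np.linalg.norm(v)                # normalization (role of secure division)
        R[k:, :] -= 2.0 * np.outer(v, v @ R[k:, :])   # reflect the trailing rows of R
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)   # accumulate the reflection into Q
    return Q, R
```

The output satisfies A = Q·R with Q orthogonal and R upper-triangular, which is exactly the relation the SecQR protocol establishes on ⟦·⟧-shares.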
The SecQR protocol involves a number of complex nonlinear computations, such as division and the 2-norm. For division, the main challenge is to compute the reciprocal 1/x. The reciprocal is computed approximately by Newton's iteration, with the specific iterative formula y_{n+1} = y_n·(2 − x·y_n) = 2·y_n − x·y_n·y_n, and initial value y_0 = 3·e^{0.5−x} + 0.003.
For the secure computation of e^x in the iterative formula, a secure exponentiation protocol is defined: given input ⟦x⟧, it outputs ⟦e^x⟧. First, server S0 initializes its share to e^{δ_x}·e^{⟨r_x⟩_0} and server S1 initializes its share to e^{⟨r_x⟩_1}; then the two servers call the SecMul protocol to compute their product, which equals e^{δ_x + r_x} = e^x.
Based on the above, division is defined as a protocol that takes the shares of x as input and outputs the shares of 1/x. First, the two servers compute the initial value y_0; then, the two servers invoke the 3-input multiplication protocol of iASS to optimize the computation of x·y_n·y_n. The number of iterations of the algorithm may be adjusted according to the error requirement and is set to 10 by default herein. Finally, the two servers call the SecMul protocol to calculate the final result.
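Stripped of the secret sharing, the reciprocal iteration can be checked in plaintext; the initialization y_0 = 3e^{0.5−x} + 0.003 and the default of 10 iterations are the values stated above (the iteration converges for inputs in a bounded positive range):

```python
import math

def newton_reciprocal(x, iters=10):
    # y_{n+1} = y_n * (2 - x * y_n) converges quadratically to 1/x
    y = 3 * math.exp(0.5 - x) + 0.003   # y_0 from the protocol
    for _ in range(iters):
        y = y * (2 - x * y)
    return y
```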
The 2-norm is the square root of the sum of the squares of the elements of a vector. For a vector x = (x[0], x[1], …, x[n]), the 2-norm is calculated as ‖x‖₂ = √(x[0]² + x[1]² + … + x[n]²).
For the ciphertext computation of the 2-norm, a secure square-root protocol needs to be implemented: given the shares of x as input, it outputs the shares of √x. The invention performs the approximate square-root calculation by Newton's iteration, with the iterative formula y_{n+1} = 0.5·(y_n + x/y_n), initialized as y_0 = x. Thus, the two servers S_i first initialize their shares of y_0 and then obtain the shares of √x through iterative calculation.
On this basis, the two servers securely calculate the 2-norm, defined as a protocol that takes the shares of a vector as input and outputs the shares of its 2-norm, by combining the secure square-root protocol with the secure multiplication of the element squares.
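In plaintext, the square-root iteration and the 2-norm built on top of it can be sketched as follows (the protocol evaluates the same recurrences on shares; the sketch assumes x > 0):

```python
def newton_sqrt(x, iters=50):
    # y_{n+1} = 0.5 * (y_n + x / y_n), y_0 = x (Newton's method for the square root)
    y = float(x)
    for _ in range(iters):
        y = 0.5 * (y + x / y)
    return y

def two_norm(vec):
    # 2-norm: square root of the sum of squared elements
    return newton_sqrt(sum(e * e for e in vec))
```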
For the computation of the sign bit of x, the state-of-the-art most-significant-bit (MSB) protocol SecMSB is employed herein.
The secure computation of the sign bit of x, i.e. ρ = −2·MSB(x) + 1, is then realized based on the SecMSB protocol.
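In the clear, the mapping ρ = −2·MSB(x) + 1 simply turns the two's-complement sign bit into ±1 (the 16-bit width below is an arbitrary assumption for illustration; SecMSB performs the same extraction on shares):

```python
def msb(x, bits=16):
    # most significant bit of x in a `bits`-wide two's-complement encoding
    return ((x & ((1 << bits) - 1)) >> (bits - 1)) & 1

def sign_bit(x, bits=16):
    # rho = -2 * MSB(x) + 1: -1 for negative x, +1 for non-negative x
    return -2 * msb(x, bits) + 1
```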
In summary, this section implements the secure QR decomposition for the two-party servers, defined as SecQR: given a matrix A of size m×n in shared form as input, it outputs the shares of the matrices Q and R, an orthogonal matrix and an upper triangular matrix respectively, such that A = QR. A specific implementation is shown in Algorithm 1 below.
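For reference, a plaintext Householder QR that mirrors the column-by-column structure of S3331 to S3337 (a sketch of the computation SecQR performs on shares, not the two-party protocol itself):

```python
import numpy as np

def householder_qr(A):
    """Column-by-column Householder QR: for each column, use its 2-norm and
    sign bit to build a Householder vector, then update R and accumulate Q."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for k in range(min(m, n)):
        x = R[k:, k]                              # k-th column of the lower part
        norm = np.linalg.norm(x)                  # 2-norm
        if norm == 0.0:
            continue
        sign = 1.0 if x[0] >= 0 else -1.0         # sign bit
        v = x.copy()
        v[0] += sign * norm                       # Householder vector
        v /= np.linalg.norm(v)                    # normalization (direction adjustment)
        R[k:, :] -= 2.0 * np.outer(v, v @ R[k:, :])   # apply reflection to R
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)   # accumulate reflection into Q
    return Q, R

A = np.random.default_rng(2).standard_normal((5, 3))
Q, R = householder_qr(A)
```

Each reflection zeroes the below-diagonal entries of one column, so after min(m, n) steps R is upper triangular and Q, the product of the reflections, is orthogonal with A = QR.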
Algorithm 1. SecQR protocol
As an alternative embodiment of the present invention, S360 includes:
S361, the two servers negotiate to determine the additive secret shares of a non-zero random number, a random square matrix, and the inverse of the random square matrix;
S362, the two servers cooperatively calculate the similarity matrix of the small symmetric matrix B by using the additive secret shares of the random square matrix and of its inverse;
S363, the two servers blind the similarity matrix by using the additive secret shares of the non-zero random number;
S364, the first server sends its blinded additive secret share of the similarity matrix to the second server;
S365, the second server calculates the eigenvalues tλ_j and the eigenvectors V′ of the similarity matrix from the additive secret share sent by the first server, and sends the eigenvalues and eigenvectors of the similarity matrix back to the first server;
S366, each server substitutes the eigenvalues into an inverse function, and recovers the secret shares of the eigenvalues of the small symmetric matrix B by using the inverse function;
S367, each server multiplies the random square matrix with the eigenvectors of the similarity matrix to obtain the secret shares of the eigenvectors of the small symmetric matrix B.
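A plaintext simulation of S361 to S367 (illustrative only: in the protocol, P, t and all intermediate matrices exist solely as additive secret shares, and the blinding function is assumed here to be simple scaling by the non-zero random number t):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((4, 4))
B = G + G.T                              # small symmetric matrix B
P = rng.standard_normal((4, 4))          # random square matrix (S361)
t = 2.5                                  # non-zero random number (S361)

Y = t * (np.linalg.inv(P) @ B @ P)       # similarity transform + blinding (S362-S363)
t_lam, V_prime = np.linalg.eig(Y)        # decomposed "in the clear" (S365)
lam = t_lam / t                          # inverse function recovers eigenvalues (S366)
V = P @ V_prime                          # eigenvectors of B (S367, by Lemma 2)
```

Because Y is similar to t·B, its eigenvalues are exactly t·λ_j; this is why the second server can decompose Y without learning B itself, since it only ever sees blinded values.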
Feature decomposition is a key step in randomized principal component analysis (RPCA); its function is to reveal the intrinsic properties of a matrix. The process decomposes the matrix into a set of eigenvalues and corresponding eigenvectors, exposing the key characteristics of the original matrix. In a plaintext environment, computing eigenvalues and eigenvectors typically requires multiple iterations, such as QR decomposition. In an encrypted environment, however, performing such computation directly on the matrix elements leads to a high number of interaction rounds under the dual-server architecture, making feature decomposition difficult. To solve this problem, two lemmas are introduced, inspired by existing work. The two lemmas exploit properties of the matrix to convert the complex feature-decomposition problem under the dual-server architecture into simpler operations, reducing the computational difficulty.
Lemma 1. If λ is an eigenvalue of matrix A, then f(λ) is also an eigenvalue of the matrix f(A), where f is a polynomial of λ. In addition, if vector v is an eigenvector of matrix A at eigenvalue λ, then v is also an eigenvector of f(A) at eigenvalue f(λ).
Lemma 2. If A and B are similar matrices (i.e., A ∼ B) with B = P⁻¹AP, then matrices A and B have the same eigenvalues. If v is an eigenvector of matrix A at eigenvalue λ, then P⁻¹v is an eigenvector of B at eigenvalue λ.
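Both lemmas are straightforward to verify numerically; the polynomial f and the matrix P below are arbitrary examples chosen for illustration:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, V = np.linalg.eigh(A)                   # eigenvalues/eigenvectors of A

# Lemma 1: f(lambda) is an eigenvalue of f(A), with the same eigenvector.
f_A = 3 * A @ A + 2 * A + 5 * np.eye(2)      # f(A) = 3A^2 + 2A + 5I
f_lam = 3 * lam**2 + 2 * lam + 5

# Lemma 2: B = P^{-1} A P keeps the eigenvalues; eigenvectors become P^{-1} v.
P = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = np.linalg.inv(P) @ A @ P
```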
In the SecEigen protocol, a random matrix P and a random function f are used to convert the secret matrix X into a matrix Y. The random matrix P and the random function f ensure that the secret matrix X is blinded, so that the matrix Y can be recovered and its eigenvalues and eigenvectors quickly calculated in the clear. Then, by means of the inverse function f⁻¹ and the matrix P, the shares of the eigenvalues and eigenvectors of X can be converted back secretly. The generation of f and f⁻¹ follows ideas from prior work. Algorithm 2 gives the detailed process of the secure feature decomposition.
Algorithm 2. SecEigen protocol
The SECFEDDMC framework can be fully implemented based on the above protocols. It consists mainly of a dual-server execution phase and a client local-training phase. In the dual-server execution phase, the secure dimension-reduction processing is performed first, followed by the secure clustering and secure aggregation operations.
As an optional embodiment of the present invention, the cooperatively updating the global model with other servers according to the clustering result includes:
performing cooperative calculation with other servers according to the clustering result to judge and filter out malicious local parameters to obtain honest local parameters;
And updating the global model according to the honest local parameters.
It should be noted that the parameter matrix obtained after dimension reduction is used for clustering to identify malicious clients. After the malicious clients are identified and removed, the original parameter matrix is securely aggregated. In the client local-training phase, each client performs model training and uploads its secret-shared parameters to the cloud servers. The specific implementation is shown in Algorithm 3.
Algorithm 3. SECFEDDMC framework
The invention provides a user data privacy protection method for Byzantine-robust federated learning, which is applied to the above user data privacy protection system for Byzantine-robust federated learning; the privacy protection system comprises at least two servers and a plurality of clients, each client having a local data set. The user data privacy protection method for Byzantine-robust federated learning comprises the following steps:
S100, each server transmits parameters of the global model to each client by using an addition secret sharing protocol;
S200, each client recovers local parameters needing local training according to own share parameters, and trains the local parameters by utilizing a local data set to obtain updated local parameters; sharing the updated local parameters by using the addition secret sharing protocol and uploading the updated local parameters to a corresponding server;
S300, each server receives its corresponding share of the encrypted updated local parameters, performs secure dimension-reduction processing on the encrypted updated local parameters by using the RPCA method improved by the SecQR protocol and the SecEigen protocol to obtain a dimension-reduction result, and clusters the dimension-reduction result by using a clustering algorithm to obtain a clustering result; and cooperatively updates the global model with the other servers according to the clustering result.
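S100 and S200 rest on additive secret sharing: a parameter is split into random shares that are individually meaningless but sum to the secret, and the servers can add shares locally to aggregate. A minimal sketch (the 32-bit ring size is an assumption for illustration):

```python
import random

MOD = 1 << 32  # ring Z_{2^32}

def share(x):
    # split x into two additive shares: x = s0 + s1 (mod 2^32)
    s0 = random.randrange(MOD)
    return s0, (x - s0) % MOD

def reconstruct(s0, s1):
    return (s0 + s1) % MOD

a0, a1 = share(5)
b0, b1 = share(7)
# each server adds its local shares; reconstruction yields the sum 5 + 7
c0, c1 = (a0 + b0) % MOD, (a1 + b1) % MOD
```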
The invention provides a server that performs the steps of the server in the above user data privacy protection system for Byzantine-robust federated learning.
The invention provides a client that performs the steps of the client in the above user data privacy protection system for Byzantine-robust federated learning.
The invention provides a user data privacy protection system for Byzantine-robust federated learning. Compared with traditional federated-learning privacy-protection schemes, the invention detects the model parameters of the clients before aggregating them. The detection process consists of two modules: dimension reduction and clustering. In addition, in order to safely detect malicious client behavior without revealing the private model parameters of honest clients, the invention mainly comprises two core steps: (1) extracting the main features of the high-dimensional model by constructing a secure randomized principal component analysis (RPCA) algorithm; (2) analyzing the extracted main features by using a secure clustering technique to detect malicious clients.
The technical effects of the present invention are explained by experiments as follows.
1. Experimental setup
SECFEDDMC was implemented using PyTorch, and the experiments were completed on 2 servers, each equipped with 2 NVIDIA RTX 3090 GPUs, an Intel i9-10900K CPU, 128 GB of memory, and the Ubuntu 18.04 system.
The invention evaluates the performance of SECFEDDMC on three standard benchmark datasets and correspondingly sets up three different network architectures. The three benchmark datasets are MNIST, FEMNIST, and CIFAR-10. The invention sets up 100 clients and creates the local client datasets using the Dirichlet distribution to obtain a non-IID distribution. Specifically, q_j ∼ Dir_N(β) is sampled from the Dirichlet distribution, and a proportion q_j of the instances of class j is allocated to client i. The parameter β controls the degree of data imbalance among clients; the smaller the value, the more unbalanced the data. By default, β is set to 5.
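The Dirichlet-based non-IID partition described above can be sketched as follows (a hypothetical helper, not the paper's code; `beta` plays the role of β, and smaller values yield more unbalanced splits):

```python
import numpy as np

def dirichlet_partition(labels, n_clients, beta, seed=0):
    """For each class j, draw proportions q_j ~ Dir_N(beta) and assign a
    q_j[i] fraction of the class-j samples to client i."""
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(n_clients)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        q = rng.dirichlet(np.full(n_clients, beta))
        cuts = (np.cumsum(q)[:-1] * len(idx)).astype(int)
        for i, part in enumerate(np.split(idx, cuts)):
            client_idx[i].extend(part.tolist())
    return client_idx

labels = np.repeat(np.arange(10), 100)            # toy labels: 10 classes x 100 samples
parts = dirichlet_partition(labels, n_clients=5, beta=5.0)
```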
The invention randomly selects 28 clients as malicious clients by default. The attack methods adopted include: label-flipping attacks, scaling attacks, Gaussian attacks, LIT attacks, and Sybil attacks.
The comparative defense methods set by the invention include PPRAgg, PEFL, LSFL, and BREA. These methods do not require the server to hold a trusted dataset and can output a list of malicious clients to evaluate the detection effect. In the comparison experiments, the same model structure, learning rate (η), batch size (B), data-distribution parameter (β) and number of local training epochs (E) were used, replacing only the defense method. Each defense method keeps the same settings as in its original scheme to ensure fairness of the experiment. Table 1 summarizes the default settings of the experiment.
Table 1 default parameter settings
2. Detection effect of malicious client
This section compares the detection results of SECFEDDMC with other detection methods under different attacks and different datasets, and studies the influence of different numbers of malicious clients and of non-IID data distribution on the defense. The method herein is compared with the malicious-client detection results of 4 state-of-the-art methods on three datasets. As can be seen from Table 2, SECFEDDMC outperforms the existing methods under the different attack settings and achieves the best detection results.
TABLE 2 malicious client detection results under different attacks and different detection methods
3. Accuracy of global model
Table 3 shows the test accuracy (TACC) and attack success rate (ASR) of the global model under the different attack and detection methods on the three benchmark datasets. The "No-attack" entry represents the case where the global model is learned by only the 72 honest clients without the intervention of malicious clients. For the remaining four attacks, the experimental setup contains 28 malicious clients and 72 honest clients. LIT-attack and Scaling-attack are targeted backdoor attacks, so the ASR of each method was tested for them. Under the other attack environments, the performance of the SECFEDDMC global model reaches the same level as under the No-attack environment. Compared with the other defense methods, the SECFEDDMC global model performs best in accuracy. Under the LIT-attack and Scaling-attack scenarios, the ASR of SECFEDDMC is the lowest, which indicates that SECFEDDMC effectively prevents backdoor implantation by removing the malicious clients.
4. Computational overhead
To evaluate the computational overhead of SECFEDDMC under the dual cloud-server model, detailed runtime tests of each critical step are performed herein. Note that this overhead evaluation does not include local training or the time to transmit parameters to the servers. The SECFEDDMC framework consists essentially of three components: secure dimension reduction, secure clustering, and secure aggregation. The secure clustering part adopts the SECKMEANS scheme, and the secure dimension reduction is realized by the core protocols SecQR, SecEigen, and SecMatMul. The evaluation was based on the three datasets, and the experimental results are shown in Table 4. As shown in Table 5, comparing SECFEDDMC with the other defense methods, the computational overhead of SECFEDDMC is feasible in practical applications; it remains small while ensuring security, which makes wide deployment in practical environments possible.
TABLE 3 test accuracy of Global models under different attacks and different detection methods
TABLE 4 SECFEDDMC run times for the various steps
Table 5 computational overhead of different methods on three datasets
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Although the application is described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the "a" or "an" does not exclude a plurality.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (6)

1. A user data privacy protection system for Byzantine-robust federated learning, comprising: at least two servers and a plurality of clients, each client having a local data set;
S100, each server transmits parameters of the global model to each client by using an addition secret sharing protocol;
S200, each client recovers local parameters needing local training according to own share parameters, and trains the local parameters by utilizing a local data set to obtain updated local parameters; sharing the updated local parameters by using the addition secret sharing protocol and uploading the updated local parameters to a corresponding server;
S300, each server receives its corresponding share of the encrypted updated local parameters, performs secure dimension-reduction processing on the encrypted updated local parameters by using the RPCA method improved by the SecQR protocol and the SecEigen protocol to obtain a dimension-reduction result, and clusters the dimension-reduction result by using a clustering algorithm to obtain a clustering result; and cooperatively updates the global model with the other servers according to the clustering result;
The method for performing secure dimension reduction processing on the encrypted updated local parameters by using the RPCA method modified by SecQR protocol and SecEigen protocol to obtain dimension reduction results comprises the following steps:
S310, taking an input matrix M1 formed by the encrypted uploaded updated local parameters and a random projection matrix Q1 ∈ Q n×ρ as inputs; where ρ = k+α, k is the desired number of principal components, and α is the oversampling parameter;
s320, carrying out mean value centering on each column of the input matrix M1 to eliminate offset among columns and obtain a centered input matrix M2;
S330, multiplying the projection matrix Q1 and the covariance matrix M2M2 T once to obtain a multiplication matrix, orthogonalizing the multiplication matrix by using SecQR protocol, taking the orthogonalization result as the projection matrix of the next time, repeating the steps of calculating the multiplication matrix P times and orthogonalizing the multiplication matrix to obtain a recursively optimized projection matrix Q2;
S340, multiplying the centralized input matrix M2 by the optimized projection matrix Q2 to project the input matrix M to a lower dimensional space to obtain a low-dimensional projection matrix Q3;
S350, calculating a small symmetric matrix B by using the low-dimensional projection matrix Q3 and the centralized input matrix M2, wherein the matrix represents the characteristic variance in a low-dimensional space and is obtained by multiplying Q3 and Q3 T on two sides of a covariance matrix M2M2 T;
S360, calculating the secret shares of the eigenvectors of B through the SecEigen protocol;
S370, reconstructing the eigenvector W in the original space by using the eigenvectors of B;
S380, projecting the centralized input matrix M2 onto the eigenvector W to obtain the dimension-reduction result of the input matrix M1;
S330 includes:
S331, multiplying the projection matrix Q1 and the covariance matrix M2M2 T once to obtain a multiplication matrix A n×ρ;
s332, assigning an R matrix in SecQR protocols as the multiplication matrix, and initializing a Q matrix as a corresponding identity matrix or all-zero matrix;
S333, extracting a lower triangular matrix of the multiplication matrix column by column, and calculating a correlation value of each column of the lower triangular matrix;
s334, updating the R matrix and the Q matrix by using the correlation value of each column of the lower triangular matrix;
S335, taking the updated Q matrix as a projection matrix of the next time;
s336, repeating S331 to S335 for P times to obtain a recursively optimized projection matrix Q2;
S333 includes:
S3331, extracting the kth column of the lower triangular matrix of the multiplication matrix column by column;
S3332, calculating the 2-norm and the sign bit of the kth column;
S3333, calculating the Householder vector of each column by using the 2-norm and the sign bit;
S3334, constructing the kth column of the transformation matrix by using the Householder vector and the kth column;
S3335, normalizing the Householder vector by using the 2-norm to obtain a direction adjustment quantity;
S3336, calculating the final direction adjustment quantity by using the direction adjustment quantity and the sign bit;
S3337, taking the final direction adjustment quantity and the kth column of the transformation matrix as the correlation values;
S360 includes:
S361, the two servers negotiate to determine the additive secret shares of a non-zero random number, a random square matrix, and the inverse of the random square matrix;
S362, the two servers cooperatively calculate the similarity matrix of the small symmetric matrix B by using the additive secret shares of the random square matrix and of its inverse;
S363, the two servers blind the similarity matrix by using the additive secret shares of the non-zero random number;
S364, the first server sends its blinded additive secret share of the similarity matrix to the second server;
S365, the second server calculates the eigenvalues tλ_j and the eigenvectors V′ of the similarity matrix from the additive secret share sent by the first server, and sends the eigenvalues and eigenvectors of the similarity matrix back to the first server;
S366, each server substitutes the eigenvalues into an inverse function, and recovers the secret shares of the eigenvalues of the small symmetric matrix B by using the inverse function;
S367, each server multiplies the random square matrix with the eigenvectors of the similarity matrix to obtain the secret shares of the eigenvectors of the small symmetric matrix B.
2. The user data privacy protection system for Byzantine-robust federated learning of claim 1, wherein the clustering algorithm in S300 is the K-means clustering method.
3. The user data privacy protection system for Byzantine-robust federated learning of claim 1, wherein the updating the global model in coordination with other servers based on the clustering result comprises:
performing cooperative calculation with other servers according to the clustering result to judge and filter out malicious local parameters to obtain honest local parameters;
And updating the global model according to the honest local parameters.
4. A user data privacy protection method for Byzantine-robust federated learning, characterized by being applied to the user data privacy protection system for Byzantine-robust federated learning according to any one of claims 1 to 3, wherein the privacy protection system comprises at least two servers and a plurality of clients, each client having a local data set; the user data privacy protection method for Byzantine-robust federated learning comprises the following steps:
S100, each server transmits parameters of the global model to each client by using an addition secret sharing protocol;
S200, each client recovers local parameters needing local training according to own share parameters, and trains the local parameters by utilizing a local data set to obtain updated local parameters; sharing the updated local parameters by using the addition secret sharing protocol and uploading the updated local parameters to a corresponding server;
S300, each server receives its corresponding share of the encrypted updated local parameters, performs secure dimension-reduction processing on the encrypted updated local parameters by using the RPCA method improved by the SecQR protocol and the SecEigen protocol to obtain a dimension-reduction result, and clusters the dimension-reduction result by using a clustering algorithm to obtain a clustering result; and cooperatively updates the global model with the other servers according to the clustering result;
The method for performing secure dimension reduction processing on the encrypted updated local parameters by using the RPCA method modified by SecQR protocol and SecEigen protocol to obtain dimension reduction results comprises the following steps:
S310, taking an input matrix M1 formed by the encrypted uploaded updated local parameters and a random projection matrix Q1 ∈ Q n×ρ as inputs; where ρ = k+α, k is the desired number of principal components, and α is the oversampling parameter;
s320, carrying out mean value centering on each column of the input matrix M1 to eliminate offset among columns and obtain a centered input matrix M2;
S330, multiplying the projection matrix Q1 and the covariance matrix M2M2 T once to obtain a multiplication matrix, orthogonalizing the multiplication matrix by using SecQR protocol, taking the orthogonalization result as the projection matrix of the next time, repeating the steps of calculating the multiplication matrix P times and orthogonalizing the multiplication matrix to obtain a recursively optimized projection matrix Q2;
S340, multiplying the centralized input matrix M2 by the optimized projection matrix Q2 to project the input matrix M to a lower dimensional space to obtain a low-dimensional projection matrix Q3;
S350, calculating a small symmetric matrix B by using the low-dimensional projection matrix Q3 and the centralized input matrix M2, wherein the matrix represents the characteristic variance in a low-dimensional space and is obtained by multiplying Q3 and Q3 T on two sides of a covariance matrix M2M2 T;
S360, calculating the secret shares of the eigenvectors of B through the SecEigen protocol;
S370, reconstructing the eigenvector W in the original space by using the eigenvectors of B;
S380, projecting the centralized input matrix M2 onto the eigenvector W to obtain the dimension-reduction result of the input matrix M1;
S330 includes:
S331, multiplying the projection matrix Q1 and the covariance matrix M2M2 T once to obtain a multiplication matrix A n×ρ;
s332, assigning an R matrix in SecQR protocols as the multiplication matrix, and initializing a Q matrix as a corresponding identity matrix or all-zero matrix;
S333, extracting a lower triangular matrix of the multiplication matrix column by column, and calculating a correlation value of each column of the lower triangular matrix;
s334, updating the R matrix and the Q matrix by using the correlation value of each column of the lower triangular matrix;
S335, taking the updated Q matrix as a projection matrix of the next time;
s336, repeating S331 to S335 for P times to obtain a recursively optimized projection matrix Q2;
S333 includes:
S3331, extracting the kth column of the lower triangular matrix of the multiplication matrix column by column;
S3332, calculating the 2-norm and the sign bit of the kth column;
S3333, calculating the Householder vector of each column by using the 2-norm and the sign bit;
S3334, constructing the kth column of the transformation matrix by using the Householder vector and the kth column;
S3335, normalizing the Householder vector by using the 2-norm to obtain a direction adjustment quantity;
S3336, calculating the final direction adjustment quantity by using the direction adjustment quantity and the sign bit;
S3337, taking the final direction adjustment quantity and the kth column of the transformation matrix as the correlation values;
S360 includes:
S361, the two servers negotiate to determine the additive secret shares of a non-zero random number, a random square matrix, and the inverse of the random square matrix;
S362, the two servers cooperatively calculate the similarity matrix of the small symmetric matrix B by using the additive secret shares of the random square matrix and of its inverse;
S363, the two servers blind the similarity matrix by using the additive secret shares of the non-zero random number;
S364, the first server sends its blinded additive secret share of the similarity matrix to the second server;
S365, the second server calculates the eigenvalues tλ_j and the eigenvectors V′ of the similarity matrix from the additive secret share sent by the first server, and sends the eigenvalues and eigenvectors of the similarity matrix back to the first server;
S366, each server substitutes the eigenvalues into an inverse function, and recovers the secret shares of the eigenvalues of the small symmetric matrix B by using the inverse function;
S367, each server multiplies the random square matrix with the eigenvectors of the similarity matrix to obtain the secret shares of the eigenvectors of the small symmetric matrix B.
5. A server, characterized by performing the steps of the server in the user data privacy protection system for Byzantine-robust federated learning according to any one of claims 1 to 3.
6. A client, characterized by performing the steps of the client in the user data privacy protection system for Byzantine-robust federated learning according to any one of claims 1 to 3.
CN202311482298.XA 2023-11-08 2023-11-08 User data privacy protection system and method for Byzantine-robust federated learning Active CN117395067B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311482298.XA CN117395067B (en) 2023-11-08 2023-11-08 User data privacy protection system and method for Byzantine-robust federated learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311482298.XA CN117395067B (en) 2023-11-08 2023-11-08 User data privacy protection system and method for Byzantine-robust federated learning

Publications (2)

Publication Number Publication Date
CN117395067A (en) 2024-01-12
CN117395067B (en) 2024-04-19

Family

ID=89439110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311482298.XA Active CN117395067B (en) User data privacy protection system and method for Byzantine-robust federated learning

Country Status (1)

Country Link
CN (1) CN117395067B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117808082B (en) * 2024-02-29 2024-05-14 华侨大学 Federal learning method, device, equipment and medium for privacy protection against Bayesian attack

Citations (3)

Publication number Priority date Publication date Assignee Title
CN115455471A (en) * 2022-09-05 2022-12-09 深圳大学 Federal recommendation method, device, equipment and storage medium for improving privacy and robustness
CN115660050A (en) * 2022-11-07 2023-01-31 南开大学 Robust federated learning method with efficient privacy protection
CN116644800A (en) * 2023-04-28 2023-08-25 西安电子科技大学 LSTM-based federal learning Bayesian and busy court node detection method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20220292387A1 (en) * 2021-03-09 2022-09-15 International Business Machines Corporation Byzantine-robust federated learning

Non-Patent Citations (1)

Title
Research on resisting Sybil and Byzantine attacks in decentralized federated learning; Xiao Dan; Xidian University; 2022-04-01; Chapter 3 *

Also Published As

Publication number Publication date
CN117395067A (en) 2024-01-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant