CN112199702A - Privacy protection method, storage medium and system based on federal learning - Google Patents
Privacy protection method, storage medium and system based on federal learning
- Publication number
- CN112199702A (application CN202011109363.0A)
- Authority
- CN
- China
- Prior art keywords
- model
- ciphertext
- global
- parameter
- encryption
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/602—Providing cryptographic facilities or services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The invention discloses a privacy protection method, storage medium and system based on federated learning, wherein the method comprises the following steps: encrypting the global model with a parameter encryption algorithm to obtain a ciphertext model; training the ciphertext model on local data, decrypting the resulting ciphertext gradient information and noise terms to obtain the parameter gradient, updating the global model with the parameter gradient, and repeating these steps until the model converges or reaches a specified number of iterations, so as to obtain the model parameters; encrypting the model parameters to obtain encrypted model parameters, and updating the global model with them to obtain a global encryption model; and performing local training on the encrypted global model to realize privacy protection. The invention can effectively prevent semi-trusted federated learning participants from obtaining the real parameters of the global model and the outputs of intermediate models, while ensuring that the participants can obtain real prediction results from the finally trained encryption model.
Description
Technical Field
The invention relates to the field of data protection, and in particular to a privacy protection method, storage medium and system based on federated learning.
Background
With the wide application and development of big-data mining and deep learning, ever more privacy leaks and frequent data-abuse incidents have made attention to data privacy and security a worldwide trend. Especially in distributed machine learning, distributed participants are reluctant to contribute their local training data out of privacy concerns, creating the "data island" phenomenon. To address the difficulty of data privacy protection, break through the practical obstacle of data islands, and meet the urgent demand for joint use of data, the concept of federated learning and industrial application solutions have been proposed. Federated learning is essentially a distributed machine learning framework: under this framework, no raw data is exchanged among the participants; each participant trains the model locally and uploads only updated model parameters or gradients, so that joint modeling by multiple participants can be effectively assisted on the premise of protecting privacy.
Although federated learning does not require participants to upload local training data and can thus protect privacy to some extent, current research shows that an attacker can still use the true gradients or updated model parameters uploaded by each participant to reconstruct original training data and to perform membership inference, attribute inference, and so on. At present, privacy protection research on federated learning almost exclusively considers preventing the central server from extracting participants' private information from model updates, and does not consider malicious participants. That is, malicious participants, or participants compromised by an attacker, can still obtain the true global model updates, and can therefore use the true parameters to infer training data beyond their own local data, or to infer the training data sets of other participants. As pointed out by Kairouz et al., preventing the true model updates of the iterative process and the final model parameters from being acquired by malicious participants is also a problem to be solved in federated learning. Essentially, solving this problem requires the participants to train locally on encrypted or scrambled global model updates. Three mainstream privacy protection technologies, namely differential privacy, homomorphic encryption and secure multiparty computation, are currently widely applied to privacy-preserving machine learning; however, these techniques sacrifice either model accuracy or the efficiency of model training, so privacy-preserving model training remains a difficult point.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a privacy protection method, a storage medium and a system based on federated learning, aiming at the problem that existing approaches cannot effectively protect private data.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a privacy protection method based on federal learning comprises the following steps:
encrypting the global model by adopting a parameter encryption algorithm to obtain a ciphertext model;
training on the ciphertext model by using local data to obtain ciphertext gradient information and a noise item;
decrypting the ciphertext gradient information and the noise terms to obtain a parameter gradient, updating the global model with the parameter gradient, and repeating the above steps until the model converges or reaches a specified number of iterations, so as to obtain the model parameters;
encrypting the model parameters to obtain encrypted model parameters, and updating the global model with the encrypted model parameters to obtain a global encryption model;
and carrying out local training on the encrypted global model to realize privacy protection.
The privacy protection method based on federated learning, wherein the step of encrypting the global model with a parameter encryption algorithm to obtain the ciphertext model comprises the following steps:
when the global model is a multilayer perceptron model with $L$ layers, random number matrices $R^{(l)}$ and $R_a$ are used to encrypt the plaintext model parameters $W^{(l)}$ of the multilayer perceptron model, obtaining the ciphertext model parameters $\widetilde{W}^{(l)} = R^{(l)} \circ W^{(l)}$, wherein $\circ$ denotes the Hadamard (element-wise) product;
the random number matrix $R^{(l)}$ is composed of the multiplicative noise vectors $r^{(l)}$ and $r^{(l-1)}$ as $R^{(l)}_{ij} = r^{(l)}_i / r^{(l-1)}_j$, wherein the subscripts $i$ and $j$ satisfy $i \in [1, n_l]$, $j \in [1, n_{l-1}]$;
the random number matrix $R_a$ is composed of a random number $\gamma$ and the additive noise vector $r_a$, wherein the subscripts $i$ and $j$ satisfy $i \in [1, n_L]$, $j \in [1, n_{L-1}]$;
and the plaintext model parameters in the multilayer perceptron model are replaced with the ciphertext model parameters to obtain the ciphertext model.
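The masking above works because ReLU is positively homogeneous: scaling a neuron's incoming weights by a positive factor scales its activation by the same factor. The pure-Python sketch below checks this on a two-layer perceptron with illustrative values, taking r^(0) and r^(L) as all-ones and omitting the output-layer gamma/additive noise R_a, so it is a simplification of the patented scheme rather than the full algorithm:

```python
def relu(v): return [max(0.0, a) for a in v]

def matvec(W, v):
    return [sum(W[i][j] * v[j] for j in range(len(v))) for i in range(len(W))]

def forward(W1, W2, x):
    pre = matvec(W1, x)        # pre-activations of the hidden layer
    h = relu(pre)
    return pre, h, matvec(W2, h)

def grads(W1, W2, x, t):
    # analytic gradients of L = (y - t)^2 for the 2-layer ReLU net
    pre, h, (y,) = forward(W1, W2, x)
    dy = 2.0 * (y - t)
    gW2 = [[dy * h[j] for j in range(len(h))]]
    gW1 = [[dy * W2[0][i] * (1.0 if pre[i] > 0 else 0.0) * x[j]
            for j in range(len(x))] for i in range(len(pre))]
    return gW1, gW2

# plaintext model and one training sample (illustrative values)
W1 = [[0.5, -0.3], [0.2, 0.8]]
W2 = [[1.0, -0.5]]
x, t = [1.0, 2.0], 0.7

# server: mask with positive multiplicative noise (gamma / r_a omitted)
r1 = [2.0, 0.5]                                      # hidden noise vector r^(1)
R1 = [[r1[i] for j in range(2)] for i in range(2)]   # r^(1)_i / r^(0)_j, r^(0)=1
R2 = [[1.0 / r1[j] for j in range(2)]]               # r^(2)_i / r^(1)_j, r^(2)=1
W1c = [[R1[i][j] * W1[i][j] for j in range(2)] for i in range(2)]
W2c = [[R2[0][j] * W2[0][j] for j in range(2)]]

# client: the ciphertext model's final output equals the plaintext output,
# so the client can train without ever seeing W1 or W2
_, _, (y,)  = forward(W1, W2, x)
_, _, (yc,) = forward(W1c, W2c, x)
assert abs(y - yc) < 1e-9

gW1, gW2   = grads(W1, W2, x, t)      # real gradients (never seen by client)
gW1c, gW2c = grads(W1c, W2c, x, t)    # ciphertext gradients sent to server

# server: unmask by re-applying the same random matrices element-wise
rec1 = [[R1[i][j] * gW1c[i][j] for j in range(2)] for i in range(2)]
rec2 = [[R2[0][j] * gW2c[0][j] for j in range(2)]]
assert all(abs(rec1[i][j] - gW1[i][j]) < 1e-9 for i in range(2) for j in range(2))
assert all(abs(rec2[0][j] - gW2[0][j]) < 1e-9 for j in range(2))
print("recovered gradients match:", rec1, rec2)
```

The client only ever handles scrambled weights and scrambled gradients; the server recovers the true gradients exactly, so no accuracy is lost to the masking.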
The privacy protection method based on federated learning, wherein the step of encrypting the global model with a parameter encryption algorithm to obtain the ciphertext model comprises the following steps:
when the global model is a convolutional neural network model with $L$ layers, a random tensor $R^{(l)}$ and a random matrix $R^{(L)}$ are used to encrypt the plaintext model parameters of the convolutional neural network model, obtaining the corresponding ciphertext model parameters $\widetilde{W}^{(l)} = R^{(l)} \circ W^{(l)}$; when $1 \le l \le L-1$, the parameter $W^{(l)}$ is the convolution kernel tensor, and the random tensor $R^{(l)}$ is composed of the multiplicative noise vectors $r^{(l)}$ and $r^{(l,\mathrm{in})}$;
wherein $r^{(l,\mathrm{in})} = (r^{(m)})_{m \in P(l)}$ is spliced from the vectors $r^{(m)}$, $m \in P(l)$, and $P(l)$ denotes the index set of all network layers connected to the $l$-th convolutional layer;
the random matrix $R^{(L)}$ is composed of the multiplicative noise vector $r^{(L-1)}$;
and the plaintext model parameters in the convolutional neural network model are replaced with the ciphertext model parameters to obtain the ciphertext model.
The privacy protection method based on federated learning, wherein the step of training on the ciphertext model with local data to obtain the ciphertext gradient information and the noise terms comprises the following steps:
computing the output of the ciphertext model, wherein the output of the ciphertext model and the output of the corresponding plaintext model satisfy a fixed relational expression determined by the noise vectors;
for a sample $(x, y)$ of arbitrary dimension, the mean square error between the prediction $\widetilde{y}^{(L)}$ of the ciphertext model and the true value is expressed as the loss function $\widetilde{F}(x) = \frac{1}{n_L} \sum_{i=1}^{n_L} \big(\widetilde{y}^{(L)}_i - y_i\big)^2$,
wherein $n_L$ denotes the dimension of the model output layer and of the sample label;
the gradient of the loss function $\widetilde{F}$ with respect to the ciphertext parameters $\widetilde{W}^{(l)}$ (the noisy gradient) and the corresponding real gradient satisfy a fixed relation determined by the noise vectors $r^{(l)}$, $\gamma$ and $r_a$;
the $k$-th participant computes the ciphertext gradient information over all samples of its local mini-batches of data and, combined with the additive noise vector $r_a$, computes the noise terms to be sent to the server.
The privacy protection method based on federated learning, wherein the step of training on the ciphertext model with local data to obtain the ciphertext gradient information and the noise terms comprises the following steps:
the convolutional layer outputs of the ciphertext model and the corresponding real convolutional layer outputs satisfy a fixed relation determined by the multiplicative noise vectors; and the fully connected layer output of the ciphertext model and the corresponding real output satisfy a relation involving a pseudo output statistic, wherein the function $\mathrm{Flatten}(\cdot)$ unrolls a multi-dimensional tensor into a one-dimensional vector whose dimension is $n_{L-1} = c_{L-1} h_{L-1} w_{L-1}$, and the parameter $r = \gamma\, r_a$ is the combined noise vector;
for a sample $(x, y)$ of arbitrary dimension, the mean square error between the prediction $\widetilde{y}^{(L)}$ of the ciphertext model and the true value is expressed as the loss function $\widetilde{F}(x) = \frac{1}{n_L} \sum_{i=1}^{n_L} \big(\widetilde{y}^{(L)}_i - y_i\big)^2$,
wherein $n_L$ denotes the dimension of the model output layer and of the sample label;
the gradient of the loss function $\widetilde{F}$ with respect to the ciphertext parameters $\widetilde{W}^{(l)}$ (the noisy gradient) and the corresponding real gradient satisfy a fixed relation determined by the noise vectors; the $k$-th participant computes the ciphertext gradient information over all samples of its local mini-batches of data and, combined with the additive noise vector $r_a$, computes the noise terms to be sent to the server.
The privacy protection method based on federated learning, wherein the step of decrypting the ciphertext gradient information and the noise terms to obtain the parameter gradient, and updating the global model with the parameter gradient until the model converges or reaches a specified number of iterations to obtain the model parameters, comprises the following steps:
for the global model $W_t$ of the $t$-th round, the parameter gradient obtained during the local training of the $k$-th participant is recovered as $g^t_k = \nabla F_k(W_t)$, wherein $F_k(\cdot)$ denotes the loss function of the $k$-th participant;
the global model is updated with the parameter gradients until the model converges or reaches the specified number of iterations, giving the model parameters of round $t+1$ as
$W_{t+1} = W_t - \eta \sum_k \frac{N_k}{N} g^t_k$,
wherein $\eta$ denotes the learning rate and $N_k/N$ denotes the ratio of the $k$-th participant's data volume to the total data volume.
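The update rule above is the standard weighted (FedAvg-style) aggregation; a minimal sketch with made-up gradient values and data volumes:

```python
def fedavg_step(w, grads, sizes, lr):
    """One server-side round: w <- w - lr * sum_k (N_k / N) * g_k."""
    total = sum(sizes)
    return [w[i] - lr * sum(n / total * g[i] for g, n in zip(grads, sizes))
            for i in range(len(w))]

w_t = [1.0, -2.0]                    # current global model W_t
grads = [[0.5, 0.1], [0.1, -0.3]]    # decrypted parameter gradients g_k^t
sizes = [300, 100]                   # N_k: per-participant data volumes
w_next = fedavg_step(w_t, grads, sizes, lr=0.1)
print(w_next)
```

With these numbers the first participant's gradient carries weight 300/400 = 0.75 and the second's 0.25, so the second coordinate's contributions cancel.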
The privacy protection method based on federated learning, wherein the step of encrypting the model parameters to obtain the encrypted model parameters and updating the global model with them to obtain the global encryption model comprises the following steps:
updating the global model with the encrypted model parameters $\widetilde{W}$ to obtain the global encryption model.
A computer readable storage medium having one or more programs stored thereon that are executable by one or more processors to perform the steps of the federated learning-based privacy protection method of the present invention.
A privacy preserving system based on federal learning, comprising: the server side is used for encrypting the global model by adopting a parameter encryption algorithm to obtain a ciphertext model;
the client is used for training the ciphertext model by using local data to obtain ciphertext gradient information and a noise item;
the server is further used for decrypting the ciphertext gradient information and the noise item to obtain a parameter gradient, updating the global model by adopting the parameter gradient, and repeating the steps until the model converges or reaches a specified iteration number to obtain a model parameter; encrypting the model parameters to obtain encryption model parameters, and updating the global model by adopting the encryption model parameters to obtain a global encryption model;
and the client is also used for carrying out local training on the encrypted global model to realize privacy protection.
Beneficial effects: compared with the prior art, the privacy protection method based on federated learning provided by the invention encrypts the global model of federated learning with a privacy protection algorithm to obtain a global encryption model, and lets the participants perform local training on the global encryption model. The method effectively prevents semi-trusted federated learning participants from obtaining the real parameters of the global model and the outputs of intermediate models, while ensuring that every participant can obtain real prediction results from the finally trained encryption model.
Drawings
Fig. 1 is a flowchart of a privacy protection method based on federal learning according to a preferred embodiment of the present invention.
Fig. 2 is a schematic diagram of a privacy protection system based on federal learning according to the present invention.
Detailed Description
The invention provides a privacy protection method, a storage medium and a system based on federated learning. In order to make the purpose, technical scheme and effects of the invention clearer, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative and are not intended to limit the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any combination of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The invention will be further explained by the description of the embodiments with reference to the drawings.
Federated learning is a new foundational artificial intelligence technology. It was proposed by Google in 2016, originally to solve the problem of locally updating models for Android phone end users; its design goal is to carry out efficient machine learning among multiple parties or computing nodes while guaranteeing information security during big-data exchange, protecting terminal data and personal privacy, and ensuring legal compliance. The machine learning algorithms usable in federated learning are not limited to neural networks and also include important algorithms such as random forests. The system architecture of federated learning is presented here with a scenario involving two data owners (enterprises A and B); the framework is extensible to scenarios with multiple data owners. Suppose enterprises A and B want to jointly train a machine learning model, and their business systems each hold the relevant data of their respective users; in addition, enterprise B also holds the label data the model needs to predict. Owing to data privacy and security considerations, A and B cannot exchange data directly, so a federated learning system can be used to build the model. The federated learning system framework consists of three parts:
Part one: encrypted sample alignment. Since the user groups of the two enterprises do not fully overlap, the system uses an encryption-based user-sample alignment technique to identify the common users of both enterprises without A or B disclosing their respective data, and without exposing the non-overlapping users, so that modeling can proceed jointly over the features of these common users. Part two: encrypted model training. After the common user population is determined, the machine learning model can be trained on these data. To guarantee the confidentiality of the data during training, a third-party collaborator C is used for encrypted training. Taking a linear regression model as an example, the training process can be divided into four steps. Step 1: collaborator C distributes a public key to A and B for encrypting the data to be exchanged during training. Step 2: A and B exchange, in encrypted form, the intermediate results needed to compute the gradients. Step 3: A and B each compute their encrypted gradient values; B additionally computes the loss from its label data; both summarize the results to C, which computes the total gradient from the summarized results and decrypts it. Step 4: C sends the decrypted gradients back to A and B respectively, and A and B update their respective model parameters with the gradients. These steps are iterated until the loss function converges, completing the whole training process. During sample alignment and model training, the data of A and B are kept locally, and no data privacy is leaked through the data interaction of the training process.
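The four steps above rely on public-key (additively homomorphic) encryption distributed by collaborator C. As a simplified stand-in that shows the same end property, namely that the aggregator only ever learns the sum of gradients and not either addend, the sketch below uses pairwise additive masking with hypothetical gradient values; it is not the Paillier-style scheme described in the text:

```python
import random

def masked_shares(grad_a, grad_b, rng):
    # A and B agree on a pairwise random mask; it cancels in the sum,
    # so the aggregator learns only grad_a + grad_b, not either addend.
    mask = [rng.uniform(-10, 10) for _ in grad_a]
    share_a = [g + m for g, m in zip(grad_a, mask)]
    share_b = [g - m for g, m in zip(grad_b, mask)]
    return share_a, share_b

rng = random.Random(0)
grad_a, grad_b = [0.2, -0.4], [0.1, 0.3]     # made-up local gradients of A and B
sa, sb = masked_shares(grad_a, grad_b, rng)
total = [u + v for u, v in zip(sa, sb)]       # what the aggregator computes
print(total)  # equals the true aggregate gradient; sa and sb alone look random
```

Either share in isolation is uniformly scrambled, which is the same privacy property the encrypted exchange in steps 1-4 provides against collaborator C.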
Thus, both parties are enabled to train the model collaboratively with the help of federated learning. Part three: effect incentive. A major feature of federated learning is that it addresses why different institutions should join the federation for joint modeling: after modeling, the model's effect is demonstrated in practical applications and recorded on a permanent data-recording mechanism (such as a blockchain). These model effects are distributed to the individual institutions through the federation mechanism as feedback, continuing to encourage more institutions to join the data federation. The implementation of the three parts considers both the privacy protection and the effectiveness of joint modeling among multiple institutions and, through a consensus mechanism, the rewarding of the institutions that contribute more data.
Although federated learning does not require participants to upload local training data and can thus raise the level of privacy protection to some extent, current research shows that an attacker can still use the true gradients or updated model parameters uploaded by each participant to reconstruct original training data and to perform membership inference, attribute inference, and so on. At present, privacy protection research on federated learning almost exclusively considers preventing the central server from extracting participants' private information from model updates, and does not consider malicious participants. That is, malicious participants, or participants compromised by an attacker, can still obtain the true global model updates, and can therefore use the true parameters to infer training data beyond their own local data, or to infer the training data sets of other participants. Therefore, preventing the true model updates of the iterative process and the final model parameters from being acquired by malicious participants is also an urgent problem in federated learning.
In order to solve the problems in the prior art, the invention provides a privacy protection method based on federal learning, as shown in fig. 1, which comprises the following steps:
s10, encrypting the global model by adopting a parameter encryption algorithm to obtain a ciphertext model;
s20, training the ciphertext model by using local data to obtain ciphertext gradient information and a noise item;
s30, decrypting the ciphertext gradient information and the noise item to obtain a parameter gradient, and updating the global model by adopting the parameter gradient;
s40, repeating the steps S10-S30 until the model converges or reaches the specified number of iterations, so as to obtain the model parameters;
s50, encrypting the model parameters to obtain encryption model parameters, and updating the global model by adopting the encryption model parameters to obtain a global encryption model;
and S60, performing local training on the encrypted global model to realize privacy protection.
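Steps S10-S60 can be strung into a minimal round loop. The sketch below uses a toy two-parameter model y = w2 * ReLU(w1 * x) so that a purely multiplicative mask can be removed exactly on the server side; the constants, the single client, and the omission of the scheme's gamma and additive noise are simplifications for illustration only:

```python
import random

def relu(a): return a if a > 0 else 0.0

def client_grads(w1, w2, x, t):
    # gradients of (y - t)^2 for y = w2 * relu(w1 * x)
    pre = w1 * x
    y = w2 * relu(pre)
    dy = 2.0 * (y - t)
    ind = 1.0 if pre > 0 else 0.0
    return dy * w2 * ind * x, dy * relu(pre)   # dL/dw1, dL/dw2

def train(x, t, rounds, lr, rng=None):
    w1, w2 = 1.0, 1.0
    for _ in range(rounds):
        if rng is None:                        # plaintext baseline
            g1, g2 = client_grads(w1, w2, x, t)
        else:
            r = rng.uniform(0.5, 2.0)          # S10: encrypt with mask r > 0
            g1c, g2c = client_grads(r * w1, w2 / r, x, t)   # S20: client side
            g1, g2 = r * g1c, g2c / r          # S30: server decrypts gradients
        w1, w2 = w1 - lr * g1, w2 - lr * g2    # S30/S40: update and repeat
    return w1, w2                              # S50/S60 would re-encrypt these

x, t = 1.0, 2.0
plain = train(x, t, rounds=100, lr=0.05)
fed   = train(x, t, rounds=100, lr=0.05, rng=random.Random(1))
print(plain, fed)   # identical trajectories: the mask cancels exactly
```

Because the mask is removed exactly each round, the masked run converges to the same parameters as plain gradient descent, illustrating why this style of encryption costs no model accuracy.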
Specifically, although differential privacy can guarantee model efficiency, the random noise it introduces cannot be eliminated, so model accuracy suffers considerably, and there is a trade-off between model accuracy and the level of privacy protection: the higher the privacy protection level, the larger the random noise that must be added, and the worse the model accuracy becomes. To solve this problem, this embodiment provides an efficient privacy protection method based on the idea of differential privacy: in each iteration of the training process, before the server distributes the global model parameters, it selects additive and multiplicative random numbers satisfying certain conditions as private keys, and multiplies or adds them to the global model according to the design requirements to form the encrypted global model distributed to each participant; the participants then train on the encrypted global model with their own local data. The privacy protection method of this embodiment allows the server to exactly eliminate the influence of the random numbers and restore the true global model, so model accuracy is preserved.
In this embodiment, the global model is encrypted by the proposed efficient privacy protection method, so that the participants of federated learning can only train on the global encryption model and cannot obtain the real model parameters, which guarantees the privacy of the global model. That is to say, the privacy protection method based on federated learning provided in this embodiment can effectively prevent semi-trusted federated learning participants from obtaining the real parameters of the global model and the outputs of intermediate models, while ensuring that every participant can obtain real prediction results from the finally trained encryption model.
In some specific embodiments, the privacy protection method provided by the invention can be applied to scenarios with sensitive private data, such as hospitals (medical image data) and banks (credit-card transaction records), where the organizations jointly train a global model without revealing data privacy. Taking a bank credit-card fraud detection scenario as an example, the banks want to train, without revealing data privacy, a global model capable of deciding from the information of a single credit-card transaction whether that transaction is fraudulent. After receiving the ciphertext model, each banking institution trains on it using its local credit-card transaction records and manually labeled tags (fraudulent or not), obtains the ciphertext gradient information and noise terms, and sends them to the server; the server decrypts them to obtain the parameter gradient and updates the global model with it; these steps are repeated until the model converges or reaches the specified number of iterations, yielding the model parameters; the model parameters are then encrypted and used to update the global model, producing the global encryption model; finally, the server distributes the global encryption model to the clients (the banking institutions), which can perform local training on it while privacy is protected.
In some embodiments, the privacy protection method is applicable to two deep learning models, the multilayer perceptron (MLP) and the convolutional neural network (CNN), and supports the ReLU activation function and the MSE loss function; it can also be applied effectively to variant convolutional neural network models with skip connections, such as the widely used ResNet and DenseNet networks.
The invention comprises four stages corresponding to the training process of horizontal federated learning: global model encryption, local model training, global model update, and final model distribution. The first three stages are executed repeatedly in sequence until the model converges or reaches the specified number of cycles, after which the fourth stage is executed and the process ends.
In some embodiments, the "global model encryption" stage is performed by the server side of the federated learning framework: the server encrypts or scrambles the global model (the "plaintext model") with the parameter encryption algorithm, and then sends the encrypted model (the "ciphertext model") and auxiliary random information to the clients. The global model can be a multilayer perceptron model or a convolutional neural network model; the parameter encryption algorithm is described below for each of the two models.
In a specific embodiment, when the global model is a multilayer perceptron model with $L$ layers, the model consists of an arbitrary number of fully connected layers and uses ReLU as the activation function. Consider such a model in which layer $l$ has $n_l$ neurons and parameter matrix $W^{(l)} \in \mathbb{R}^{n_l \times n_{l-1}}$; its output can be expressed as $y^{(l)} = \mathrm{ReLU}\big(W^{(l)} y^{(l-1)}\big)$, $1 \le l \le L$.
In particular, when $l = 0$, $y^{(0)}$ represents the model input $x = (x_1, x_2, \ldots, x_d)^T$, and $n_0 = d$ denotes the dimension of the input data.
For the $L$-layer multilayer perceptron model, the server uses the random number matrices $R^{(l)}$ and $R_a$ to encrypt the plaintext model parameters $W^{(l)}$; the ciphertext model parameters are computed as $\widetilde{W}^{(l)} = R^{(l)} \circ W^{(l)}$, where $\circ$ denotes the Hadamard product.
In particular, the random number matrix $R^{(l)}$ is composed of the multiplicative noise vectors $r^{(l)}$ and $r^{(l-1)}$ as $R^{(l)}_{ij} = r^{(l)}_i / r^{(l-1)}_j$, where the subscripts $i$ and $j$ satisfy $i \in [1, n_l]$, $j \in [1, n_{l-1}]$. The random number matrix $R_a$ is composed of a random number $\gamma$ and the additive noise vector $r_a$, where the subscripts $i$ and $j$ satisfy $i \in [1, n_L]$, $j \in [1, n_{L-1}]$. The plaintext model parameters in the multilayer perceptron model are replaced with the ciphertext model parameters to obtain the ciphertext model.
In this embodiment, the server first completes the model encryption according to the above requirements, and then sends the ciphertext model parameters $\widetilde{W}^{(l)}$ together with the additive noise vector $r_a$ to the participants of the federated learning.
In another embodiment, when the global model is a convolutional neural network model with $L$ layers, the model consists of an arbitrary number of alternately connected convolutional layers and max-pooling layers, ending with a fully connected layer for the regression or classification task. The convolutional model uses ReLU as the activation function and allows a "splice-type skip connection" structure between layers. A convolutional layer takes three-dimensional data as input, takes several three-dimensional convolution kernels as parameters, and outputs a feature map after convolution. For three-dimensional input data $y^{(l)} \in \mathbb{R}^{c_l \times h_l \times w_l}$ with an arbitrary number of channels $c_l$, height $h_l$ and width $w_l$, the three-dimensional convolution operation is defined by $y^{(l+1)} = \mathrm{ReLU}\big(W^{(l)} * y^{(l)}\big)$, where $W^{(l)} \in \mathbb{R}^{c_{l+1} \times c_l \times f \times f}$ is the tensor composed of $c_{l+1}$ convolution kernels of size $c_l \times f \times f$, and $y^{(l+1)}$ is the output feature map.
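A naive pure-Python version of this convolution (in "valid" mode, without the ReLU, and with made-up data) illustrates the shapes involved; the function name and values are for illustration only:

```python
def conv3d(kernels, x, f):
    # x: [c_in][h][w]; kernels: [c_out][c_in][f][f]; 'valid' convolution
    c_in, h, w = len(x), len(x[0]), len(x[0][0])
    out = []
    for K in kernels:                      # one feature map per output channel
        fm = [[sum(K[c][a][b] * x[c][i + a][j + b]
                   for c in range(c_in) for a in range(f) for b in range(f))
               for j in range(w - f + 1)] for i in range(h - f + 1)]
        out.append(fm)
    return out

x = [[[1.0, 2.0, 3.0],
      [4.0, 5.0, 6.0],
      [7.0, 8.0, 9.0]]]                 # c_l = 1 channel, h_l = w_l = 3
kernels = [[[[1.0, 1.0], [1.0, 1.0]]]]  # c_{l+1} = 1 kernel of size 1x2x2
print(conv3d(kernels, x, 2))            # 2x2 feature map of 2x2 window sums
```

With one all-ones 2x2 kernel, each output entry is the sum of the corresponding 2x2 input window, and the output size shrinks from 3x3 to 2x2 as expected for a valid convolution.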
For an L-layer convolutional network model, the server side uses random tensors R^(l) and a random matrix R^(L) to encrypt the plaintext model parameters of the convolutional neural network model, obtaining the corresponding ciphertext model parameters Ŵ^(l) = R^(l) ∘ W^(l). When 1 ≤ l ≤ L−1, the parameter W^(l) is a convolution kernel tensor, and the random tensor R^(l) is composed of the multiplicative noise vectors r^(l) and r^(l,in), satisfying R^(l)_{c',c,p,q} = r^(l)_{c'} / r^(l,in)_c, so that the kernels are scaled channel-wise;
where r^(l,in) = (r^(m))_{m∈P(l)} is the concatenation of the vectors r^(m) for m ∈ P(l), and P(l) denotes the index set of all network layers connected to the l-th convolutional layer; this modification adapts the encryption to the concatenation-type skip-connection structure;
the random matrix R^(L) is composed of the multiplicative noise vector r^(L−1) and satisfies R^(L)_{ij} = γ / r^(L−1)_j, with r^(L−1) broadcast over the flattened feature-map dimension;
the random matrix R_a is composed of the additive noise vector r_a and the random number γ, and satisfies (R_a)_{ij} = γ·r_{a,i}. Replacing the plaintext model parameters in the convolutional neural network model with the ciphertext model parameters yields the ciphertext model.
In this embodiment, the server side first completes the model encryption as described above, and then sends the ciphertext model parameters together with the additive noise vector r_a to the participants (clients) of federated learning.
In some embodiments, the second phase, "local model training", is completed by the client. After each participant receives the ciphertext model and the additive noise vector, it trains the ciphertext model using its own local data. Training consists mainly of two stages: forward propagation and backward propagation. Finally, the ciphertext gradient information obtained by local computation and the extra noise terms are sent to the server side.
Forward propagation stage:
When the global model is an L-layer multilayer perceptron model, after receiving the ciphertext model, a participant of federated learning computes the noisy output ỹ^(L).
The output of the ciphertext model and the output of the corresponding plaintext model satisfy the relation ỹ^(L) = γ·y^(L) + α·r, where r = γ·r_a and α is a pseudo output statistic computed from the ciphertext hidden-layer output. For a participant of federated learning, the random noise vectors r^(l) and the random number γ are unknown.
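The output relation ỹ^(L) = γ·y^(L) + α·r can be checked numerically. The sketch below builds a small plaintext MLP and encrypts it with the stage-1 rule as we read it (hidden layers get R^(l)_ij = r^(l)_i / r^(l−1)_j, the last layer gets R^(L)_ij = γ / r^(L−1)_j plus (R_a)_ij = γ·r_a[i]; the layer sizes and noise ranges are illustrative assumptions), taking α as the sum of the ciphertext hidden-layer output:

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda z: np.maximum(z, 0.0)

n = [4, 6, 5, 3]                                  # n_0 .. n_L with L = 3
W = [rng.normal(size=(n[l + 1], n[l])) for l in range(3)]

# Server-side secrets: positive multiplicative noise, gamma, additive noise
r1 = rng.uniform(0.5, 2.0, size=n[1])
r2 = rng.uniform(0.5, 2.0, size=n[2])
gamma = 1.7
r_a = rng.normal(size=n[3])

# Ciphertext weights (r^(0) is all ones)
Wc1 = r1[:, None] * W[0]                          # R^(1)_ij = r1_i
Wc2 = np.outer(r2, 1.0 / r1) * W[1]               # R^(2)_ij = r2_i / r1_j
Wc3 = (gamma / r2)[None, :] * W[2] + gamma * r_a[:, None]

x = rng.normal(size=n[0])
y = W[2] @ relu(W[1] @ relu(W[0] @ x))            # true output
h_c = relu(Wc2 @ relu(Wc1 @ x))                   # ciphertext hidden output
y_c = Wc3 @ h_c                                   # noisy output

alpha = h_c.sum()                                 # pseudo output statistic
r = gamma * r_a                                   # combined noise vector
# y_c equals gamma * y + alpha * r
```

Note that both α and the ciphertext forward pass use only quantities available to the participant, while recovering y requires γ and r_a, which stay on the server.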
When the global model is an L-layer convolutional neural network model, the situation is similar to the multilayer perceptron: the convolutional-layer output of the ciphertext model and the corresponding true convolutional-layer output differ only by the channel-wise multiplicative noise r^(l),
and the fully-connected-layer output of the ciphertext model and the corresponding true output satisfy ỹ^(L) = γ·y^(L) + α·r,
where α is a pseudo output statistic, and the function Flatten(·) unfolds a multi-dimensional tensor into a one-dimensional vector whose dimension is n_{L−1} = c_{L−1}·h_{L−1}·w_{L−1}. The parameter r = γ·r_a is the combined noise vector. As in the encryption mechanism of the multilayer perceptron model, the random noise vectors r^(l) and the random number γ are unknown to the participants of federated learning, so the encryption mechanism for the convolutional model strictly guarantees the security of the global model.
Back propagation stage:
The back propagation process applies to both the multilayer perceptron model and the convolutional neural network model, using the mean square error (MSE) as the loss function. For a sample of arbitrary dimension, the mean square error between the prediction ỹ of the ciphertext model and the true value serves as the loss.
Here n_L represents the dimension of the model output layer, which is also the dimension of the sample label. The parameters α and r are, respectively, the pseudo output statistic and the combined noise vector introduced in stage 1. For the parameter encryption algorithm introduced in the present invention, the gradient of the loss function with respect to the ciphertext parameters Ŵ^(l) (the noisy gradient) and the corresponding true gradient satisfy a fixed relation involving v = r^T·r and two noise terms σ^(l) and β^(l). Notably, the parameters σ^(l) and β^(l) are computed locally and independently by the participants.
Specifically, the k-th participant computes gradients over all samples of its local mini-batch of data and, combined with the additive noise vector r_a provided by the server, computes the noise terms σ^(l) and β^(l). Finally, the three kinds of information (the ciphertext gradients and the two noise terms) are sent to the server side together.
In some embodiments, the "global model update" phase is completed by the server side of federated learning. After receiving the ciphertext gradient information and the noise terms sent by all participants, the server decrypts the true parameter gradients using the selected private key, and finally updates the global model with the aggregated parameter gradients. Specifically, for the global model W_t of round t, the true gradient obtained during local training by the k-th participant can be recovered as ∇F_k(W_t), where F_k(·) represents the loss function of the k-th participant. After decrypting the true gradients, the server updates to obtain the global model W_{t+1} of round t+1:
W_{t+1} = W_t − η·Σ_k (N_k/N)·∇F_k(W_t), where η represents the learning rate and N_k/N represents the ratio of the k-th participant's data volume to the total data volume.
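This update is a weighted aggregation of the decrypted participant gradients. The following is a minimal sketch (the function name and the toy gradients are illustrative assumptions, not part of the patent):

```python
import numpy as np

def global_update(W_t, grads, sizes, eta):
    """One aggregation round: W_{t+1} = W_t - eta * sum_k (N_k / N) * g_k.

    grads[k] is the decrypted parameter gradient from participant k,
    sizes[k] its local sample count N_k (illustrative sketch)."""
    N = sum(sizes)
    agg = sum((n_k / N) * g_k for g_k, n_k in zip(grads, sizes))
    return W_t - eta * agg

# Two participants with data volumes 100 and 300; the second participant's
# gradient therefore carries three times the weight of the first.
W_t = np.zeros((3, 3))
grads = [np.ones((3, 3)), 3 * np.ones((3, 3))]
W_next = global_update(W_t, grads, sizes=[100, 300], eta=0.1)
```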
In some embodiments, the "final model distribution" phase is completed by the server side of federated learning. After the server and the clients alternately execute stages 1 to 3 until the model converges or reaches the specified number of iterations, the server obtains the final model parameters. To protect the model parameters while ensuring that the participants can obtain correct inference results, the server still encrypts the global model before distributing it. Unlike in the training stage, the server does not select the additive noise R_a and selects only multiplicative noise, which guarantees that the output of the ciphertext model is identical to the true output. Without loss of generality, the server side encrypts the model parameters with the multiplicative rule of stage 1, choosing the last-layer noise so that R^(L)_{ij} = 1 / r^(L−1)_j.
The above applies to both the multilayer perceptron model and the convolutional neural network model; the parameters W^(l) and the noise R^(l) take exactly the same form as described in stage 1. Finally, the server side distributes the encrypted global model to all participants, and the participants use the encrypted global model locally, realizing privacy protection.
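That the multiplicative-only encryption leaves the output unchanged can be verified numerically. In this sketch (a two-layer toy model with illustrative sizes), the last layer is scaled by 1/r_j so the hidden-layer noise cancels exactly under ReLU:

```python
import numpy as np

rng = np.random.default_rng(2)
relu = lambda z: np.maximum(z, 0.0)

# Final model: multiplicative noise only; the last layer is divided by r
# so the noise introduced in the hidden layer cancels in the output.
W1, W2 = rng.normal(size=(5, 4)), rng.normal(size=(3, 5))
r = rng.uniform(0.5, 2.0, size=5)      # kept secret by the server
Wc1 = r[:, None] * W1                  # R^(1)_ij = r_i
Wc2 = (1.0 / r)[None, :] * W2          # R^(2)_ij = 1 / r_j

x = rng.normal(size=4)
y_true = W2 @ relu(W1 @ x)
y_enc = Wc2 @ relu(Wc1 @ x)            # identical to y_true
```

A participant running the encrypted model therefore obtains the real prediction without ever seeing the real parameters W1 and W2.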
In some embodiments, a computer-readable storage medium is also provided, which stores one or more programs executable by one or more processors to implement the steps in the privacy protection method based on federated learning of the present invention.
In some embodiments, a privacy protection system based on the federated learning method is further provided, as shown in fig. 2, comprising a server 10 and a client 20, where the server 10 is configured to encrypt the global model with a parameter encryption algorithm to obtain a ciphertext model;
the client 20 is configured to train on the ciphertext model by using local data to obtain ciphertext gradient information and a noise item;
the server 10 is further configured to decrypt the ciphertext gradient information and the noise item to obtain a parameter gradient, update the global model by using the parameter gradient, and loop the above steps until the model converges or reaches a specified iteration number to obtain a model parameter; encrypting the model parameters to obtain encryption model parameters, and updating the global model by adopting the encryption model parameters to obtain a global encryption model;
the client 20 is further configured to perform local training on the encrypted global model to achieve privacy protection.
In summary, the present invention solves the problem of realizing nonlinear activation functions such as ReLU in the ciphertext domain, thereby allowing a client to train a multilayer perceptron model or a convolutional neural network model, or to perform local prediction, in the encrypted domain without knowing the real updates or parameters. Semi-trusted federated-learning participants are thus effectively prevented from obtaining the real parameters of the global model and the outputs of intermediate models, while all participants are still guaranteed to obtain real prediction results with the finally distributed encrypted model. While providing privacy protection, the server can eliminate the random numbers to recover the real global model parameters, and the participants can obtain real predictions with the encrypted model, so the accuracy of both the model and its predictions is preserved. The extra cost of the invention arises mainly in back propagation: besides the gradients, the participants also compute and send two extra noise terms to the server side. Compared with plaintext model training, the upper bounds of the additional computation and communication costs are about 2T and 2C, respectively (where T is the cost of back propagation in plaintext training and C is the size of the model parameters), which ensures the efficiency and usability of the method in practice.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (9)
1. A privacy protection method based on federated learning, characterized by comprising the following steps:
encrypting the global model by adopting a parameter encryption algorithm to obtain a ciphertext model;
training on the ciphertext model by using local data to obtain ciphertext gradient information and a noise item;
decrypting the ciphertext gradient information and the noise item to obtain a parameter gradient, updating the global model by adopting the parameter gradient, and circulating the steps until the model converges or reaches a specified iteration number to obtain a model parameter;
encrypting the model parameters to obtain encryption model parameters, and updating the global model by adopting the encryption model parameters to obtain a global encryption model;
and carrying out local training on the encrypted global model to realize privacy protection.
2. The privacy protection method based on federated learning according to claim 1, wherein the step of encrypting the global model with a parameter encryption algorithm to obtain a ciphertext model comprises:
when the global model is an L-layer multilayer perceptron model, using random number matrices R^(l) and R_a to encrypt the plaintext model parameters W^(l) in the multilayer perceptron model, obtaining the ciphertext model parameters Ŵ^(l) = R^(l) ∘ W^(l) for 1 ≤ l ≤ L−1 and Ŵ^(L) = R^(L) ∘ W^(L) + R_a, wherein ∘ represents the Hadamard product operation;
the random number matrix R^(l) is composed of the multiplicative noise vectors r^(l) and r^(l−1): R^(l)_{ij} = r^(l)_i / r^(l−1)_j, where i ∈ [1, n_l], j ∈ [1, n_{l−1}];
the random number matrix R_a is composed of a random number γ and an additive noise vector r_a: (R_a)_{ij} = γ·r_{a,i}, wherein the subscripts satisfy i ∈ [1, n_L], j ∈ [1, n_{L−1}];
And replacing the plaintext model parameters in the multilayer perceptron model with the ciphertext model parameters to obtain a ciphertext model.
3. The privacy protection method based on federated learning according to claim 1, wherein the step of encrypting the global model with a parameter encryption algorithm to obtain a ciphertext model comprises:
when the global model is an L-layer convolutional neural network model, using a random tensor R^(l) and a random matrix R^(L) to encrypt the plaintext model parameters of the convolutional neural network model, obtaining the corresponding ciphertext model parameters; when 1 ≤ l ≤ L−1, the parameter W^(l) is a convolution kernel tensor, and the random tensor R^(l) is composed of the multiplicative noise vectors r^(l) and r^(l,in) and satisfies R^(l)_{c',c,p,q} = r^(l)_{c'} / r^(l,in)_c;
wherein r^(l,in) = (r^(m))_{m∈P(l)} is the concatenation of the vectors r^(m) for m ∈ P(l), and P(l) represents the index set of all network layers connected to the l-th convolutional layer;
the random matrix R^(L) is composed of the multiplicative noise vector r^(L−1) and satisfies R^(L)_{ij} = γ / r^(L−1)_j;
and replacing the plaintext model parameters in the convolutional neural network model with the ciphertext model parameters to obtain a ciphertext model.
4. The privacy protection method based on federated learning according to claim 2, wherein the step of training on the ciphertext model using local data to obtain ciphertext gradient information and noise terms comprises:
computing the output of the ciphertext model, wherein the output of the ciphertext model and the output of the corresponding plaintext model satisfy the relation ỹ^(L) = γ·y^(L) + α·r;
for a sample of arbitrary dimension, expressing the mean square error between the prediction ỹ of the ciphertext model and the true value as the loss function;
wherein n_L represents the dimension of the model output layer and also the dimension of the sample label;
the gradient of the loss function with respect to the ciphertext parameters Ŵ^(l) (the noisy gradient) and the corresponding real gradient satisfy a relation involving v = r^T·r and the noise terms σ^(l) and β^(l).
5. The privacy protection method based on federated learning according to claim 3, wherein the step of training on the ciphertext model using local data to obtain ciphertext gradient information and noise terms comprises:
the convolutional-layer output of the ciphertext model and the corresponding real convolutional-layer output differ only by the channel-wise multiplicative noise r^(l), and the fully-connected-layer output of the ciphertext model and the corresponding real output satisfy ỹ^(L) = γ·y^(L) + α·r, wherein α is a pseudo output statistic, the function Flatten(·) unfolds a multi-dimensional tensor into a one-dimensional vector whose dimension is n_{L−1} = c_{L−1}·h_{L−1}·w_{L−1}, and the parameter r = γ·r_a is the combined noise vector;
for a sample of arbitrary dimension, expressing the mean square error between the prediction ỹ of the ciphertext model and the true value as the loss function;
wherein n_L represents the dimension of the model output layer, which is also the dimension of the sample label;
the gradient of the loss function with respect to the ciphertext parameters Ŵ^(l) (the noisy gradient) and the corresponding real gradient satisfy a relation involving v = r^T·r and the noise terms σ^(l) and β^(l).
6. The privacy protection method based on federated learning according to any one of claims 4-5, wherein the step of decrypting the ciphertext gradient information and the noise terms to obtain parameter gradients, and updating the global model with the parameter gradients until the model converges or reaches a specified number of iterations to obtain the model parameters, comprises:
for the global model W_t of round t, solving the parameter gradient obtained during local training by the k-th participant as ∇F_k(W_t), wherein F_k(·) represents the loss function of the k-th participant;
updating the global model with the parameter gradients until the model converges or reaches the specified number of iterations, obtaining the model parameters W_{t+1} of round t+1 as W_{t+1} = W_t − η·Σ_k (N_k/N)·∇F_k(W_t).
7. The privacy protection method based on federated learning according to claim 6, wherein the step of encrypting the model parameters to obtain encryption model parameters, and updating the global model with the encryption model parameters to obtain a global encryption model comprises:
8. A computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the privacy protection method based on federated learning according to any one of claims 1-7.
9. A privacy protection system based on federated learning, comprising a server side and a client side, wherein the server side is configured to encrypt the global model with a parameter encryption algorithm to obtain a ciphertext model;
the client is used for training the ciphertext model by using local data to obtain ciphertext gradient information and a noise item;
the server is further used for decrypting the ciphertext gradient information and the noise item to obtain a parameter gradient, updating the global model by adopting the parameter gradient, and repeating the steps until the model converges or reaches a specified iteration number to obtain a model parameter; encrypting the model parameters to obtain encryption model parameters, and updating the global model by adopting the encryption model parameters to obtain a global encryption model;
and the client is also used for carrying out local training on the encrypted global model to realize privacy protection.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011109363.0A CN112199702A (en) | 2020-10-16 | 2020-10-16 | Privacy protection method, storage medium and system based on federal learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112199702A true CN112199702A (en) | 2021-01-08 |
Family
ID=74009841
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011109363.0A Pending CN112199702A (en) | 2020-10-16 | 2020-10-16 | Privacy protection method, storage medium and system based on federal learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112199702A (en) |
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112560106A (en) * | 2021-02-20 | 2021-03-26 | 支付宝(杭州)信息技术有限公司 | Method, device and system for processing privacy matrix |
CN112949760A (en) * | 2021-03-30 | 2021-06-11 | 平安科技(深圳)有限公司 | Model precision control method and device based on federal learning and storage medium |
CN113159918A (en) * | 2021-04-09 | 2021-07-23 | 福州大学 | Bank client group mining method based on federal group penetration |
CN113158230A (en) * | 2021-03-16 | 2021-07-23 | 陕西数盾慧安数据科技有限公司 | Online classification method based on differential privacy |
CN113159316A (en) * | 2021-04-08 | 2021-07-23 | 支付宝(杭州)信息技术有限公司 | Model training method, method and device for predicting business |
CN113179244A (en) * | 2021-03-10 | 2021-07-27 | 上海大学 | Federal deep network behavior feature modeling method for industrial internet boundary safety |
CN113191530A (en) * | 2021-04-09 | 2021-07-30 | 汕头大学 | Block link point reliability prediction method and system with privacy protection function |
CN113221144A (en) * | 2021-05-19 | 2021-08-06 | 国网辽宁省电力有限公司电力科学研究院 | Virtualization terminal abnormity detection method and system for privacy protection machine learning |
CN113335490A (en) * | 2021-06-30 | 2021-09-03 | 广船国际有限公司 | Double-wall pipe ventilation system and ship |
CN113362160A (en) * | 2021-06-08 | 2021-09-07 | 南京信息工程大学 | Federal learning method and device for credit card anti-fraud |
CN113378198A (en) * | 2021-06-24 | 2021-09-10 | 深圳市洞见智慧科技有限公司 | Federal training system, method and device for model for protecting user identification |
CN113515760A (en) * | 2021-05-28 | 2021-10-19 | 平安国际智慧城市科技股份有限公司 | Horizontal federal learning method, device, computer equipment and storage medium |
CN113543120A (en) * | 2021-09-17 | 2021-10-22 | 百融云创科技股份有限公司 | Mobile terminal credit anti-fraud estimation method and system based on federal learning |
CN113591133A (en) * | 2021-09-27 | 2021-11-02 | 支付宝(杭州)信息技术有限公司 | Method and device for performing feature processing based on differential privacy |
CN113614726A (en) * | 2021-06-10 | 2021-11-05 | 香港应用科技研究院有限公司 | Dynamic differential privacy for federated learning systems |
CN113688408A (en) * | 2021-08-03 | 2021-11-23 | 华东师范大学 | Maximum information coefficient method based on safe multi-party calculation |
CN113704779A (en) * | 2021-07-16 | 2021-11-26 | 杭州医康慧联科技股份有限公司 | Encrypted distributed machine learning training method |
CN113778966A (en) * | 2021-09-15 | 2021-12-10 | 深圳技术大学 | Cross-school information sharing method and related device for college teaching and course score |
CN113836556A (en) * | 2021-09-26 | 2021-12-24 | 广州大学 | Federal learning-oriented decentralized function encryption privacy protection method and system |
CN114091651A (en) * | 2021-11-03 | 2022-02-25 | 支付宝(杭州)信息技术有限公司 | Method, device and system for multi-party joint training of neural network of graph |
CN114143311A (en) * | 2021-11-03 | 2022-03-04 | 深圳前海微众银行股份有限公司 | Privacy protection scheme aggregation method and device based on block chain |
CN114282652A (en) * | 2021-12-22 | 2022-04-05 | 哈尔滨工业大学 | Privacy-protecting longitudinal deep neural network model construction method, computer and storage medium |
CN114338144A (en) * | 2021-12-27 | 2022-04-12 | 杭州趣链科技有限公司 | Method for preventing data from being leaked, electronic equipment and computer-readable storage medium |
CN115081014A (en) * | 2022-05-31 | 2022-09-20 | 西安翔迅科技有限责任公司 | Target detection label automatic labeling method based on federal learning |
CN115186285A (en) * | 2022-09-09 | 2022-10-14 | 闪捷信息科技有限公司 | Parameter aggregation method and device for federal learning |
CN115278709A (en) * | 2022-07-29 | 2022-11-01 | 南京理工大学 | Communication optimization method based on federal learning |
CN115310121A (en) * | 2022-07-12 | 2022-11-08 | 华中农业大学 | Real-time reinforced federal learning data privacy security method based on MePC-F model in Internet of vehicles |
WO2022257180A1 (en) * | 2021-06-10 | 2022-12-15 | Hong Kong Applied Science and Technology Research Institute Company Limited | Dynamic differential privacy to federated learning systems |
CN115865307A (en) * | 2023-02-27 | 2023-03-28 | 蓝象智联(杭州)科技有限公司 | Data point multiplication operation method for federal learning |
CN116366250A (en) * | 2023-06-02 | 2023-06-30 | 江苏微知量子科技有限公司 | Quantum federal learning method and system |
WO2023134077A1 (en) * | 2022-01-17 | 2023-07-20 | 平安科技(深圳)有限公司 | Homomorphic encryption method and system based on federated factorization machine, device and storage medium |
WO2023174018A1 (en) * | 2022-03-15 | 2023-09-21 | 北京字节跳动网络技术有限公司 | Vertical federated learning methods, apparatuses, system and device, and storage medium |
CN117675199A (en) * | 2023-12-21 | 2024-03-08 | 盐城集结号科技有限公司 | Network security defense system based on RPA |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190227980A1 (en) * | 2018-01-22 | 2019-07-25 | Google Llc | Training User-Level Differentially Private Machine-Learned Models |
CN110572253A (en) * | 2019-09-16 | 2019-12-13 | 济南大学 | Method and system for enhancing privacy of federated learning training data |
CN110995737A (en) * | 2019-12-13 | 2020-04-10 | 支付宝(杭州)信息技术有限公司 | Gradient fusion method and device for federal learning and electronic equipment |
CN111552986A (en) * | 2020-07-10 | 2020-08-18 | 鹏城实验室 | Block chain-based federal modeling method, device, equipment and storage medium |
CN111611610A (en) * | 2020-04-12 | 2020-09-01 | 西安电子科技大学 | Federal learning information processing method, system, storage medium, program, and terminal |
-
2020
- 2020-10-16 CN CN202011109363.0A patent/CN112199702A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190227980A1 (en) * | 2018-01-22 | 2019-07-25 | Google Llc | Training User-Level Differentially Private Machine-Learned Models |
CN110572253A (en) * | 2019-09-16 | 2019-12-13 | 济南大学 | Method and system for enhancing privacy of federated learning training data |
CN110995737A (en) * | 2019-12-13 | 2020-04-10 | 支付宝(杭州)信息技术有限公司 | Gradient fusion method and device for federal learning and electronic equipment |
CN111611610A (en) * | 2020-04-12 | 2020-09-01 | 西安电子科技大学 | Federal learning information processing method, system, storage medium, program, and terminal |
CN111552986A (en) * | 2020-07-10 | 2020-08-18 | 鹏城实验室 | Block chain-based federal modeling method, device, equipment and storage medium |
Cited By (51)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112560106A (en) * | 2021-02-20 | 2021-03-26 | 支付宝(杭州)信息技术有限公司 | Method, device and system for processing privacy matrix |
CN113179244A (en) * | 2021-03-10 | 2021-07-27 | 上海大学 | Federal deep network behavior feature modeling method for industrial internet boundary safety |
CN113179244B (en) * | 2021-03-10 | 2022-12-23 | 上海大学 | Federal deep network behavior feature modeling method for industrial internet boundary safety |
CN113158230B (en) * | 2021-03-16 | 2024-02-09 | 陕西数盾慧安数据科技有限公司 | Online classification method based on differential privacy |
CN113158230A (en) * | 2021-03-16 | 2021-07-23 | 陕西数盾慧安数据科技有限公司 | Online classification method based on differential privacy |
CN112949760A (en) * | 2021-03-30 | 2021-06-11 | 平安科技(深圳)有限公司 | Model precision control method and device based on federal learning and storage medium |
CN112949760B (en) * | 2021-03-30 | 2024-05-10 | 平安科技(深圳)有限公司 | Model precision control method, device and storage medium based on federal learning |
CN113159316A (en) * | 2021-04-08 | 2021-07-23 | 支付宝(杭州)信息技术有限公司 | Model training method, method and device for predicting business |
CN113159316B (en) * | 2021-04-08 | 2022-05-17 | 支付宝(杭州)信息技术有限公司 | Model training method, method and device for predicting business |
CN113191530A (en) * | 2021-04-09 | 2021-07-30 | 汕头大学 | Block link point reliability prediction method and system with privacy protection function |
CN113159918A (en) * | 2021-04-09 | 2021-07-23 | 福州大学 | Bank client group mining method based on federal group penetration |
CN113159918B (en) * | 2021-04-09 | 2022-06-07 | 福州大学 | Bank client group mining method based on federal group penetration |
CN113221144A (en) * | 2021-05-19 | 2021-08-06 | 国网辽宁省电力有限公司电力科学研究院 | Virtualization terminal abnormity detection method and system for privacy protection machine learning |
CN113221144B (en) * | 2021-05-19 | 2024-05-03 | 国网辽宁省电力有限公司电力科学研究院 | Privacy protection machine learning virtualization terminal abnormality detection method and system |
CN113515760B (en) * | 2021-05-28 | 2024-03-15 | 平安国际智慧城市科技股份有限公司 | Horizontal federal learning method, apparatus, computer device, and storage medium |
CN113515760A (en) * | 2021-05-28 | 2021-10-19 | 平安国际智慧城市科技股份有限公司 | Horizontal federal learning method, device, computer equipment and storage medium |
CN113362160A (en) * | 2021-06-08 | 2021-09-07 | 南京信息工程大学 | Federal learning method and device for credit card anti-fraud |
CN113362160B (en) * | 2021-06-08 | 2023-08-22 | 南京信息工程大学 | Federal learning method and device for credit card anti-fraud |
WO2022257180A1 (en) * | 2021-06-10 | 2022-12-15 | Hong Kong Applied Science and Technology Research Institute Company Limited | Dynamic differential privacy to federated learning systems |
US11907403B2 (en) | 2021-06-10 | 2024-02-20 | Hong Kong Applied Science And Technology Research Institute Co., Ltd. | Dynamic differential privacy to federated learning systems |
CN113614726A (en) * | 2021-06-10 | 2021-11-05 | 香港应用科技研究院有限公司 | Dynamic differential privacy for federated learning systems |
CN113378198A (en) * | 2021-06-24 | 2021-09-10 | 深圳市洞见智慧科技有限公司 | Federal training system, method and device for model for protecting user identification |
CN113335490A (en) * | 2021-06-30 | 2021-09-03 | 广船国际有限公司 | Double-wall pipe ventilation system and ship |
CN113704779A (en) * | 2021-07-16 | 2021-11-26 | 杭州医康慧联科技股份有限公司 | Encrypted distributed machine learning training method |
CN113688408B (en) * | 2021-08-03 | 2023-05-12 | 华东师范大学 | Maximum information coefficient method based on secure multiparty calculation |
CN113688408A (en) * | 2021-08-03 | 2021-11-23 | 华东师范大学 | Maximum information coefficient method based on safe multi-party calculation |
CN113778966B (en) * | 2021-09-15 | 2024-03-26 | 深圳技术大学 | Cross-school information sharing method and related device for university teaching and course score |
CN113778966A (en) * | 2021-09-15 | 2021-12-10 | 深圳技术大学 | Cross-school information sharing method and related device for college teaching and course score |
CN113543120B (en) * | 2021-09-17 | 2021-11-23 | 百融云创科技股份有限公司 | Mobile terminal credit anti-fraud estimation method and system based on federal learning |
CN113543120A (en) * | 2021-09-17 | 2021-10-22 | 百融云创科技股份有限公司 | Mobile terminal credit anti-fraud estimation method and system based on federal learning |
CN113836556A (en) * | 2021-09-26 | 2021-12-24 | 广州大学 | Federal learning-oriented decentralized function encryption privacy protection method and system |
CN113591133A (en) * | 2021-09-27 | 2021-11-02 | 支付宝(杭州)信息技术有限公司 | Method and device for performing feature processing based on differential privacy |
CN113591133B (en) * | 2021-09-27 | 2021-12-24 | 支付宝(杭州)信息技术有限公司 | Method and device for performing feature processing based on differential privacy |
CN114091651B (en) * | 2021-11-03 | 2024-05-24 | 支付宝(杭州)信息技术有限公司 | Method, device and system for multi-party combined training of graph neural network |
CN114143311A (en) * | 2021-11-03 | 2022-03-04 | 深圳前海微众银行股份有限公司 | Privacy protection scheme aggregation method and device based on block chain |
CN114091651A (en) * | 2021-11-03 | 2022-02-25 | 支付宝(杭州)信息技术有限公司 | Method, device and system for multi-party joint training of neural network of graph |
CN114282652A (en) * | 2021-12-22 | 2022-04-05 | 哈尔滨工业大学 | Privacy-protecting longitudinal deep neural network model construction method, computer and storage medium |
CN114338144A (en) * | 2021-12-27 | 2022-04-12 | 杭州趣链科技有限公司 | Method for preventing data from being leaked, electronic equipment and computer-readable storage medium |
WO2023134077A1 (en) * | 2022-01-17 | 2023-07-20 | 平安科技(深圳)有限公司 | Homomorphic encryption method and system based on federated factorization machine, device and storage medium |
WO2023174018A1 (en) * | 2022-03-15 | 2023-09-21 | 北京字节跳动网络技术有限公司 | Vertical federated learning methods, apparatuses, system and device, and storage medium |
CN115081014A (en) * | 2022-05-31 | 2022-09-20 | 西安翔迅科技有限责任公司 | Target detection label automatic labeling method based on federal learning |
CN115310121A (en) * | 2022-07-12 | 2022-11-08 | 华中农业大学 | Real-time reinforced federal learning data privacy security method based on MePC-F model in Internet of vehicles |
CN115278709B (en) * | 2022-07-29 | 2024-04-26 | 南京理工大学 | Communication optimization method based on federal learning |
CN115278709A (en) * | 2022-07-29 | 2022-11-01 | 南京理工大学 | Communication optimization method based on federal learning |
CN115186285B (en) * | 2022-09-09 | 2022-12-02 | 闪捷信息科技有限公司 | Parameter aggregation method and device for federal learning |
CN115186285A (en) * | 2022-09-09 | 2022-10-14 | 闪捷信息科技有限公司 | Parameter aggregation method and device for federal learning |
CN115865307A (en) * | 2023-02-27 | 2023-03-28 | 蓝象智联(杭州)科技有限公司 | Data point multiplication operation method for federal learning |
CN116366250A (en) * | 2023-06-02 | 2023-06-30 | 江苏微知量子科技有限公司 | Quantum federal learning method and system |
CN116366250B (en) * | 2023-06-02 | 2023-08-15 | 江苏微知量子科技有限公司 | Quantum federal learning method and system |
CN117675199A (en) * | 2023-12-21 | 2024-03-08 | 盐城集结号科技有限公司 | Network security defense system based on RPA |
CN117675199B (en) * | 2023-12-21 | 2024-06-07 | 盐城集结号科技有限公司 | Network security defense system based on RPA |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112199702A (en) | Privacy protection method, storage medium and system based on federated learning | |
WO2021197037A1 (en) | Method and apparatus for jointly performing data processing by two parties | |
Byrd et al. | Differentially private secure multi-party computation for federated learning in financial applications | |
CN111160573B (en) | Method and device for protecting business prediction model of data privacy joint training by two parties | |
CN110546667B (en) | Blockchain data protection using homomorphic encryption | |
CN112989368B (en) | Method and device for processing private data by combining multiple parties | |
CN112541593B (en) | Method and device for jointly training business model based on privacy protection | |
CN111143894B (en) | Method and system for improving safe multi-party computing efficiency | |
US20160020904A1 (en) | Method and system for privacy-preserving recommendation based on matrix factorization and ridge regression | |
CA3159667A1 (en) | Systems and methods for encrypting data and algorithms | |
CN113435592B (en) | Neural network multiparty collaborative lossless training method and system with privacy protection | |
CN113239404A (en) | Federated learning method based on differential privacy and chaotic encryption | |
CN110580409A (en) | model parameter determination method and device and electronic equipment | |
Hassan et al. | Secure content based image retrieval for mobile users with deep neural networks in the cloud | |
CN116561787A (en) | Training method and device for visual image classification model and electronic equipment | |
Byrd et al. | Collusion resistant federated learning with oblivious distributed differential privacy | |
Khan et al. | Vertical federated learning: A structured literature review | |
CN113792890A (en) | Model training method based on federated learning and related equipment | |
JP2014206696A (en) | Data secrecy type inner product calculation system, method and program | |
CN115952529B (en) | User data processing method, computing device and storage medium | |
CN116094686B (en) | Homomorphic encryption method, homomorphic encryption system, homomorphic encryption equipment and homomorphic encryption terminal for quantum convolution calculation | |
Deng et al. | Non-interactive and privacy-preserving neural network learning using functional encryption | |
Lee et al. | PPEM: Privacy‐preserving EM learning for mixture models | |
Li et al. | Privacy threats analysis to secure federated learning | |
CN113657616B (en) | Updating method and device of federal learning model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||