CN114997423A - Semi-centralized adversarial training method for federated learning - Google Patents
Semi-centralized adversarial training method for federated learning
- Publication number: CN114997423A
- Application number: CN202210532196.3A
- Authority: CN (China)
- Prior art keywords: parameters, client, encoder, server, training
- Prior art date: 2022-05-09
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
Abstract
The invention discloses a semi-centralized adversarial training method for federated learning, belonging to the intersection of security and artificial intelligence. The method comprises the following specific steps: first, for a federated learning architecture, determine the features to be trained preferentially and acquire a sample data set with feature labels; according to the sample attributes, pre-train an adversarial generative model so that it can generate adversarial samples for those attributes; then split the generative model into an encoder and a decoder, send the encoder to the clients, and start the federated process; each client encodes a small random selection of 5%-10% of its local samples with the received encoder and sends them to the server; after receiving the samples in each round, the server uses the adversarial samples to fine-tune the parameters of the federated learning architecture and finally publishes them. The invention applies adversarial training within a distributed federated learning architecture to improve model performance.
Description
Technical Field
The invention belongs to the intersection of security and artificial intelligence, and particularly relates to a semi-centralized adversarial training method oriented to federated learning.
Background
In 2016, Google first proposed federated learning, for artificial-intelligence prediction in input methods. Federated learning is a framework in which multiple user parties jointly train a model; on the basis of traditional distributed machine learning, it adds techniques and mechanisms for protecting data privacy. Now that privacy-sensitive data has increasingly become a limit on big-data development, federated learning is one of the few machine learning technologies able to provide a privacy-preserving solution.
However, federated learning is not perfect. Because model performance is lost during joint training across multiple user parties, performance on prediction tasks is expected to fall below that of centralized machine learning. A corresponding optimization method is therefore needed for this particular distributed machine learning framework, so that the learning capability of the federated model improves enough to take part in prediction tasks in commercial activity.
Disclosure of Invention
Aiming at these problems, the invention provides a semi-centralized adversarial training method oriented to federated learning. Based on a specially constructed adversarial generative network, it optimizes federated learning performance while reducing both the disclosure of user privacy and the consumption of communication resources, achieving efficient and targeted performance optimization of federated learning.
The method comprises the following specific steps:
Step one, in the federated learning process, for the feature item to be trained preferentially, acquire a data set with feature labels and divide it into a training set and a test set;
Step two, according to the rules of federated learning, the server determines the number of clients and probability-samples the training set so that each client is allocated a training set of the same size;
each client's local data pointer is pointed to its allocated training set, and a training-set directory is added to each client;
the directory contains: (1) the number of training samples at the client; (2) the label distribution of the client's training set; (3) the label distribution of the priority feature items in the client's training set.
Step three, for a horizontal federated learning architecture with a central server, pre-train an adversarial generative model VAE using the training set;
the adversarial generative model VAE comprises: an Encoder, a Decoder, the mean vector mu of the latent variables, the log-variance vector logvar of the latent variables, and the reparameterization method for sampling the latent variables.
In addition, the pre-training steps are as follows:
Step 301, initialize the encoder, the decoder, and a fully connected neural network (MLP);
Step 302, input a picture sample x to the encoder to obtain a latent variable z; the decoder decodes z into an adversarial sample, while the MLP takes the sub-vector z_a of the latent variable z as input and outputs a prediction a of the feature to be optimized;
Step 303, substitute the input sample x, the latent variable z, and the prediction a into the loss function, and compute the loss value for the current adversarial sample;
the loss function loss is calculated as:
L(p, q) = E_{q(z|x)}[log p(x|z)] - KLD(q(z|x) || p(z)) + E[a | z_a]
where E denotes expectation: q(z|x) is the output of the encoder, i.e. the distribution of the latent variables; p(x|z) is the output of the decoder, i.e. the generated adversarial sample; p(z) is the representation of z under the standard normal distribution; KLD is the KL-divergence distance between the latent-variable distribution and the normal distribution; and E[a | z_a] is the expectation of the prediction of the feature to be optimized given the sub-vector z_a.
The loss function thus comprises three terms: the cross entropy between the pixel-value distributions of the original sample and the adversarial sample, the KL-divergence distance between the latent-variable distribution and the standard normal distribution, and the cross entropy between the predicted priority feature and the actual feature label.
Step 304, back-propagate the loss using stochastic gradient descent, updating the parameters of the encoder, the decoder, and the MLP;
Step four, the central server sends the encoder parameters of the trained VAE to each client, together with the ratio parameter for the number of samples to collect;
the method specifically comprises the following steps:
First, the central server generates a homomorphic-encryption public key and private key, encrypts the encoder parameters with the public key, and encrypts a set of task parameters at the same time, comprising: (1) the proportion of training data each client selects locally; (2) the names of the features the server trains preferentially; (3) the number and positions of the latent variables to be randomized.
Then the central server sends the private key to each client through a secure channel, sends the encrypted encoder parameters and task parameters to each client over the public network, and waits for the clients to respond with an acknowledgment flag.
On receiving the encoder parameters and task parameters, a client decrypts them with the private key and then performs validity verification; if the data passes, it sends an acknowledgment flag to the server; otherwise it sends a retransmission flag.
The validity verification checks: (1) whether the data format is correct; (2) whether the data content lies within the normal interval.
The server waits for client messages and immediately retransmits the parameter information whenever a retransmission flag arrives, until all participants have sent acknowledgments.
Step five, the central server applies the Paillier homomorphic encryption algorithm to generate a new public/private key pair, sends the model parameters of the federated learning architecture and the public key to each client, and keeps the private key local to the server;
Step six, each client updates the model parameters of the federated learning architecture using its local training set and sends a local-training-complete signal to the server; meanwhile, it randomly selects local data, encodes it with the trained encoder, obtains feature vectors by reparameterized random sampling, and packages and sends them to the central server;
The feature vectors are generated by the following specific steps:
First, the client randomly selects 5%-10% of its local data and feeds it to the encoder to obtain the mean and log-variance of the latent variables;
then random sampling is performed via the reparameterization operation, assigning a value to each latent variable;
specifically, a value is drawn at random from the standard normal distribution, multiplied by the standard deviation, and added to the mean to assign each latent variable;
the standard deviation is the square root of the exponential of the log-variance, i.e. std = exp(logvar / 2).
Finally, random noise is added to the components of the assigned latent vector that correspond to the priority feature, and the vector is packaged.
Step seven, the server receives the updated model parameters of the federated learning architecture and computes their weighted average; meanwhile, it uses the decoder to convert the feature vectors sent by the clients into adversarial samples, which serve as training data to fine-tune the weight-averaged model parameters of the federated learning architecture;
the method comprises the following specific steps:
First, the server collects a fixed number of model-parameter sets, computes a weighted average of them together with the current federated-model parameters, and after the computation decrypts the result using its local private key;
then the server collects a fixed number of feature vectors and decodes them with the local decoder to obtain noise-augmented adversarial samples; it retrains and fine-tunes the model parameters of the federated learning architecture using local data samples together with the adversarial samples, and once the update is complete it publishes the updated parameters, completing the federated process.
Step eight, the server publishes the fine-tuned federated learning architecture and checks whether the federated process has finished; if not, return to step five.
The invention has the advantages that:
1) The semi-centralized adversarial training method for federated learning can greatly improve the performance of the federated model, allowing it to meet the requirements of prediction tasks in a commercial environment; at the same time, it can improve the federated model's ability to learn specific attributes.
2) Compared with prior techniques of the same type, the semi-centralized adversarial training method for federated learning better protects users' private information, reduces the amount of data transmitted during communication, and improves the running speed of the federated learning system.
Drawings
FIG. 1 is a block diagram of the semi-centralized adversarial training of the present invention;
FIG. 2 is a flow chart of the federated-learning-oriented semi-centralized adversarial training method of the present invention;
FIG. 3 is a schematic illustration of adversarial samples generated in the present invention;
FIG. 4 is a schematic diagram of the improvement in task prediction accuracy achieved by the adversarial training method of the present invention;
FIG. 5 compares the communication data consumption of the adversarial training method of the present invention with other optimization schemes;
Detailed Description
The invention will be described in further detail below with reference to the drawings and examples of embodiment.
The invention relates to a semi-centralized adversarial training method oriented to federated learning. First, for a federated learning architecture, the features to be trained preferentially are determined and a sample data set with feature labels is acquired; according to the sample attributes, a customized adversarial generative model is pre-trained so that it can generate adversarial samples for each attribute; the model is then split into an encoder and a decoder, the encoder is sent to the clients, and the federated process begins; each client encodes a small random selection of 5%-10% of its local samples with the received encoder and sends them to the server; after receiving the samples in each round, the server uses the adversarial samples to fine-tune the parameters of the federated learning architecture and publishes them. The invention applies adversarial training within a distributed federated learning architecture to improve model performance.
As shown in FIG. 2, the specific steps are as follows:
Step one, for a horizontal federated learning architecture with a central server, determine the feature item that needs preferential training, acquire a data set with feature labels, and divide it into a training set and a test set;
the feature items are labels different from the prediction tasks and are generally important sensitive information; such as gender of the face data. Such information often has a great influence on the prediction performance of the model, and for example, if the face data is subjected to color value scoring, the gender of the person has a great influence on the score, so that it is necessary to explicitly utilize the important information in the prediction task to improve the model performance.
In the standard training and testing process of machine learning, data sets need to be divided, and the data sets are generally divided according to a training set of 90% and a testing set of 10%; in the present invention, the pre-training data set also needs to be divided according to a certain proportion.
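A minimal sketch of this division (the 90/10 ratio comes from the text; the function name and fixed seed are illustrative assumptions):

```python
import random

def split_dataset(samples, train_ratio=0.9, seed=0):
    """Shuffle a data set and split it 90% training / 10% test."""
    rng = random.Random(seed)           # fixed seed so the split is reproducible
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

train_set, test_set = split_dataset(range(1000))
print(len(train_set), len(test_set))    # 900 100
```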
Pre-training is a method of initializing parameters during training: once the model's prediction performance is good enough, its parameters can be saved so that a better result is obtained the next time a similar task is executed. The final federated model in the invention depends to some degree on the quality of the pre-training, so whether pre-training is complete must be determined by observing performance on the pre-training task.
Step two, distribute the training set to the clients according to the rules of federated learning, so that every client holds an approximately equal number of samples;
All user parties in federated learning have equal status. In the operation of a practical federated learning system, the probability weight of each participant is generally determined by the number of samples it contributes. Before the federated process starts, the specific procedure is as follows:
First, the server determines the number of clients and probability-samples the training set accordingly, so that each client is sampled with approximately the same probability;
then each client's local data pointer is pointed to its allocated training set;
finally, a training-set directory is added at each client;
the directory contains: (1) the number of training samples at the client; (2) the label distribution of the client's training set; (3) the label distribution of the priority feature items in the client's training set.
The data directory is added so that the client can quickly look up and send label information about its data locally, without recounting every time, which speeds up the execution and interaction of federated learning. A minimal sketch of such a directory follows.
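In this sketch the key names and toy labels are illustrative assumptions, not taken from the patent:

```python
from collections import Counter

def build_client_directory(samples, labels, priority_labels):
    """Per-client training-set directory: counts computed once, then reused."""
    return {
        "num_samples": len(samples),                     # (1) sample count
        "label_distribution": dict(Counter(labels)),     # (2) task-label distribution
        "priority_label_distribution":
            dict(Counter(priority_labels)),              # (3) priority-feature labels
    }

# Toy client with four face images: task label = attractiveness bucket,
# priority feature = gender.
directory = build_client_directory(
    samples=["img0", "img1", "img2", "img3"],
    labels=["high", "low", "high", "high"],
    priority_labels=["female", "male", "female", "male"],
)
print(directory["num_samples"], directory["priority_label_distribution"])
```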
Step three, initializing a federal learning architecture at a central server, and using a training set to pre-train a countermeasure generation model VAE (variational self-encoder);
A Variational Auto-Encoder (VAE) is a generative network structure based on variational Bayesian (VB) inference, a form of deep generative model. Unlike a traditional auto-encoder, which describes the latent space numerically, it describes observations of the latent space probabilistically, which gives it great application value in data generation and adversarial training. The structure and methods of the VAE model are as follows (assembled in the sketch after the list):
Encoder: a neural network mapping the high-dimensional input to a low-dimensional output, generally a multilayer fully connected network or a convolutional network;
Decoder: a neural network mapping the low-dimensional input to a high-dimensional output, generally a multilayer fully connected network or a deconvolutional network;
mu: the mean vector of the latent variables, learned by the encoder;
logvar: the log-variance vector of the latent variables, learned by the encoder;
Reparameterization: the sampling method for the latent variables; generally a random value is drawn from the standard normal distribution, multiplied by the standard deviation, and added to the mean.
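A PyTorch sketch assembling these components; the layer sizes, the input dimension, and the choice of fully connected layers are assumptions for illustration:

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Encoder, decoder, mu/logvar heads, MLP head, and reparameterization."""

    def __init__(self, x_dim=784, z_dim=32, za_dim=8, n_feature_classes=2):
        super().__init__()
        # Encoder: high-dimensional input -> hidden representation
        self.encoder = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, z_dim)        # mu: latent mean vector
        self.fc_logvar = nn.Linear(256, z_dim)    # logvar: latent log-variance
        # Decoder: latent vector -> reconstructed (adversarial) sample
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, x_dim), nn.Sigmoid(),
        )
        # MLP head: sub-vector z_a -> prediction a of the priority feature
        self.za_dim = za_dim
        self.mlp = nn.Sequential(nn.Linear(za_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_feature_classes))

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)      # std = sqrt(exp(logvar))
        eps = torch.randn_like(std)        # random draw from N(0, I)
        return mu + eps * std              # multiply by std, then add the mean

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        x_rec = self.decoder(z)            # generated sample
        a = self.mlp(z[:, :self.za_dim])   # prediction from the sub-vector z_a
        return x_rec, mu, logvar, a
```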
In addition, the steps for pre-training the VAE are as follows:
Step 301, split the variational auto-encoder into an encoder part and a decoder part, and initialize the encoder, the decoder, and a fully connected neural network (MLP);
the encoder encodes picture data into latent variables, and the decoder decodes latent variables back into picture data;
Step 302, input picture data x to the encoder to obtain a latent variable z; the decoder decodes z into an adversarial sample, while the MLP takes the sub-vector z_a of the latent variable z as input and outputs a prediction a of the feature to be optimized;
Step 303, substitute the input data x, the latent variable z, and the prediction a into the loss function, and compute the loss value for the current adversarial sample;
the loss function loss is calculated as:
L(p, q) = E_{q(z|x)}[log p(x|z)] - KLD(q(z|x) || p(z)) + E[a | z_a]
where E denotes expectation: q(z|x) is the output of the encoder, i.e. the distribution of the latent variables; p(x|z) is the output of the decoder, i.e. the generated sample; p(z) is the representation of z under the standard normal distribution; KLD is the KL-divergence distance between the latent-variable distribution and the normal distribution; and E[a | z_a] is the expectation of the prediction of the feature to be optimized given the sub-vector z_a.
The loss function comprises the following terms:
Cross entropy between the pixel-value distributions of the original sample and the adversarial sample: cross entropy mainly judges how close the actual output is to the desired output, here measuring how far the generated adversarial sample departs from the original sample.
KL-divergence distance between the latent-variable distribution and the standard normal distribution: KL divergence, also called relative entropy, is an asymmetric measure of the difference between two probability distributions; in information theory, the relative entropy equals the cross entropy of the two distributions minus the Shannon entropy of the first. To improve the generalization ability of the model, this term is designed as a regularizer during pre-training, measuring how far the current latent distribution departs from the standard normal distribution.
Cross entropy between the predicted priority feature and the actual feature label: this measures the degree of correlation between the latent variables and the priority feature.
Step 304, back-propagate the loss using stochastic gradient descent, updating the parameters of the encoder, the decoder, and the MLP. A sketch of this loss and one pre-training step follows.
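A minimal sketch, assuming the VAE class from the sketch above, inputs normalized to [0, 1], and equal weighting of the three terms:

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_rec, mu, logvar, a_pred, a_true):
    # (1) cross entropy between original and generated pixel-value distributions
    rec = F.binary_cross_entropy(x_rec, x, reduction="sum")
    # (2) KL divergence between q(z|x) and the standard normal prior
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # (3) cross entropy between predicted and actual priority-feature labels
    feat = F.cross_entropy(a_pred, a_true, reduction="sum")
    return rec + kld + feat

model = VAE()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
x = torch.rand(16, 784)                 # toy batch of picture data
a_true = torch.randint(0, 2, (16,))     # toy priority-feature labels
x_rec, mu, logvar, a_pred = model(x)
loss = vae_loss(x, x_rec, mu, logvar, a_pred, a_true)
optimizer.zero_grad()
loss.backward()                         # step 304: back-propagation with SGD
optimizer.step()
```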
Step four, the central server sends the encoder parameters of the trained VAE to each client, together with the ratio parameter for the number of samples to collect;
the method specifically comprises the following steps:
First, the central server generates a homomorphic-encryption public key and private key, encrypts the encoder parameters with the public key, and encrypts a set of task parameters at the same time, comprising: (1) the proportion of training data each client selects locally; (2) the names of the features the server trains preferentially; (3) the number and positions of the latent variables to be randomized.
Homomorphic encryption is a cryptographic technique based on the computational-complexity theory of mathematical problems: processing homomorphically encrypted data yields an output which, once decrypted, is identical to the output of applying the same processing to the unencrypted original data.
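As a minimal demonstration of this property, the following sketch uses the third-party python-paillier (phe) library, which implements the Paillier scheme referenced in step five; the key size and values are arbitrary:

```python
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

a, b = 3.5, 4.25
enc_a, enc_b = public_key.encrypt(a), public_key.encrypt(b)

# Addition and scalar multiplication act directly on the ciphertexts ...
enc_sum, enc_scaled = enc_a + enc_b, enc_a * 2

# ... and decrypting gives the same results as operating on the plaintexts.
assert abs(private_key.decrypt(enc_sum) - (a + b)) < 1e-9
assert abs(private_key.decrypt(enc_scaled) - 2 * a) < 1e-9
```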
Then the central server sends the private key to each client through a secure channel, sends the encrypted encoder parameters and task parameters to each client over the public network, and waits for the clients to respond with an acknowledgment flag.
On receiving the encoder parameters and task parameters, a client decrypts them with the private key and then performs validity verification; if the data passes, it sends an acknowledgment flag to the server; otherwise it sends a retransmission flag.
The validity verification checks: (1) whether the data format is correct; (2) whether the data content lies within the normal interval.
The server waits for client messages and immediately retransmits the parameter information whenever a retransmission flag arrives, until all participants have sent acknowledgments. The sketch below walks through this exchange.
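In this sketch the toy parameter values, the two validity checks, and the flag strings are illustrative assumptions:

```python
from phe import paillier

# Server: generate keys, encrypt encoder parameters and task parameters.
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)
payload = {
    "encoder": [public_key.encrypt(w) for w in [0.12, -0.53, 0.07]],
    "data_ratio": public_key.encrypt(0.05),   # (1) 5% local sampling ratio
}

# Client: decrypt with the private key received over the secure channel,
# then run the validity verification before responding.
dec_encoder = [private_key.decrypt(c) for c in payload["encoder"]]
dec_ratio = private_key.decrypt(payload["data_ratio"])

format_ok = all(isinstance(v, float) for v in dec_encoder)   # (1) data format
range_ok = 0.0 < dec_ratio <= 0.10                           # (2) normal interval
flag = "ACK" if (format_ok and range_ok) else "RETRANSMIT"
print(flag)   # the server retransmits until every participant sends ACK
```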
Step five, the central server starts a federal training flow, sends the parameters of the current federal model to each client, determines the number of the clients to be sampled in the current round, and waits for the clients to send local training completion signals;
Specifically, the server applies the Paillier homomorphic encryption algorithm to generate a new public/private key pair, sends the model parameters of the federated learning architecture and the public key to each client, and keeps the private key local to the server.
Keeping the private key local to the server prevents attacks by third parties on the public network.
Step six, each client receives the model parameters of the federated learning architecture and trains and updates them with its local training set; meanwhile, the client randomly selects 5%-10% of its local data, encodes it into latent variables with the trained encoder, randomly samples the latent variables to obtain feature vectors, and packages them; finally, it sends a local-training-complete signal to the server;
The feature vectors are generated as follows:
First, the client randomly selects 5%-10% of its local data and feeds it to the encoder to obtain the mean and log-variance of the latent variables;
then sampling is performed via the reparameterization operation. Reparameterization is a trick that keeps random sampling differentiable inside a neural network: a value is drawn at random from the standard normal distribution, multiplied by the standard deviation, and added to the mean; the standard deviation is the square root of the exponential of the log-variance, i.e. std = exp(logvar / 2). In this way each latent variable is assigned a value;
random noise is then added to the components of the assigned latent vector that correspond to the priority feature, the noise generally being a random value from the standard normal distribution. The client encapsulates each latent vector and finally packages all the vectors, as in the sketch below.
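Here the stand-in encoder heads, the dimensions, and the latent positions assigned to the priority feature are all assumptions:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def encode_local_samples(enc_mu, enc_logvar, x_local, priority_idx):
    mu, logvar = enc_mu(x_local), enc_logvar(x_local)
    std = torch.exp(0.5 * logvar)              # std = sqrt(exp(logvar))
    z = mu + torch.randn_like(std) * std       # reparameterized random sample
    # Add standard-normal noise on the latent dims tied to the priority feature.
    z[:, priority_idx] += torch.randn(z.size(0), len(priority_idx))
    return z                                   # packaged and sent to the server

# Toy usage: pick 5% of the local data at random, then encode it.
enc_mu, enc_logvar = nn.Linear(16, 4), nn.Linear(16, 4)   # stand-in encoder heads
data = torch.rand(200, 16)
idx = torch.randperm(len(data))[: len(data) // 20]        # 5% of the samples
vectors = encode_local_samples(enc_mu, enc_logvar, data[idx], priority_idx=[0, 1])
```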
Step seven, the server receives the updated model parameters of the federal learning architecture from the client, carries out weighted average on the updated model parameters and updates the federal model parameters; meanwhile, the server receives the feature vector sent by the client, converts the feature vector into a countermeasure sample by using a decoder, and uses the countermeasure sample as training data to fine tune the federal model parameters;
the method comprises the following specific steps:
First, the server collects a fixed number of model-parameter sets, computes a weighted average of them together with the current federated-model parameters, and after the computation decrypts the result with the local private key;
then the server collects a fixed number of feature vectors and decodes them with the local decoder to obtain noise-augmented adversarial samples; it retrains and fine-tunes the federated-model parameters using local data samples together with the adversarial samples, and once the parameters are updated it publishes them, completing this round of the federated process. A sketch of this server round follows.
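All names below and the stubbed fine-tuning step are assumptions; with Paillier, the weighted average in step 1 could equally be computed on ciphertexts and decrypted afterwards:

```python
import torch
import torch.nn as nn

def server_round(model, client_states, client_weights, decoder, feature_vectors,
                 fine_tune):
    # 1) Weighted average of the collected client parameter sets.
    total = sum(client_weights)
    avg = {name: sum(w * s[name] for w, s in zip(client_weights, client_states))
                 / total
           for name in model.state_dict()}
    model.load_state_dict(avg)
    # 2) Decode the received feature vectors into noisy adversarial samples.
    with torch.no_grad():
        adv_samples = decoder(torch.cat(feature_vectors, dim=0))
    # 3) Fine-tune the averaged model on the adversarial samples, then publish.
    fine_tune(model, adv_samples)
    return model.state_dict()                 # published to start the next round

# Toy usage with three clients and stubbed fine-tuning.
model, decoder = nn.Linear(8, 2), nn.Linear(4, 8)
states = [nn.Linear(8, 2).state_dict() for _ in range(3)]
vectors = [torch.randn(5, 4) for _ in range(3)]
published = server_round(model, states, [1.0, 1.0, 1.0], decoder, vectors,
                         fine_tune=lambda m, x: None)
```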
Adversarial samples are shown in FIG. 3. In the first pair of images, the left is the original sample and the right is the adversarial sample, which transforms the "skin tone" of the original, so "skin tone" is the feature expected to be trained preferentially. In the second pair of images, the left is the original sample and the right is the adversarial sample, which transforms the color of the original, so color is the feature expected to be trained preferentially.
Step eight, the server publishes the federated model of the federated learning architecture and checks whether the federated process has finished; if not, return to step five.
Termination of the federated process depends on a termination command: until one is issued, the process keeps running, and each distribution of the model by the server can be regarded as one round.
Specific examples are as follows:
the method is used for carrying out countermeasure training optimization on the federal learning, and the federal learning simulation platform PySyft in python language is used for carrying out federal framework simulation. In the simulation, 5 groups of users and 1 central server were instantiated, using the CelebA public data set, as shown in fig. 4, a diagram of the predicted task accuracy changes over 5 groups of users for the federated learning model before and after the countermeasure optimization. Compared with the experimental results, the accuracy improvement range is not completely the same in 5 user groups, but is improved to a certain extent, the maximum improvement can reach 13.3%, and the superiority of the method in the invention on the prediction task is fully proved compared with the original federal learning framework.
FIG. 5 compares the communication-traffic consumption with that of another federated learning optimization scheme widely used in the commercial market. Communication-traffic consumption measures the communication cost and efficiency of a federated learning system; the traffic reduction achieved by the method of the invention grows as the federated process proceeds, effectively lowering communication cost and improving the communication efficiency and overall running efficiency of the federated learning system.
Claims (6)
1. A federated-learning-oriented semi-centralized adversarial training method, characterized by the following specific steps:
first, in the federated learning process, for the feature item to be trained preferentially, acquire a data set with feature labels and divide it into a training set and a test set; according to the rules of federated learning, the server probability-samples the training set according to the number of clients;
then, for a horizontal federated learning architecture with a central server, pre-train an adversarial generative model VAE using the training set;
the central server sends the encoder parameters of the trained VAE to each client, together with the ratio parameter for the number of samples to collect; at the same time, the central server applies the Paillier homomorphic encryption algorithm to generate a new public/private key pair, sends the model parameters of the federated learning architecture and the public key to each client, and keeps the private key local to the server;
each client updates the model parameters of the federated learning architecture with the training set and sends the updated parameters to the server; at the same time, it randomly selects 5%-10% of its local data, encodes it into latent variables with the trained encoder, obtains feature vectors by reparameterized random sampling, and packages and sends them to the central server;
finally, the server receives the updated model parameters of the federated learning architecture and computes their weighted average; at the same time, it converts the feature vectors sent by the clients into adversarial samples with the decoder and uses them as training data to fine-tune the weight-averaged model parameters of the federated learning architecture for final publication.
2. The federated-learning-oriented semi-centralized adversarial training method according to claim 1, wherein each client's local data pointer points to its allocated training set, and a training-set directory is added to each client;
the directory contains: (1) the number of training samples at the client; (2) the label distribution of the client's training set; (3) the label distribution of the priority feature items in the client's training set.
3. The federated-learning-oriented semi-centralized adversarial training method according to claim 1, wherein the adversarial generative model VAE comprises: an Encoder, a Decoder, the mean vector mu of the latent variables, the log-variance vector logvar of the latent variables, and the reparameterization method for sampling the latent variables;
the steps of pre-training the adversarial generative model VAE are as follows:
step 301, initialize the encoder, the decoder, and a fully connected neural network (MLP);
step 302, input a picture sample x to the encoder to obtain a latent variable z; the decoder decodes z into an adversarial sample, while the MLP takes the sub-vector z_a of the latent variable z as input and outputs a prediction a of the feature to be optimized;
step 303, substitute the input sample x, the latent variable z, and the prediction a into the loss function, and compute the loss value for the current adversarial sample;
the loss function loss is calculated as:
L(p, q) = E_{q(z|x)}[log p(x|z)] - KLD(q(z|x) || p(z)) + E[a | z_a]
where E denotes expectation: q(z|x) is the output of the encoder, i.e. the distribution of the latent variables; p(x|z) is the output of the decoder, i.e. the generated adversarial sample; p(z) is the representation of z under the standard normal distribution; KLD is the KL-divergence distance between the latent-variable distribution and the normal distribution; and E[a | z_a] is the expectation of the prediction of the feature to be optimized given the sub-vector z_a;
step 304, back-propagate the loss using stochastic gradient descent, updating the parameters of the encoder, the decoder, and the MLP.
4. The federated-learning-oriented semi-centralized adversarial training method according to claim 1, wherein the central server sends the encoder parameters of the trained VAE to each client, together with the ratio parameter for the number of samples to collect, specifically:
first, the central server generates a homomorphic-encryption public key and private key, encrypts the encoder parameters with the public key, and encrypts a set of task parameters at the same time, comprising: (1) the proportion of training data each client selects locally; (2) the names of the features the server trains preferentially; (3) the number and positions of the latent variables to be randomized;
then the central server sends the private key to each client over a secure channel, sends the encrypted encoder parameters and task parameters to each client over the public network, and waits for the clients to respond with an acknowledgment flag;
on receiving the encoder parameters and task parameters, a client decrypts them with the private key and performs validity verification; if the data passes, it sends an acknowledgment flag to the server; otherwise it sends a retransmission flag;
the validity verification checks: (1) whether the data format is correct; (2) whether the data content lies within the normal interval;
the server waits for client messages and immediately retransmits the parameter information whenever a retransmission flag arrives, until all participants have sent acknowledgments.
5. The federated-learning-oriented semi-centralized adversarial training method according to claim 1, wherein the feature vectors are generated as follows:
first, the client randomly selects 5%-10% of its local data and feeds it to the encoder to obtain the mean and log-variance of the latent variables;
then random sampling is performed via the reparameterization operation, assigning a value to each latent variable;
specifically, a value is drawn at random from the standard normal distribution, multiplied by the standard deviation, and added to the mean to assign each latent variable;
the standard deviation is the square root of the exponential of the log-variance, i.e. std = exp(logvar / 2);
finally, random noise is added to the components of the assigned latent vector that correspond to the priority feature, and the vector is packaged.
6. The federated-learning-oriented semi-centralized adversarial training method according to claim 1, wherein the server fine-tunes the model parameters of the federated learning architecture by the following specific steps:
first, the server collects a fixed number of model-parameter sets, computes a weighted average of them together with the current federated-model parameters, and after the computation decrypts the result using its local private key;
then the server collects a fixed number of feature vectors and decodes them with the local decoder to obtain noise-augmented adversarial samples; it retrains and fine-tunes the model parameters of the federated learning architecture using local data samples together with the adversarial samples, and once the update is complete it publishes the updated parameters, completing the federated process.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202210532196.3A | 2022-05-09 | 2022-05-09 | Semi-centralized adversarial training method for federated learning
Publications (1)

Publication Number | Publication Date
---|---
CN114997423A | 2022-09-02
Family
ID=83027531
Family Applications (1)

Application Number | Status | Priority Date | Filing Date
---|---|---|---
CN202210532196.3A (published as CN114997423A) | Pending | 2022-05-09 | 2022-05-09
Country Status (1)

Country | Link
---|---
CN | CN114997423A
Cited By (2)

Publication Number | Priority Date | Publication Date | Assignee | Title
---|---|---|---|---
CN115577797A | 2022-10-18 | 2023-01-06 | 东南大学 | Local-noise-perception-based federated learning optimization method and system
CN115577797B | 2022-10-18 | 2023-09-26 | 东南大学 | Federated learning optimization method and system based on local noise perception
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination