CN117150566A - Robust training method and device for collaborative learning


Info

Publication number
CN117150566A
Authority
CN
China
Prior art keywords
global
neural network
network model
model
gradient information
Prior art date
Legal status
Granted
Application number
CN202311424634.5A
Other languages
Chinese (zh)
Other versions
CN117150566B
Inventor
徐恪
刘自轩
赵乙
王维强
赵闻飙
金宏
Current Assignee
Ant Technology Group Co ltd
Tsinghua University
Original Assignee
Ant Technology Group Co ltd
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Ant Technology Group Co ltd and Tsinghua University
Priority to CN202311424634.5A
Publication of CN117150566A
Application granted
Publication of CN117150566B
Active legal status (current)
Anticipated expiration legal status

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
      • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
        • G06F 21/60: Protecting data
          • G06F 21/602: Providing cryptographic facilities or services
          • G06F 21/62: Protecting access to data via a platform, e.g. using keys or access control rules
            • G06F 21/6218: Protecting access to data via a platform to a system of files or objects, e.g. local or distributed file system or database
              • G06F 21/6245: Protecting personal data, e.g. for financial or medical purposes
      • G06F 18/00: Pattern recognition
        • G06F 18/20: Analysing
          • G06F 18/24: Classification techniques
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
      • G06N 3/00: Computing arrangements based on biological models
        • G06N 3/02: Neural networks
          • G06N 3/04: Architecture, e.g. interconnection topology
            • G06N 3/044: Recurrent networks, e.g. Hopfield networks
            • G06N 3/045: Combinations of networks
            • G06N 3/0464: Convolutional networks [CNN, ConvNet]
          • G06N 3/08: Learning methods
            • G06N 3/098: Distributed learning, e.g. federated learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Bioethics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Databases & Information Systems (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a robust training method and device for collaborative learning. The method comprises: generating a plurality of first adversarial samples, computing the initial gradient information corresponding to the plurality of first adversarial samples, and adding noise to the initial gradient information to obtain noise-added gradient information; configuring priority weights for the noise-added gradient information with a meta-learning network model, and updating the global model parameters of a global neural network model according to the weight configuration result to obtain updated global model parameters; and updating the local model parameters of the local neural network model based on the updated global model parameters, so as to obtain the global neural network model and the local neural network model trained in the current round from the updated global model parameters and the updated local model parameters. On the basis of guaranteeing client privacy, the invention preserves collaborative learning training efficiency while improving the robustness of the global model maintained by the central server.

Description

Robust training method and device for collaborative learning
Technical Field
The invention belongs to the technical fields of cyberspace security and artificial intelligence security, and particularly relates to a robust training method and device for collaborative learning.
Background
Collaborative learning generally refers to multiple deep learning models or algorithms in the field of artificial intelligence working together, improving learning effectiveness and performance by sharing and exchanging information. Collaborative learning is currently widely applied in cyberspace security, the next-generation Internet, finance, intelligent healthcare, and other fields. Adversarial defense for collaborative learning is a branch of collaborative learning security research; it aims to ensure that the relevant intelligent models remain robust under adversarial attack and to reduce the negative impact of adversarial samples on model performance. Unlike adversarial defense in ordinary deep learning, adversarial defense for collaborative learning must also account for characteristics such as local training, parameter sharing, and privacy protection, improving model robustness while balancing data transmission efficiency, model training efficiency, and data privacy.
A white-box adversarial attack in the field of artificial intelligence means that the attacker has comprehensive knowledge of the target model, including its structure, parameters, training data, optimization algorithm, and other relevant information, and uses this information to generate adversarial samples that cause the model to misjudge or mispredict. Specific attack algorithms include, but are not limited to, the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and adversarial generative networks (AGN). Adversarial samples generated by different attack algorithms have different mechanisms of action and preferred scenarios. Differential privacy refers to adding random noise to a data set so as to protect the privacy of individual records while preserving the availability of the data and the relative invariance of its statistics.
Meta-learning is a method that learns over multiple tasks to extract features common to different tasks, thereby improving performance on new tasks. Meta-learning typically abstracts a general learning strategy from multiple tasks, such as the model hyperparameters or optimal data for a specific task, and then applies this learning strategy to the new task.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems in the related art to some extent.
Therefore, one object of the invention is to provide a robust training method for collaborative learning. A collaborative learning platform (or system) trained with this method preserves collaborative learning training efficiency while guaranteeing client privacy, and improves the robustness of the global model maintained by the central server, so that the damage suffered by the collaborative learning platform (or system) under adversarial attack is minimized.
Another object of the present invention is to propose a robust training device for collaborative learning.
To achieve the above object, one aspect of the present invention provides a robust training method for collaborative learning, the method comprising the following steps:
constructing a global neural network model and a meta-learning network model of a central server and a local neural network model of a client;
generating a plurality of first adversarial samples with an adversarial training module of the client, computing initial gradient information corresponding to the plurality of first adversarial samples, and adding noise to the initial gradient information with a privacy protection module of the client to obtain noise-added gradient information;
sending the noise-added gradient information to a robust aggregation module of the central server, so as to configure priority weights for the noise-added gradient information with the meta-learning network model, and updating global model parameters of the global neural network model according to the weight configuration result to obtain updated global model parameters;
and updating local model parameters of the local neural network model based on the updated global model parameters, so as to obtain the global neural network model and the local neural network model trained in the current round from the updated global model parameters and the updated local model parameters.
In addition, the robust training method for collaborative learning according to the above embodiment of the present invention may further have the following additional technical features:
further, in one embodiment of the present invention, after modeling the central server and the client, the method further comprises:
and initializing the network structure of the local neural network model of each client according to the network structure of the global neural network model of the central server.
Further, in one embodiment of the present invention, generating a plurality of first adversarial samples with the adversarial training module of the client and computing the initial gradient information corresponding to the plurality of first adversarial samples comprises:
constructing a set of adversarial attack algorithms based on a plurality of adversarial attack algorithms, wherein the plurality of adversarial attack algorithms at least comprise a white-box attack algorithm based on projected gradient descent and a black-box attack algorithm based on a generative adversarial network;
generating, with the adversarial training module of the client, a plurality of first adversarial samples from the input original samples by applying any algorithm in the set of adversarial attack algorithms;
and computing the initial gradient information corresponding to the plurality of first adversarial samples with a back-propagation algorithm.
Further, in one embodiment of the present invention, adding noise to the initial gradient information with the privacy protection module of the client to obtain noise-added gradient information comprises:
clipping the initial gradient information to obtain clipped gradient information;
and adding noise to the clipped gradient information with a differential privacy technique to obtain the noise-added gradient information, wherein the noise follows a Laplace distribution with a preset mean and scale.
Further, in one embodiment of the present invention, configuring priority weights for the noise-added gradient information with the meta-learning network model, and updating the global model parameters of the global neural network model according to the weight configuration result to obtain updated global model parameters, comprises:
judging whether the meta-learning network model of the current round has finished training, generating a plurality of second adversarial samples based on the judgment result and the global model parameters, and inputting the plurality of second adversarial samples into the global neural network model to obtain first confidences;
updating parameters based on the noise-added gradient information and the global model parameters to obtain updated first global model parameters, and inputting the plurality of second adversarial samples into the global neural network model to obtain second confidences;
computing the mean of the confidence differences before and after the gradient update from the first confidences and the second confidences, inputting the noise-added gradient information into the meta-learning network model, and using the mean confidence difference as a label to train the meta-learning network model, obtaining a trained meta-learning network model;
and restoring the updated first global model parameters to the original global model parameters, inputting the noise-added gradient information into the trained meta-learning network model, using the normalized model outputs as the weights of the noise-added gradient information, and weighting the restored global model parameters to obtain updated second global model parameters.
Further, in one embodiment of the present invention, the noise-added gradient information and the updated second global model parameters are each transmitted in encrypted form based on the TLS protocol.
Further, in one embodiment of the present invention, updating the local model parameters of the local neural network model based on the updated global model parameters includes:
and replacing the parameters of the local neural network model with the updated second global model parameters to update the local model parameters of the local neural network model.
Further, in one embodiment of the present invention, based on an encryption suite set preset in the TLS protocol, one suite is randomly selected from the encryption suite set for encryption each time the central server and the client transmit data.
Further, in one embodiment of the invention, initializing the global neural network model, the meta-learning network model, and the local neural network model includes initializing their model structures, model parameters, loss functions, and optimization algorithms; the global neural network model and the local neural network model each comprise one of a fully connected network, a convolutional neural network, a recurrent neural network, and a Transformer; the loss functions of the global neural network model and the local neural network model comprise one of a cross-entropy loss function and a mean-squared-error loss function; and the optimization algorithm of the global neural network model, the local neural network model, and the meta-learning network model is the Adam algorithm.
To achieve the above object, another aspect of the present invention provides a robust training apparatus for collaborative learning, the apparatus comprising:
the network model construction module is used for constructing a global neural network model and a meta learning network model of the central server and a local neural network model of the client;
the system comprises a gradient information processing module, a client-side privacy protection module and a client-side contrast training module, wherein the gradient information processing module is used for generating a plurality of first contrast samples by using the client-side contrast training module, calculating initial gradient information corresponding to the first contrast samples, and carrying out noise adding processing on the initial gradient information by using the client-side privacy protection module to obtain noise adding gradient information;
the model parameter updating module is used for sending the noise adding gradient information to a robust aggregation module of a central server, so as to carry out priority weight configuration on the noise adding gradient information by utilizing the meta learning network model, and updating global model parameters of the global neural network model according to a weight configuration result to obtain updated global model parameters;
and the network model training module is used for updating the local model parameters of the local neural network model based on the updated global model parameters so as to obtain the global neural network model and the local neural network model which are trained in the current turn according to the updated global model parameters and the updated local model parameters.
According to the robust training method and device for collaborative learning of the invention, adversarial attack algorithms are first used to generate adversarial samples and obtain the corresponding gradient information; a differential privacy technique is then used to add noise to the gradients before they are sent from the clients to the central server; finally, meta-learning is used to configure update weights for the gradients, achieving efficient optimization of the global model parameters.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of a robust training method for collaborative learning in accordance with an embodiment of the present invention;
FIG. 2 is a logic flow diagram of a robust training method for collaborative learning in accordance with an embodiment of the present invention;
FIG. 3 is an architecture diagram of a collaborative learning platform (or system) according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a robust training apparatus for collaborative learning according to an embodiment of the present invention.
Detailed Description
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other. The invention will be described in detail below with reference to the drawings in connection with embodiments.
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
The following describes a robust training method and device for collaborative learning according to an embodiment of the present invention with reference to the accompanying drawings.
Fig. 1 is a flowchart of a robust training method for collaborative learning according to an embodiment of the present invention. As shown in fig. 1, the method includes:
s1, constructing a global neural network model and a meta learning network model of a central server and a local neural network model of a client;
s2, generating a plurality of first countermeasure samples by using a countermeasure training module of the client, calculating initial gradient information corresponding to the plurality of first countermeasure samples, and carrying out noise adding processing on the initial gradient information by using a privacy protection module of the client to obtain noise adding gradient information;
s3, sending the noise gradient information to a robust aggregation module of the central server, so as to carry out priority weight configuration on the noise gradient information by utilizing the meta-learning network model, and updating global model parameters of the global neural network model according to a weight configuration result to obtain updated global model parameters;
and S4, updating local model parameters of the local neural network model based on the updated global model parameters so as to obtain the global neural network model and the local neural network model which are trained in the current turn according to the updated global model parameters and the updated local model parameters.
The robust training method for collaborative learning according to the embodiment of the invention is described in detail below with reference to the accompanying drawings.
Fig. 2 is a logic flow chart of the present invention, and Fig. 3 is a framework diagram of a collaborative learning platform (or system) of the present invention. As shown in Fig. 3, the method of the present invention combines a multi-module training procedure composed of a client adversarial training module, a client privacy protection module, and a central server robust aggregation module. In the scheme of the invention, a client first activates the adversarial training module, generates different adversarial samples with a series of adversarial attack algorithms of different characteristics, such as Projected Gradient Descent (PGD) and adversarial generative networks (AGN), and feeds the adversarial samples in batches into the neural network to obtain gradient information. It then activates the privacy protection module, adds Laplace-distributed noise with specific parameters to the gradients with a differential privacy algorithm, and sends the noise-added gradient information to the central server. After receiving the noise-added gradient information, the central server activates its adversarial training module and robust aggregation module and trains a meta-learning prediction model that predicts the influence of different gradients on the performance of the global model and configures different update weights for the gradients. Finally, the gradients are weighted by the update weights, and the weighted gradients are used to update the global model parameters.
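To make the round structure concrete, the following Python sketch strings the modules together for one pass over the training rounds. All identifiers here (adversarial_gradient_with_noise, train_meta_network, weighted_update, replace_local_parameters) are illustrative placeholders rather than names from the patent; the details of each call are given in steps S1 to S4 below.

```python
# Orchestration sketch (assumed interface): t is the iteration round, T the number
# of global training rounds, T_meta the number of meta-learning training rounds.
def train(server, clients, T, T_meta):
    for t in range(T):
        # Steps S2-S3: each client builds adversarial samples, computes the gradient,
        # clips it and adds Laplace noise before sending it to the server.
        noisy_grads = [c.adversarial_gradient_with_noise() for c in clients]
        if t < T_meta:
            # Steps S3.5-S3.9: train the meta-learning network on (gradient, label) pairs,
            # where the label is the confidence change caused by applying that gradient.
            server.train_meta_network(noisy_grads)
        # Step S3.10: weight the gradients by the meta-network scores, update the global model.
        new_theta = server.weighted_update(noisy_grads)
        # Step S4: distribute the updated global parameters and overwrite the local models.
        for c in clients:
            c.replace_local_parameters(new_theta)
```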
As shown in Fig. 2, in one embodiment of the present invention, step S1 initializes the collaborative learning environment, which comprises $N$ client nodes connected to the central server. The global neural network of the central server, the local neural network of each client, and the meta-learning network of the central server are initialized. The number of training rounds of the central server's global neural network is initialized to $T$, the number of training rounds of the meta-learning network to $T_{meta}$, and the iteration round counter to $t = 0$. Specifically, step S1 may include:
step (S1.1): it will be appreciated that each client node has a neural network model that is structurally identical to the central server global neural network, has different parameters and is independent of each other. After the global neural network structure of the central server is determined, initializing the neural network structure of each client according to the global neural network structure of the central server.
Step (S1.2): the global neural network of the central server and the local neural network of the client are initialized, including initializing model structures, model parameters, loss functions, optimization algorithms and the like.
Step (S1.3): it will be appreciated that the global neural network of the central server and the local neural network of each client may be any type of neural network, such as fully connected networks, convolutional neural networks, recurrent neural networks, transformers, etc., depending on the particular application scenario and data type. The activation function uses a ReLU. The network model performs parameter initialization using the following random initialization method: the weight of any layer parameter satisfies the mean value of 0 and the variance of 0Is of the Gaussian distribution of>The number of neurons for the upper layer.
Step (S1.4): it will be appreciated that the choice of the loss function for the global neural network of the central server and the local neural network of each client will depend on the particular task scenario. In case of classification problems, the cross entropy loss function is chosen, i.e
For the number of samples of a batch, M is the number of categories, < >>Is a sign function if the sample->The true category equals->Get->Otherwise take->,/>Is predicted as category->Confidence of (2); in case of regression problem, the mean square error loss function is chosen, namely:
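A numpy sketch of the two loss choices above, assuming the classification outputs are already softmax confidences:

```python
import numpy as np

def cross_entropy(p, y_onehot):
    """Classification loss: p is (B, M) predicted confidences, y_onehot is (B, M) one-hot labels."""
    eps = 1e-12                                      # avoid log(0)
    return -np.mean(np.sum(y_onehot * np.log(p + eps), axis=1))

def mse(y_pred, y_true):
    """Regression loss: mean squared error over the batch."""
    return np.mean((y_true - y_pred) ** 2)
```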
step (S1.5): it can be understood that the global neural network of the central server and the local neural network of each client have the optimization algorithm selecting Adam algorithm, and the learning rate is set as followsThe first moment estimation exponent decay rate is set to +.>The second moment estimation exponent attenuation rate is set to +.>
Step (S1.6): it will be appreciated that the central server's neuronal network selects a fully connected output network, comprising several linear layers. The activation function uses a ReLU. The following random initialization method is used for parameter of the neuronal neural networkInitializing: the weight of any layer parameter satisfies the mean value of 0 and the variance of 0Is of the Gaussian distribution of>The number of neurons for the upper layer. The neuronal neural network uses Dropout, and the Dropout probability is set to +.>
Step (S1.7): it will be appreciated that the input of the central server's neuronal neural network is the noise gradient of a client node and the output is of lengthThe output is processed to obtain the weight of the input gradient. Thus, the loss function selects the mean square error loss function, namely:
step (S1.8): it can be understood that the optimization algorithm of the central server is Adam algorithm, and the learning rate is set as followsThe first moment estimation exponent decay rate is set to +.>The second moment estimation exponent attenuation rate is set to +.>
As shown in Fig. 2, in one embodiment of the present invention, in step S2 each client node activates its adversarial training module, generates adversarial samples with a series of adversarial attack algorithms, and computes the corresponding gradient information with the back-propagation algorithm. Specifically, step S2 may include:
step (S2.1): defining a set of challenge algorithms,/>Different challenge algorithms are included, such as Projected Gradient Descent (PGD), adversarial Generative Network (AGN), etc. To ensure diversity of challenge sample generation, the challenge training module of each client can use +.>Which generates challenge samples that are a plurality of first challenge samples of embodiments of the present invention. Definitions->For the original sample, ++>For its corresponding challenge sample.
Step (S2.2): projected Gradient Descent (PGD) algorithm is a white-box challenge algorithm based on projection gradient descent. In each iteration, the PGD algorithm finds the loss function versus the input samplesAnd generating new samples along the gradient direction. PGD algorithms can be generally expressed as:
wherein,indicate->Challenge sample after step iteration, ++>Representation pair->Gradient determination->Representing gradient clipping.
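A minimal PyTorch sketch of this PGD update under an L-infinity budget; the step size, number of iterations, epsilon ball, and valid input range [0, 1] are illustrative assumptions, since the patent leaves these hyperparameters symbolic.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Iteratively step along the sign of the input gradient and project back into the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()              # move along the gradient direction
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)    # projection (clipping) step
            x_adv = torch.clamp(x_adv, 0.0, 1.0)             # keep samples in the valid range
    return x_adv.detach()
```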
Step (S2.3): adversarial Generative Network (AGN) is a black box challenge algorithm based on generating a challenge network. AGN uses a generating network to generate a challenge sample, the input of the AGN is random noise, and the optimization goal is to minimize the distance between the generated challenge sample and the original sample and maximize the probability that the generated challenge sample makes the judging network misjudge; a discrimination network is used to classify the challenge sample and the normal sample, with the objective of minimizing the cross entropy loss function of the normal sample/challenge sample classification problem.
Step (S2.4): generating challenge samplesThen, the corresponding gradient is calculated by using the back propagation algorithm>. The back propagation algorithm first calculates the gradient of the last layer of the model, and then propagates forward layer by layer according to the chain law, calculating the gradient of each layer.
As shown in Fig. 2, in one embodiment of the present invention, in step S3 the client node activates its privacy protection module, adds noise to the gradient information with a differential privacy technique, and then sends the processed gradient information to the central server. Specifically, step S3 may include:
step (S3.1): the gradient information is subjected to noise processing by utilizing a differential privacy technology, and the gradient obtained in the step (S2) is firstly subjected toCutting outMake it->Norms less than threshold->. Let->Indicate->First->Gradient obtained by the client model, gradient after clipping +.>The method comprises the following steps:
at this time, the gradientSensitivity of->
Step (S3.2): adding noise to the clipped gradientNoise->Should obey the mean value +.>The scale is->Laplacian distribution, wherein->For sensitivity->For privacy budgets. Gradient after noise addition->The method comprises the following steps:
step (S3.3): gradient to be noisy by client nodeAnd sending the encrypted data to a central server, and using a TLS protocol to encrypt and transmit the data. The TLS protocol version may be 1.3. Define the client encryption suite set as +.>WhereinKits such as tls_aes_256_gcm_sha384 are included. The client terminal handshakes with the central server every timeA cipher suite is randomly selected.
Further, the central server receives the noise-added gradient information from the client nodes, activates the robust aggregation module, configures priority weights for the gradients with the meta-learning network, and updates the global model parameters accordingly. Specifically, this may include:
step (S3.4): the priority weights are configured for the gradients by using the neuronal network, and whether the neuronal network is trained at the moment needs to be judged first. If the number of iteration turns is at this timeThe training of the neuronal network is not completed, and the step (S3.5) is carried out; if->The neuronal network has completed training and proceeds to step (S3.10).
Step (S3.5): recording parameters of global neural network asThe central server activates the challenge training module, generates +.f. using the white-box challenge algorithm>A challenge sample, the->The challenge samples are a plurality of second challenge samples according to the embodiment of the present invention, and the specific method is shown in step (S2.1). And sending the generated countermeasure sample into a global neural network to obtain confidence output. Each challenge sample->Corresponding to an output->
Step (S3.6): usingThe individual clients provide a noise adding gradient +.>The global neural network is updated. The updated neural network parameters (the updated first global model parameters for the present invention) are +.>,/>And->The following relationship is satisfied:
step (S3.7): sending the countermeasure sample generated in the step (S3.5) into a global neural network to obtain a confidence output. At this time, each challenge sample +.>Corresponding to an output->
Step (S3.8): calculating the mean value of the confidence difference before and after gradient updateThe calculation mode is as follows:
in fact it can be seen that a gradient is used for the global neural network>The measure of robustness change before and after updating. />The larger, i.e. the greater the robustness of the global neural network is promoted, the +.>The greater the impact on the robustness of the global neural network. Therefore, should be->Giving a larger update weight. Gradient->Send into YuanshenThrough network, willAs a label for the neuronal neural network, the neuronal neural network is trained by minimizing the mean square error loss function.
Step (S3.9): returning to step (S3.6) until useThe gradient of each client trains the neuronal neural network. After training, the parameters of the global neural network are restored to +.>
Step (S3.10): will beNoise gradient->Sending into a neural network to obtain an output. Use->As->And (3) updating the global neural network after weighted normalization, and outputting to obtain the updated second global model parameters. The specific updating mode is as follows:
as shown in fig. 2, in one embodiment of the present invention, step S4: the central server uses the parameters of the global neural networkTo all clients. The client updates parameters of the local neural network. May include:
step (S4.1): the central server uses the parameters of the global neural networkAnd distributing the updated second global model parameters to all clients, and encrypting and transmitting the data by using a TLS protocol. The TLS protocol version may be 1.3. Define the central server encryption suite set as +.>Wherein->Kits such as tls_aes_256_gcm_sha384 are included. The central server handshakes with the client every time from +.>A set is randomly selected for encryption.
Step (S4.2): the client updates the parameters of the local neural network by using the parameters of the global neural networkReplacing parameters of the local neural network.
Further, the iteration round counter is incremented: $t \leftarrow t + 1$. Return to step S2 until $t = T$, at which point the training of the global neural network and the client neural networks is completed.
According to the robust training method for collaborative learning of the embodiments of the present invention, suitable adversarial samples of various kinds are found through adversarial training; gradient noise processing based on differential privacy protects the privacy of client node data and prevents the leakage of sensitive information; and gradient weight configuration based on the meta-learning network makes the update of the global model more refined and better optimized, so that the model maintains excellent performance in the face of potential adversarial attacks.
In order to implement the above embodiment, as shown in fig. 4, a robust training apparatus 10 for collaborative learning is further provided in this embodiment, where the apparatus 10 includes a network model building module 100, a gradient information processing module 200, a model parameter updating module 300, and a network model training module 400.
The network model construction module 100 is configured to construct a global neural network model and a meta learning network model of the central server, and a local neural network model of the client;
the gradient information processing module 200 is configured to generate a plurality of first challenge samples by using a challenge training module of the client, calculate initial gradient information corresponding to the plurality of first challenge samples, and perform noise adding processing on the initial gradient information by using a privacy protection module of the client to obtain noise added gradient information;
the model parameter updating module 300 is configured to send the noise-adding gradient information to a robust aggregation module of the central server, so as to perform priority weight configuration on the noise-adding gradient information by using the meta-learning network model, and update global model parameters of the global neural network model according to a weight configuration result to obtain updated global model parameters;
the network model training module 400 is configured to update local model parameters of the local neural network model based on the updated global model parameters, so as to obtain the global neural network model and the local neural network model trained in the current round according to the updated global model parameters and the updated local model parameters.
According to the robust training device for collaborative learning of the embodiments of the present invention, suitable adversarial samples of various kinds are found through adversarial training; gradient noise processing based on differential privacy protects the privacy of client node data and prevents the leakage of sensitive information; and gradient weight configuration based on the meta-learning network makes the update of the global model more refined and better optimized, so that the model maintains excellent performance in the face of potential adversarial attacks.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.

Claims (10)

1. A robust training method for collaborative learning, the method comprising the steps of:
constructing a global neural network model and a meta learning network model of a central server and a local neural network model of a client;
generating a plurality of first adversarial samples with an adversarial training module of the client, computing initial gradient information corresponding to the plurality of first adversarial samples, and adding noise to the initial gradient information with a privacy protection module of the client to obtain noise-added gradient information;
sending the noise-added gradient information to a robust aggregation module of the central server, so as to configure priority weights for the noise-added gradient information with the meta-learning network model, and updating global model parameters of the global neural network model according to the weight configuration result to obtain updated global model parameters;
and updating local model parameters of the local neural network model based on the updated global model parameters, so as to obtain the global neural network model and the local neural network model trained in the current round from the updated global model parameters and the updated local model parameters.
2. The method of claim 1, wherein after constructing the models of the central server and the client, the method further comprises:
and initializing the network structure of the local neural network model of each client according to the network structure of the global neural network model of the central server.
3. The method of claim 2, wherein generating a plurality of first adversarial samples and computing initial gradient information corresponding to the plurality of first adversarial samples with an adversarial training module of a client comprises:
constructing a set of adversarial attack algorithms based on a plurality of adversarial attack algorithms, wherein the plurality of adversarial attack algorithms at least comprise a white-box attack algorithm based on projected gradient descent and a black-box attack algorithm based on a generative adversarial network;
generating, with the adversarial training module of the client, a plurality of first adversarial samples from the input original samples by applying any algorithm in the set of adversarial attack algorithms;
and computing the initial gradient information corresponding to the plurality of first adversarial samples with a back-propagation algorithm.
4. The method of claim 3, wherein adding noise to the initial gradient information with a privacy protection module of the client to obtain noise-added gradient information comprises:
clipping the initial gradient information to obtain clipped gradient information;
and adding noise to the clipped gradient information with a differential privacy technique to obtain the noise-added gradient information, wherein the noise follows a Laplace distribution with a preset mean and scale.
5. The method of claim 4, wherein configuring priority weights for the noise-added gradient information with the meta-learning network model, and updating the global model parameters of the global neural network model according to the weight configuration result to obtain updated global model parameters, comprises:
judging whether the meta-learning network model of the current round has finished training, generating a plurality of second adversarial samples based on the judgment result and the global model parameters, and inputting the plurality of second adversarial samples into the global neural network model to obtain first confidences;
updating parameters based on the noise-added gradient information and the global model parameters to obtain updated first global model parameters, and inputting the plurality of second adversarial samples into the global neural network model to obtain second confidences;
computing the mean of the confidence differences before and after the gradient update from the first confidences and the second confidences, inputting the noise-added gradient information into the meta-learning network model, and using the mean confidence difference as a label to train the meta-learning network model, obtaining a trained meta-learning network model;
and restoring the updated first global model parameters to the original global model parameters, inputting the noise-added gradient information into the trained meta-learning network model, using the normalized model outputs as the weights of the noise-added gradient information, and weighting the restored global model parameters to obtain updated second global model parameters.
6. The method of claim 5, wherein the noisy gradient information and the updated second global model parameters are transmitted encrypted based on TLS protocol, respectively.
7. The method of claim 6, wherein updating local model parameters of the local neural network model based on the updated global model parameters comprises:
and replacing the parameters of the local neural network model with the updated second global model parameters to update the local model parameters of the local neural network model.
8. The method of claim 7, wherein, based on a set of encryption suites preset in the TLS protocol, one suite is randomly selected from the set of encryption suites for encryption each time the central server performs data transmission with the client.
9. The method of claim 2, wherein initializing the global neural network model, the meta-learning network model, and the local neural network model comprises initializing their model structures, model parameters, loss functions, and optimization algorithms; the global neural network model and the local neural network model each comprise one of a fully connected network, a convolutional neural network, a recurrent neural network, and a Transformer; the loss functions of the global neural network model and the local neural network model comprise one of a cross-entropy loss function and a mean-squared-error loss function; and the optimization algorithm of the global neural network model, the local neural network model, and the meta-learning network model is the Adam algorithm.
10. A collaborative learning-oriented robust training apparatus, comprising:
the network model construction module is used for constructing a global neural network model and a meta learning network model of the central server and a local neural network model of the client;
the gradient information processing module is used for generating a plurality of first adversarial samples with the adversarial training module of the client, computing initial gradient information corresponding to the first adversarial samples, and adding noise to the initial gradient information with the privacy protection module of the client to obtain noise-added gradient information;
the model parameter updating module is used for sending the noise-added gradient information to the robust aggregation module of the central server, configuring priority weights for the noise-added gradient information with the meta-learning network model, and updating the global model parameters of the global neural network model according to the weight configuration result to obtain updated global model parameters;
and the network model training module is used for updating the local model parameters of the local neural network model based on the updated global model parameters, so as to obtain the global neural network model and the local neural network model trained in the current round from the updated global model parameters and the updated local model parameters.
CN202311424634.5A 2023-10-31 2023-10-31 Robust training method and device for collaborative learning Active CN117150566B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311424634.5A CN117150566B (en) 2023-10-31 2023-10-31 Robust training method and device for collaborative learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311424634.5A CN117150566B (en) 2023-10-31 2023-10-31 Robust training method and device for collaborative learning

Publications (2)

Publication Number Publication Date
CN117150566A true CN117150566A (en) 2023-12-01
CN117150566B CN117150566B (en) 2024-01-23

Family

ID=88906558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311424634.5A Active CN117150566B (en) 2023-10-31 2023-10-31 Robust training method and device for collaborative learning

Country Status (1)

Country Link
CN (1) CN117150566B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200234110A1 (en) * 2019-01-22 2020-07-23 Adobe Inc. Generating trained neural networks with increased robustness against adversarial attacks
CN114970886A (en) * 2022-07-18 2022-08-30 清华大学 Clustering-based adaptive robust collaborative learning method and device
CN115481431A (en) * 2022-08-31 2022-12-16 南京邮电大学 Dual-disturbance-based privacy protection method for federated learning counterreasoning attack
CN115883053A (en) * 2022-11-03 2023-03-31 支付宝(杭州)信息技术有限公司 Model training method and device based on federated machine learning
US20230308465A1 (en) * 2023-04-12 2023-09-28 Roobaea Alroobaea System and method for dnn-based cyber-security using federated learning-based generative adversarial network

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117539449A (en) * 2024-01-09 2024-02-09 清华大学 Efficient and flexible collaborative learning framework and method
CN117539449B (en) * 2024-01-09 2024-03-29 清华大学 Efficient and flexible collaborative learning framework and method

Also Published As

Publication number Publication date
CN117150566B (en) 2024-01-23

Similar Documents

Publication Publication Date Title
US11651292B2 (en) Methods and apparatuses for defense against adversarial attacks on federated learning systems
CN110782044A (en) Method and device for multi-party joint training of neural network of graph
CN117150566B (en) Robust training method and device for collaborative learning
CN113298268B (en) Vertical federal learning method and device based on anti-noise injection
Wang et al. Adversarial attacks and defenses in machine learning-empowered communication systems and networks: A contemporary survey
Olowononi et al. Federated learning with differential privacy for resilient vehicular cyber physical systems
WO2021106077A1 (en) Update method for neural network, terminal device, calculation device, and program
WO2017089443A1 (en) System and method for aiding decision
US20220318412A1 (en) Privacy-aware pruning in machine learning
CN113841157A (en) Training a safer neural network by using local linearity regularization
Liang et al. Co-maintained database based on blockchain for idss: A lifetime learning framework
CN114363043A (en) Asynchronous federated learning method based on verifiable aggregation and differential privacy in peer-to-peer network
Trejo et al. Adapting attackers and defenders patrolling strategies: A reinforcement learning approach for Stackelberg security games
CN110874638B (en) Behavior analysis-oriented meta-knowledge federation method, device, electronic equipment and system
CN116708009A (en) Network intrusion detection method based on federal learning
CN115481441A (en) Difference privacy protection method and device for federal learning
Wang et al. Scalable Game-Focused Learning of Adversary Models: Data-to-Decisions in Network Security Games.
Wang et al. Optimal DoS attack strategy for cyber-physical systems: A Stackelberg game-theoretical approach
Al-Maslamani et al. Toward secure federated learning for iot using drl-enabled reputation mechanism
CN113810385B (en) Network malicious flow detection and defense method for self-adaptive interference
Kim et al. Performance impact of differential privacy on federated learning in vehicular networks
CN115174173A (en) Global security game decision method of industrial information physical system in cloud environment
Bedoui et al. A deep neural network-based interference mitigation for MIMO-FBMC/OQAM systems
Viksnin et al. A Game Theory approach for communication security and safety assurance in cyber-physical systems with Reputation and Trust-based mechanisms
CN114912146B (en) Data information defense method and system under vertical federal architecture, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant