CN116187469A - Client membership inference attack method based on a federated distillation learning framework - Google Patents

Client membership inference attack method based on a federated distillation learning framework

Info

Publication number
CN116187469A
CN116187469A (application CN202310062980.7A)
Authority
CN
China
Prior art keywords
client
model
attack
federal
distillation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310062980.7A
Other languages
Chinese (zh)
Inventor
赵彦超 (Zhao Yanchao)
杨子路 (Yang Zilu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202310062980.7A
Publication of CN116187469A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/002: Countermeasures against attacks on cryptographic mechanisms
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Information Transfer Between Computers (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a client membership inference attack method based on a federated distillation learning framework, comprising: (1) designing a privacy protection scheme for federated learning with a knowledge-distillation communication enhancement algorithm; a high-performance, high-stability federated learning model and a corresponding privacy-leakage evaluation scheme are realized through gradient-to-logit parameter conversion, client-side knowledge distillation, and a random client model uploading mechanism, in order to meet the federated learning framework's requirements for protecting the participants' private data; (2) designing a client-based membership inference attack against this knowledge-distillation-enhanced, high-privacy federated learning framework, using the theory that models trained by knowledge distillation exhibit similar behavior to construct a shadow-like model at a malicious client. The invention combines a federated distillation communication algorithm with strong privacy protection and a malicious-client membership inference attack scheme to realize an attack on a privacy-preserving distributed machine learning framework.

Description

Client membership inference attack method based on a federated distillation learning framework
Technical Field
The invention belongs to the field of artificial intelligence security, and particularly relates to a client membership inference attack method based on a federated distillation learning framework.
Background
With the growth of mobile devices' sensing and computing capabilities, emerging technologies and applications generate large amounts of data in edge networks, which places new demands on privacy and security. Federated learning marks an advance in privacy protection for intelligent Internet-of-Things applications; yet although federated learning provides privacy guarantees for participants' local training data, the machine learning models inside the federated learning framework remain highly vulnerable to membership inference attacks.
A main reason membership inference attacks succeed against the federated learning framework is that gradient parameters memorize information about their training data, leaving them vulnerable to a variety of inference attacks. The gradients transmitted between server and client are therefore distinguishable between members and non-members of the training dataset, and research built on this observation reveals an inherent vulnerability of federated learning: an attacker can recover private training data from publicly shared gradients.
While knowledge distillation is considered a state-of-the-art technique for addressing gradient leakage by hiding gradients, federated learning frameworks based on knowledge distillation still carry the risk of client-level membership information leakage. Federated learning also suffers from other privacy leakage problems: Ligeng Zhu demonstrated that private training data can be obtained from publicly shared gradients, with a malicious attacker reconstructing a participant's local private data from its gradients, which poses considerable challenges for privacy protection in federated learning.
Disclosure of Invention
The invention aims to provide a client membership inference attack method based on a federated distillation learning framework that solves the problem of the complexity of gradient-leakage evaluation in the federated learning framework.
Technical solution: the invention discloses a client membership inference attack method based on a federated distillation learning framework, which comprises the following steps:
(1) On the basis of the federated learning framework, adopt knowledge distillation and random client selection techniques to reduce the communication overhead and privacy leakage of federated learning;
(2) Based on the enhanced federated learning knowledge distillation framework, analyze the degree of privacy leakage of the participants;
(3) Establish a client-level membership inference attack model targeting the leakage of local-model membership information in the federated learning knowledge distillation framework;
(4) Establish a complete membership privacy-leakage evaluation model for federated distillation learning based on the client-level membership inference attack model.
Further, step (1) is implemented as follows:
First, the local client models and client private datasets participating in federated learning are set up; the participants' local models are heterogeneous, differing from one another in internal structure and details as well as in their local private datasets. The federated communication algorithm then selects knowledge distillation as the optimization training algorithm for the client models, replacing the gradient-parameter training algorithm that suffers from gradient leakage: model gradient parameters are converted into logit-layer parameters, and local client model optimization switches from gradient optimization to logit-layer optimization, yielding a federated learning framework with higher model performance and lower communication overhead. A minimal sketch of the resulting logit-layer upload follows.
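The following sketch illustrates the logit-layer upload implied by this step, assuming a PyTorch classifier; the helper name `client_upload` and the `public_loader` object are illustrative assumptions, not part of the disclosed method:

```python
import torch
import torch.nn.functional as F

def client_upload(model: torch.nn.Module, public_loader) -> torch.Tensor:
    """Return softmax-normalized logit-layer outputs on the public dataset.

    Under the logit-based protocol only these class-score vectors leave
    the client; raw gradients, which leak training data, never do.
    """
    model.eval()
    outputs = []
    with torch.no_grad():
        for x, _ in public_loader:           # public-set labels are not uploaded
            outputs.append(F.softmax(model(x), dim=1))
    return torch.cat(outputs)                # shape: [N_public, num_classes]
```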
Further, the enhanced federated learning knowledge distillation framework of step (2) is:
The federated learning knowledge distillation framework has m participants, and each participant k holds a local private dataset, independently and identically distributed or not, $D_k = \{(x_i^k, y_i^k)\}_{i=1}^{N_k}$, where the superscript k indexes a client's private data, $N_k$ is the number of records in the k-th client's dataset, i indexes the i-th data record, x is the data sample of a record, and y is its label. A public training set $D_{public} = \{(x_i, y_i)\}_{i=1}^{N_0}$ is stored on the server side, where $D_{public}$ denotes the common training set that the central server provides to all clients. Each client participant independently designs its own model $f_k$ to carry out its own data prediction and classification task. Under the knowledge-distillation-enhanced communication protocol, model parameters are no longer shared between client participants; instead each model shares the logit-layer outputs corresponding to the standard training set $D_{public}$.
First, each client uses transfer learning to pre-train its private model on the public training set $D_{public}$ and then performs revision training on its local private dataset $D_k$. In each round of federated distillation communication, each client computes the logit vector $\ell_k = f_k(D_{public})$ on the public dataset $D_{public}$ collected and distributed by the server and uploads these logit-layer outputs to the server. After the central server receives the logit-layer outputs uploaded by K clients, the server computes a weighted average of all received logit parameters to generate the average logit vector $\bar{\ell} = \sum_{k=1}^{K} w_k \ell_k$ and issues it to every client. Finally, each client applies knowledge distillation, based on the received $\bar{\ell}$, to perform local private model distillation training until the private models $f_k$ of all clients converge. A sketch of the server-side weighted aggregation follows.
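The server-side weighted averaging can be sketched as follows (an illustrative assumption: every client reports logits for the same public records in the same order, and uniform weights are used when none are given):

```python
import torch

def aggregate_logits(client_logits, weights=None) -> torch.Tensor:
    """Weighted average of K clients' logit matrices of shape [N_public, C]."""
    K = len(client_logits)
    if weights is None:
        weights = [1.0 / K] * K              # plain average as the default
    stacked = torch.stack(client_logits)     # [K, N_public, C]
    w = torch.tensor(weights).view(-1, 1, 1)
    return (w * stacked).sum(dim=0)          # average logit vector for clients
```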
Further, the implementation process of step (2) is as follows:
Membership inference attacks and reconstruction attacks are designed against the federated distillation learning framework to evaluate its degree of privacy leakage. The membership inference privacy evaluation adopts the NSH (Nasr-Shokri-Houmansadr) gradient membership inference attack technique for federated learning invented by Shokri et al.: specifically, a membership inference attack binary classifier is trained locally at a malicious client; the malicious client actively influences the aggregation-averaging process of federated distillation learning by altering the logit-layer parameters it uploads to the central server; finally, the malicious client takes the average logit-layer parameters issued by the central server as the training set of the attack binary classifier and trains the membership inference model on the differences between the dataset's member and non-member information, realizing the membership inference privacy evaluation. The reconstruction attack evaluation adopts a deep privacy-leakage evaluation scheme that reconstructs data from the average logit-layer parameters issued by the central server, realizing the reconstruction-attack privacy evaluation.
Further, the implementation process of step (3) is as follows:
The malicious client locally combines a dataset that participated in model optimization training with a dataset that did not, forming a new training dataset used to train the malicious client's membership inference classifier. The malicious client then feeds this training dataset into its local model to obtain the model's confidence score vector outputs; it assembles the confidence score vectors and the original data into new attack-classifier training samples, labels them according to each sample's member or non-member status, and trains the membership inference attack binary classifier. The finally trained attack binary classifier can carry out the client-level membership inference attack task, realizing the malicious-client membership inference attack, as sketched below.
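A sketch of this attack-set construction, assuming PyTorch data loaders for the member (trained-on) and non-member data; concatenating the confidence vector with a one-hot encoding of the true label is one possible sample encoding, assumed here for illustration:

```python
import torch
import torch.nn.functional as F

def build_attack_dataset(model, member_loader, nonmember_loader):
    """Label each (confidence vector, true label) pair 1 ('in') or 0 ('out')."""
    feats, labels = [], []
    model.eval()
    with torch.no_grad():
        for loader, lab in ((member_loader, 1), (nonmember_loader, 0)):
            for x, y in loader:
                conf = F.softmax(model(x), dim=1)       # confidence score vectors
                onehot = F.one_hot(y, conf.shape[1]).float()
                feats.append(torch.cat([conf, onehot], dim=1))
                labels.append(torch.full((x.shape[0],), lab))
    return torch.cat(feats), torch.cat(labels)           # classifier inputs/targets
```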
Further, the implementation process of step (4) is as follows:
Based on the client-level membership inference attack technique, the invention evaluates the attack's performance on the MNIST, EMNIST, CIFAR-10, and CIFAR-100 datasets; the network structure of the membership inference attack binary classifier of step (3) is optimized by adopting a better-performing random forest algorithm together with an adversarial-example attack method; finally, a complete federated distillation learning privacy evaluation model is established to analyze the attack accuracy of membership inference attacks launched by a malicious client. An evaluation sketch using a random-forest attack classifier follows.
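A sketch of how the random-forest variant of the attack classifier might be scored (an assumed setup using scikit-learn; features are the attack samples built as above, and the metric is plain attack accuracy):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def evaluate_attack(train_X, train_y, test_X, test_y) -> float:
    """Fit a random-forest membership classifier and report attack accuracy."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(train_X, train_y)        # rows: confidence vector (+ label encoding)
    return accuracy_score(test_y, clf.predict(test_X))
```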
Beneficial effects: compared with the prior art, the invention offers the following. To realize the enhanced federated learning communication protocol, the federated learning process is built on a high-performance, high-stability model-distillation communication enhancement algorithm, with a corresponding privacy-leakage evaluation scheme. To satisfy the federated learning framework's requirements on the degree of privacy leakage of participants' private data, a highly available, highly stable, high-performance federated distillation learning framework is realized through gradient-to-logit parameter conversion, client-side knowledge distillation, and a random client model uploading mechanism, again with a corresponding privacy-leakage evaluation scheme. Against this distillation-enhanced, high-privacy federated learning framework, the invention designs a client-based membership inference attack and uses the model-behavior-similarity theory of knowledge distillation to construct a shadow-like model at the malicious client, thereby realizing membership inference on the private data of federated distillation learning participants and demonstrating that the federated-distillation client-level membership inference attack technique adopted by the invention is effective.
Drawings
FIG. 1 is a diagram of the federated learning architecture based on knowledge distillation;
FIG. 2 is a diagram of the membership inference attack architecture for the federated distillation framework;
FIG. 3 is a flow chart of the end-to-end membership inference attack;
FIG. 4 is a training diagram of the client-based membership inference attack model;
FIG. 5 is a graph of the effect of client-based membership inference attacks.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings.
The invention provides a client membership inference attack method based on a federated distillation learning framework, mainly solving the gradient-leakage evaluation problem in the federated learning framework. Specifically, the method uses knowledge distillation to convert gradient parameters into logit-layer parameters and strengthens the federated learning framework with a random client selection method, forming a federated distillation learning framework together with a reasonably designed privacy-leakage evaluation scheme; finally, a client-level membership inference privacy attack is realized by exploiting the similarity between student and teacher prediction behaviors under machine learning model knowledge distillation.
The invention consists of two main parts: implementing a privacy-leakage evaluation model for the knowledge-distillation-based federated learning framework, and implementing a client membership inference attack based on the federated distillation learning framework.
(I) Implementation of the privacy-leakage evaluation model of the knowledge-distillation-based federated learning framework.
The invention changes the traditional federated learning communication mode based on model gradient parameters: knowledge distillation converts the gradient parameters shared between the central server and the clients into logit-layer parameters, reducing federated learning communication overhead and enhancing the performance of participants' local private models; in addition, a random client selection rule is set at the central server to handle unresponsive (stateless) clients and improve the federated training rate.
The federated distillation learning framework consists of a central server and multiple clients. The central server only needs to collect and distribute the public dataset and aggregate-average the logit-layer parameters; no global model training is needed at the server. In each training round, each client first completes pre-training of its local model on the public training set issued by the central server and then performs personalized revision training on its local private dataset. After personalized training, each client runs prediction on the public training set and uploads its logit-layer outputs to the central server; the central server aggregates and averages the collected logit parameters and issues the averaged logits to every participant; each participant then performs knowledge distillation against these logits to finally optimize its model's performance.
While the central server waits for clients to upload logit parameters, the problem of stateless clients arises: client-side service delays or downtime can prevent the central server from collecting logit parameters in time. To solve this, the invention sets a random client selection rule at the central server: in each communication round, some clients are randomly designated as backup clients; if a client remains unresponsive for a long time, the central server activates a backup client and discards the stateless one, thereby reducing data upload time and accelerating model aggregation. A minimal sketch of this selection rule follows.
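A minimal sketch of the backup-client rule, with assumed names (`fetch_logits` stands in for whatever RPC the deployment uses and is assumed to return None on timeout or downtime):

```python
import random

def select_clients(all_clients, n_active, n_backup, seed=None):
    """Randomly pick the active clients plus a pool of backups for this round."""
    rng = random.Random(seed)
    chosen = rng.sample(all_clients, n_active + n_backup)
    return chosen[:n_active], chosen[n_active:]

def collect_with_backups(active, backups, fetch_logits, timeout_s=30.0):
    """Gather logit uploads; promote a backup whenever a client is unresponsive."""
    results, spares = {}, list(backups)
    for c in active:
        out = fetch_logits(c, timeout_s)
        while out is None and spares:        # stateless client: discard, promote
            c = spares.pop()
            out = fetch_logits(c, timeout_s)
        if out is not None:
            results[c] = out
    return results
```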
First, the invention strengthens the communication protocol with knowledge distillation to guarantee the stability and security of the federated learning framework. The technique is essentially online knowledge distillation: in the federated distillation flow designed by the invention, each client is treated as an individual student model, while the aggregated and averaged logit-layer outputs of all participants serve as the teacher's knowledge. Each client's logit-layer output is a set of logit values normalized by the softmax function, whose dimension is determined by the label dimension. The invention periodically measures the output difference between teacher and student with cross entropy to generate a student loss regularizer, the distillation regularizer, ensuring that each student model acquires the knowledge of the other participants during distributed model training. Against gradient privacy leakage in federated learning, the invention converts the data exchanged between the central server and the clients from gradients into the model's logit-layer parameters; this gradient-to-logit conversion gives the federated learning framework high stability and security. For the degree of privacy leakage of the federated learning framework, the invention designs a concrete privacy-leakage evaluation scheme whose essence is a reconstruction attack that steals and recovers participants' local private datasets, as shown in FIG. 1. A sketch of the distillation regularizer follows.
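The distillation regularizer can be sketched as the cross entropy between the aggregated teacher distribution and the student's prediction on the public set (an assumed but standard form of soft-target cross entropy):

```python
import torch
import torch.nn.functional as F

def distillation_regularizer(student_logits: torch.Tensor,
                             teacher_probs: torch.Tensor) -> torch.Tensor:
    """Cross entropy H(teacher, student), averaged over the public batch."""
    log_student = F.log_softmax(student_logits, dim=1)
    return -(teacher_probs * log_student).sum(dim=1).mean()
```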
Then, as shown in the enhanced federated distillation learning framework diagram of FIG. 1, the invention sets the basic data and configuration of federated distillation. The designed federated distillation learning has m participants, and each participant k holds a local private dataset, independently and identically distributed or not, $D_k = \{(x_i^k, y_i^k)\}_{i=1}^{N_k}$, where the superscript k indexes a client's private data, $N_k$ is the number of records in the k-th client's dataset, i indexes the i-th data record, x is the data sample of a record, and y is its label. A public training set $D_{public} = \{(x_i, y_i)\}_{i=1}^{N_0}$ is stored on the server side, where $D_{public}$ denotes the common training set available to all clients. Each client participant independently designs its own model $f_k$ to implement its own data classification task. Furthermore, under the knowledge-distillation-enhanced communication protocol, model parameters are no longer shared between client participants; instead each model shares the logit-layer outputs corresponding to the standard training set $D_{public}$.
The overall federated distillation procedure adopted by the invention is as follows. First, each client uses transfer learning to pre-train its private model on the public training set $D_{public}$ and then trains on its own private dataset $D_k$. In each round of federated distillation communication, each client computes the logit vector $\ell_k = f_k(D_{public})$ on the public dataset collected and distributed by the server and uploads this model output to the server. After receiving the logit-layer outputs uploaded by a certain number K of clients, the server aggregates and averages all corresponding logits to generate $\bar{\ell} = \frac{1}{K}\sum_{k=1}^{K}\ell_k$ and sends it to each client. Finally, each client trains its model by knowledge distillation on the received $\bar{\ell}$ until the private models $f_k$ of all clients converge; in the end, every federated distillation participant's model improves the security of its local private dataset, and the degree of privacy leakage of the local private datasets can be evaluated. On this basis, the invention realizes a highly stable and secure federated learning communication framework and a highly available federated learning privacy-leakage evaluation model. An end-to-end sketch of one client-side distillation update follows.
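An end-to-end sketch of one client-side distillation update under the assumptions above; the weighting factor `alpha` between the task loss and the distillation loss is an illustrative assumption, not fixed by the disclosure:

```python
import torch
import torch.nn.functional as F

def client_distill_step(model, optimizer, public_batch, avg_probs, task_batch,
                        alpha: float = 0.5) -> float:
    """One step: local task loss plus distillation toward the averaged logits."""
    model.train()
    x_pub, _ = public_batch                  # public samples the server distributed
    x, y = task_batch                        # a batch from the local private set
    task_loss = F.cross_entropy(model(x), y)
    log_student = F.log_softmax(model(x_pub), dim=1)
    distill_loss = -(avg_probs * log_student).sum(dim=1).mean()
    loss = task_loss + alpha * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```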
(II) Implementation of the client membership inference attack based on the federated distillation learning framework.
Based on the knowledge-distillation-enhanced communication protocol, the federated learning framework of the invention incorporates a carefully designed client-level membership inference technique, as shown in FIG. 2.
Targeting the shortcoming of knowledge distillation in solving the gradient-leakage problem of federated learning, the invention designs a malicious attack classifier to infer member and non-member information about the private data of the other victims participating in federated learning. The main reason this works is that a model trained by knowledge distillation contains private data information of all teacher models: a malicious attacker can locally train a shadow-like model, so that its own local private model becomes the target model of its own attack. A reasonable client privacy attack technique is thus designed, whose goal is to construct an attack-model binary classifier that recognizes differences in target-model behavior and uses the target model's outputs to distinguish membership information of the other models' training sets.
First, in the attack-model training setting of the invention, the attacker is assumed to lack the capability to actively train shadow models; instead, the attacker takes its own private model as the target model, so the attack method is realized under conditions where the internal structure and true data distribution of the local client's private model are fully transparent. The membership inference attack model designed by the invention can therefore learn, in supervised fashion, the inputs and corresponding outputs of the malicious attacker's private model (each labeled "in" or "out"), so that the attack model can distinguish the shadow-like target model's outputs on member training data from its outputs on non-member data.
Next, the invention sets the architecture and parameters of the membership inference attack model. The designed malicious client's private model $f_{adv}()$ is defined as the target model $f_{target}()$ of the membership inference procedure; $D_{target}^{train} = \{(x^{(i)}, y^{(i)})_{target}\}$ denotes the training set of the target model, and $(x^{(i)}, y^{(i)})_{target}$ is the i-th record in that training set, where $x_{target}^{(i)}$ is the target model's input data, $y_{target}^{(i)}$ is the record's true label, and the dataset's number of label classes is $c_{target}$. The output of the target model is a confidence score vector whose components all lie in $[0,1]$ and sum to 1. $f_{attack}()$ denotes the attack model designed by the invention; its input $x_{attack}$ has the same dimension $c_{target}$ as the target model's output vector. The attack model $f_{attack}()$ targets all participants in the federated distillation learning framework. FIG. 3 illustrates the end-to-end membership inference attack process devised by the invention: given a target record $(x, y)$ required by the attack model, the record is first fed into the target model $f_{target}()$ to obtain the model's prediction vector $Y = f_{target}(x)$, whose distribution is mainly related to the true classification label of the target record x.
Finally, the invention realizes the prediction of member versus non-member information, i.e., the final output of the membership inference attack. Specifically, the true label y of the target record and the target model's prediction vector Y for that record are fed into the attack model $f_{attack}()$, which computes the membership probability of the target record, i.e., the probability $\Pr[(x, y) \in D_{target}^{train}]$ that the pair (y, Y) belongs to the target training set; this is also the probability that the record is classified as "in" when input to the attack model. The basic premises of the targeted attack realized by the invention are the attack model's training results and the target model's internal parameters: the invention assumes that the malicious client's private model (i.e., the target model's internal structure) and its training set $D_{target}^{train}$ are known. By accessing the local private model of the attacker participating in federated distillation, the collection of training data for the attack binary classifier and the collection of model parameters can both be realized, yielding the local malicious model's attack prediction vector, i.e., the final target result of the membership inference attack. A sketch of this inference step follows.
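A sketch of the final inference step with an assumed attack-model interface (a binary classifier emitting a single "in" logit; concatenating the prediction vector with a one-hot true label is, again, an illustrative encoding):

```python
import torch
import torch.nn.functional as F

def membership_probability(attack_model, target_model, x, y, num_classes) -> float:
    """Return Pr[(x, y) is in the target model's training set]."""
    with torch.no_grad():
        Y = F.softmax(target_model(x.unsqueeze(0)), dim=1)    # prediction vector
        onehot = F.one_hot(torch.tensor([y]), num_classes).float()
        score = attack_model(torch.cat([Y, onehot], dim=1))   # single 'in' logit
    return torch.sigmoid(score).item()                        # membership prob.
```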
FIG. 4 illustrates the training process of the malicious attack model. For each record of the malicious client's local model training set $D_{adv}^{train}$, the target model's prediction vector $Y = f_{target}(x)$ is computed, and the member record (y, Y, in) is added to the attack model's training set $D_{attack}^{train}$. $D_{adv}^{out}$ denotes a dataset disjoint from the training set of the malicious adversary's private model; for each record of this dataset the prediction vector Y is likewise computed, and the non-member record (y, Y, out) is added to the attack training set $D_{attack}^{train}$. Finally, the dataset $D_{attack}^{train}$ is partitioned into $c_{target}$ subtask datasets, and for each label y a separate prediction model is trained on its subtask data to predict the "in" or "out" membership status of the target model's training data $(x, y) \in D_{target}$. A sketch of this per-label partition follows.
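A sketch of the per-label partition, assuming the attack set is held as (y, Y, in/out) records and using a logistic-regression binary classifier per label as a stand-in for the disclosure's attack model:

```python
from collections import defaultdict
from sklearn.linear_model import LogisticRegression

def train_per_label_attacks(records):
    """records: iterable of (y, Y, member) with y an int label, Y a list of
    confidence scores, and member 1 for 'in' or 0 for 'out'."""
    buckets = defaultdict(lambda: ([], []))
    for y, Y, member in records:
        feats, labs = buckets[y]
        feats.append(Y)
        labs.append(member)
    # one binary membership classifier per true label y
    return {y: LogisticRegression(max_iter=1000).fit(f, l)
            for y, (f, l) in buckets.items()}
```

Query a trained model with `models[y].predict_proba([Y])[0, 1]` to obtain the "in" probability for a record with true label y.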
The root cause exploited by the client-level membership inference attack designed by the invention is the overfitting problem of machine learning. The malicious attacker recognizes subtle differences in the target model's outputs: by passing both trained-on and untrained datasets through the target model, it trains the binary classifier to recognize member versus non-member information; the attack effect is shown in FIG. 5. In effect, the invention converts the problem of recognizing the complex relationship between training-set membership and model outputs into a binary classification problem. Binary classification is a standard machine learning task, so the membership inference attack technique can use any state-of-the-art machine learning framework or service to construct the attack model, and the malicious client's attack method is independent of the specific model used for attack-model training.

Claims (6)

1. A client membership inference attack method based on a federated distillation learning framework, characterized by comprising the following steps:
(1) On the basis of the federated learning framework, adopt knowledge distillation and random client selection techniques to reduce the communication overhead and privacy leakage of federated learning;
(2) Based on the enhanced federated learning knowledge distillation framework, analyze the degree of privacy leakage of the participants;
(3) Establish a client-level membership inference attack model targeting the leakage of local-model membership information in the federated learning knowledge distillation framework;
(4) Establish a complete membership privacy-leakage evaluation model for federated distillation learning based on the client-level membership inference attack model.
2. The client membership inference attack method based on the federated distillation learning framework according to claim 1, wherein step (1) is implemented as follows:
first, the local client models and client private datasets participating in federated learning are set up, the participants' local models being heterogeneous and differing from one another in internal structure and details as well as in their local private datasets; the federated communication algorithm then selects knowledge distillation as the optimization training algorithm for the client models, replacing the gradient-parameter training algorithm that suffers from gradient leakage, i.e., model gradient parameters are converted into logit-layer parameters and local client model optimization switches from gradient optimization to logit-layer optimization, yielding a federated learning framework with higher model performance and lower communication overhead.
3. The client membership inference attack method based on the federated distillation learning framework according to claim 1, wherein the enhanced federated learning knowledge distillation framework of step (2) is:
the federated learning knowledge distillation framework has m participants, and each participant k holds a local private dataset, independently and identically distributed or not, $D_k = \{(x_i^k, y_i^k)\}_{i=1}^{N_k}$, where the superscript k indexes a client's private data, $N_k$ is the number of records in the k-th client's dataset, i indexes the i-th data record, x is the data sample of a record, and y is its label; a public training set $D_{public} = \{(x_i, y_i)\}_{i=1}^{N_0}$ is stored on the server side, where $D_{public}$ denotes the common training set that the central server provides to all clients; each client participant independently designs its own model $f_k$ to carry out its own data prediction and classification task; under the knowledge-distillation-enhanced communication protocol, model parameters are no longer shared between client participants, and instead each model shares the logit-layer outputs corresponding to the standard training set $D_{public}$;
first, each client uses transfer learning to pre-train its private model on the public training set $D_{public}$ and then performs revision training on its local private dataset $D_k$; in each round of federated distillation communication, each client computes the logit vector $\ell_k = f_k(D_{public})$ on the public dataset $D_{public}$ collected and distributed by the server and uploads these logit-layer outputs to the server; after the central server receives the logit-layer outputs uploaded by K clients, the server computes a weighted average of all received logit parameters to generate the average logit vector $\bar{\ell} = \sum_{k=1}^{K} w_k \ell_k$ and issues it to every client; finally, each client applies knowledge distillation, based on the received $\bar{\ell}$, to perform local private model distillation training until the private models $f_k$ of all clients converge.
4. The client membership inference attack method based on the federated distillation learning framework according to claim 1, wherein step (2) is implemented as follows:
membership inference attacks and reconstruction attacks are designed against the federated distillation learning framework to evaluate its degree of privacy leakage; the membership inference privacy evaluation adopts the NSH (Nasr-Shokri-Houmansadr) gradient membership inference attack technique for federated learning invented by Shokri et al.: specifically, a membership inference attack binary classifier is trained locally at a malicious client; the malicious client actively influences the aggregation-averaging process of federated distillation learning by altering the logit-layer parameters it uploads to the central server; finally, the malicious client takes the average logit-layer parameters issued by the central server as the training set of the attack binary classifier and trains the membership inference model on the differences between the dataset's member and non-member information, realizing the membership inference privacy evaluation; the reconstruction attack evaluation adopts a deep privacy-leakage evaluation scheme that reconstructs data from the average logit-layer parameters issued by the central server, realizing the reconstruction-attack privacy evaluation.
5. The client membership inference attack method based on the federated distillation learning framework according to claim 1, wherein step (3) is implemented as follows:
the malicious client locally combines a dataset that participated in model optimization training with a dataset that did not, forming a new training dataset used to train the malicious client's membership inference classifier; the malicious client then feeds this training dataset into its local model to obtain the model's confidence score vector outputs, assembles the confidence score vectors and the original data into new attack-classifier training samples, labels them according to each sample's member or non-member status, and trains the membership inference attack binary classifier; the finally trained attack binary classifier can carry out the client-level membership inference attack task, realizing the malicious-client membership inference attack.
6. The client membership inference attack method based on the federated distillation learning framework according to claim 1, wherein step (4) is implemented as follows:
based on the client-level membership inference attack technique, the performance of the attack is evaluated on the MNIST, EMNIST, CIFAR-10, and CIFAR-100 datasets; the network structure of the membership inference attack binary classifier of step (3) is optimized by adopting a better-performing random forest algorithm together with an adversarial-example attack method; finally, a complete federated distillation learning privacy evaluation model is established to analyze the attack accuracy of membership inference attacks launched by a malicious client.
CN202310062980.7A 2023-01-16 2023-01-16 Client membership inference attack method based on federated distillation learning framework Pending CN116187469A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310062980.7A CN116187469A (en) Client membership inference attack method based on federated distillation learning framework

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310062980.7A CN116187469A (en) Client membership inference attack method based on federated distillation learning framework

Publications (1)

Publication Number Publication Date
CN116187469A (en) 2023-05-30

Family

ID=86435958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310062980.7A Pending CN116187469A (en) 2023-01-16 2023-01-16 Client member reasoning attack method based on federal distillation learning framework

Country Status (1)

Country Link
CN (1) CN116187469A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117592584A (en) * 2023-12-11 2024-02-23 滇西应用技术大学 Random multi-model privacy protection method based on federal learning
CN117592042A (en) * 2024-01-17 2024-02-23 杭州海康威视数字技术股份有限公司 Privacy disclosure detection method and device for federal recommendation system
CN117592042B (en) * 2024-01-17 2024-04-05 杭州海康威视数字技术股份有限公司 Privacy disclosure detection method and device for federal recommendation system

Similar Documents

Publication Publication Date Title
Hu et al. MHAT: An efficient model-heterogenous aggregation training scheme for federated learning
Cao et al. Interactive temporal recurrent convolution network for traffic prediction in data centers
CN112364943B (en) Federal prediction method based on federal learning
CN116187469A (en) Client membership inference attack method based on federated distillation learning framework
CN106651030B (en) Improved RBF neural network hot topic user participation behavior prediction method
CN114897837A (en) Power inspection image defect detection method based on federal learning and self-adaptive difference
CN113657607B (en) Continuous learning method for federal learning
CN114091667A (en) Federal mutual learning model training method oriented to non-independent same distribution data
Long et al. Fedcon: A contrastive framework for federated semi-supervised learning
CN115344883A (en) Personalized federal learning method and device for processing unbalanced data
Zhang et al. Towards data-independent knowledge transfer in model-heterogeneous federated learning
CN116471286A (en) Internet of things data sharing method based on block chain and federal learning
Wang et al. Missing value filling based on the collaboration of cloud and edge in artificial intelligence of things
CN117893807B (en) Knowledge distillation-based federal self-supervision contrast learning image classification system and method
Gou et al. Clustered hierarchical distributed federated learning
Gu et al. Fedaux: An efficient framework for hybrid federated learning
Wang et al. Eidls: An edge-intelligence-based distributed learning system over internet of things
Yang et al. Federated continual learning via knowledge fusion: A survey
CN113850399A (en) Prediction confidence sequence-based federal learning member inference method
Cheng et al. GFL: Federated learning on non-IID data via privacy-preserving synthetic data
Zou et al. FedDCS: Federated learning framework based on dynamic client selection
Zheng et al. Federated Learning on Non-iid Data via Local and Global Distillation
CN115409155A (en) Information cascade prediction system and method based on Transformer enhanced Hooke process
CN115131605A (en) Structure perception graph comparison learning method based on self-adaptive sub-graph
Shan et al. CFL-IDS: An Effective Clustered Federated Learning Framework for Industrial Internet of Things Intrusion Detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination