CN116527393B - Method, device, equipment and medium for defending against federated learning poisoning attacks


Info

Publication number
CN116527393B
Authority
CN
China
Prior art keywords
information
feature
embedded information
stored
local data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310662319.XA
Other languages
Chinese (zh)
Other versions
CN116527393A (en)
Inventor
王伟
许向蕊
陈政
刘鹏睿
郝玉蓉
祝咏升
胡福强
吕晓婷
李超
段莉
刘吉强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiaotong University
Priority to CN202310662319.XA
Publication of CN116527393A
Application granted
Publication of CN116527393B
Legal status: Active
Anticipated expiration

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Computer And Data Communications (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a method, device, equipment and medium for defending against federated learning poisoning attacks, comprising the following steps: acquiring first feature embedding information corresponding to local data from a plurality of clients, wherein the local data are pre-stored in the clients and the local data in each client are disjoint sample subsets of the total training samples; calculating mutual information between the first feature embedding information and the data labels corresponding to the pre-stored local data; removing abnormal feature embeddings from the first feature embedding information according to the calculated mutual information, and taking the remaining first feature embedding information as normal feature embedding information; and training a pre-stored top model based on the normal feature embedding information to optimize the top model parameters. The method can avoid malicious samples without auxiliary data and without access to the clients' bottom models, does not affect model usability, and is suitable for vertical federated learning scenarios.

Description

Method, device, equipment and medium for defending against federated learning poisoning attacks
Technical Field
The invention relates to the technical field of network security, and in particular to a method, device, equipment and medium for defending against federated learning poisoning attacks.
Background
Federated learning is a machine learning paradigm that allows multiple participants to jointly train a model on multi-party data without revealing plaintext data, keeping the raw data invisible. Federated learning methods are mainly divided into horizontal federated learning, vertical federated learning, and so on. Horizontal federated learning is characterized by participants whose data share the same feature dimensions but have different sample IDs. Vertical federated learning (VFL) is a machine learning paradigm that combines different feature subsets held by different clients to improve model performance. In practice, VFL is suited to fusing knowledge from heterogeneous and confidential feature sources held by potentially competing companies to drive powerful predictive analytics. For example, an insurance company may wish to combine a subject's loan records with banking records provided by different financial institutions to predict that subject's future financial risk. Vertical federated learning is characterized by sample IDs that largely coincide while the features differ.
During federated learning, a client collecting data is liable to collect "toxic" samples injected by a malicious attacker, making subsequent model training unreliable and posing a potential risk to many high-risk applications. The server therefore needs to supervise the quality of the feature embeddings it receives, which are derived from different clients' data, in order to defend against poisoning attacks.
At present, defense methods against federated learning poisoning attacks mainly focus on the horizontal federated learning scenario. Their core idea is to analyze the model updates uploaded by different clients in order to exclude abnormal clients from training. In vertical federated learning, however, each client owns unique data features, i.e., the feature embeddings shared by different clients are naturally distinct. Defense methods designed for horizontal federated learning are therefore unsuitable for the vertical federated learning scenario, and a defense method against poisoning attacks suited to vertical federated learning is needed.
Disclosure of Invention
The embodiments of the invention provide a method, device, equipment and medium for defending against federated learning poisoning attacks, which overcome the defects of the prior art.
In order to achieve the above purpose, the present invention adopts the following technical scheme.
In a first aspect, the present invention provides a defense method against federated learning poisoning attacks, applied to a server, comprising:
acquiring first feature embedding information corresponding to local data from a plurality of clients, wherein the local data are pre-stored in the clients, the local data in each client are disjoint sample subsets of the total training samples, and the first feature embedding information is obtained based on a bottom model pre-stored in each client;
calculating mutual information between the first feature embedding information and the data labels corresponding to the pre-stored local data;
removing abnormal feature embeddings from the first feature embedding information according to the calculated mutual information, and taking the remaining first feature embedding information as normal feature embedding information;
training a pre-stored top model based on the normal feature embedding information to optimize the top model parameters.
Optionally, calculating mutual information between the first feature embedding information and the data labels corresponding to the pre-stored local data includes:
calculating the mutual information MI based on the data labels $y_N$ and the first feature embedding information $h_m$ of each client:

$$\mathrm{MI}(y_N; h_m) = \sum_{y}\sum_{h} p(y, h)\,\log\frac{p(y, h)}{p(y)\,p(h)}$$
Optionally, removing abnormal feature embeddings from the first feature embedding information according to the calculated mutual information and taking the remaining first feature embedding information as normal feature embedding information includes:
taking mutual information below a preset threshold as abnormal mutual information, and taking the first feature embedding information corresponding to the abnormal mutual information as abnormal feature embedding information;
removing the abnormal feature embedding information from the first feature embedding information, and taking the remaining first feature embedding information as normal feature embedding information.
Optionally, training a pre-stored top model based on the normal feature embedding information to optimize the top model parameters includes:
performing a categorical cross-entropy loss calculation based on the normal feature embedding information and the data labels to obtain a loss value;
training the pre-stored top model based on the loss value to optimize the top model parameters, and sending the gradient information corresponding to the normal feature embedding information during top model training to the corresponding clients to update the bottom models in the clients.
In a second aspect, the present invention further provides a defense method against federated learning poisoning attacks, applied to a plurality of clients, comprising:
performing feature transformation on the local data using a pre-stored bottom model to obtain first feature embedding information, and sending the first feature embedding information to a server.
Optionally, after sending the first feature embedding information to the server, the method further comprises:
acquiring, from the server, the gradient information corresponding to the normal feature embedding information during top model training;
updating the bottom model based on the gradient information to obtain a trained bottom model.
In a third aspect, the present invention further provides a defense device against federated learning poisoning attacks, applied to a server, comprising:
a first information acquisition module, configured to acquire first feature embedding information corresponding to local data from a plurality of clients, wherein the local data are pre-stored in the clients, the local data in each client are disjoint sample subsets of the total training samples, and the first feature embedding information is obtained based on a bottom model pre-stored in each client;
a mutual information calculation module, configured to calculate mutual information between the first feature embedding information and the data labels corresponding to the pre-stored local data;
an anomaly determination module, configured to remove abnormal feature embeddings from the first feature embedding information according to the calculated mutual information and to take the remaining first feature embedding information as normal feature embedding information;
a top model training module, configured to train a pre-stored top model based on the normal feature embedding information to optimize the top model parameters.
In a fourth aspect, the present invention further provides a defense device against federated learning poisoning attacks, applied to a plurality of clients, comprising:
a first information sending module, configured to perform feature transformation on the local data using a pre-stored bottom model to obtain first feature embedding information, and to send the first feature embedding information to the server.
In a fifth aspect, the present invention further provides an electronic device comprising a memory and a processor in communication with each other, the memory storing program instructions executable by the processor, and the processor invoking the program instructions to execute the defense method against federated learning poisoning attacks described above.
In a sixth aspect, the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the defense method against federated learning poisoning attacks described above.
The invention has the following beneficial effects. The method, device, equipment and medium for defending against federated learning poisoning attacks determine abnormal feature embedding information by calculating mutual information between the data labels and the feature embedding information corresponding to the local data of different clients, remove the abnormal feature embedding information, and complete federated learning based on the normal feature embedding information. The method can avoid malicious samples without auxiliary data and without access to the clients' bottom models, does not affect model usability, and is suitable for vertical federated learning scenarios.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a first schematic flow chart of a defense method against federated learning poisoning attacks according to an embodiment of the present invention;
FIG. 2 is a second schematic flow chart of a defense method against federated learning poisoning attacks according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the present invention and are not to be construed as limiting the present invention.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In view of the characteristics of vertical federated learning, the method removes abnormal feature embeddings by dynamically detecting, during training, the mutual information between the feature embeddings of each class of samples and the corresponding labels.
The embodiments of the invention disclose a method, device and equipment for defending against vertical federated learning poisoning attacks based on mutual information detection. The method removes only the abnormal feature embeddings, which ensures normal VFL training and avoids the influence of untargeted poisoning attacks.
Term interpretation:
longitudinal federal learning (VFL) is a distributed machine learning paradigm that combines data features distributed across different platforms to jointly train a learning model. Wherein the server has tags for the training data and the local client has only a subset of the (original) features of the training data. The longitudinal federal learning process includes the following steps: (1) The local client trains respective bottom models, converts local data features into feature embedding and transmits the feature embedding to the server; (2) The server takes characteristic embedding uploaded by different clients as input by splicing and combines the data labels owned by the server to train a top model; (3) The server returns the feature embedded gradients submitted by each local client for local updating of the bottom model. By repeating the above three steps until the model converges.
An untargeted poisoning attack aims to induce the model to produce as many mispredictions as possible, regardless of the class in which the errors occur, i.e., to destroy the usability of the model.
Mutual information is an information measure in information theory. It can be seen as the amount of information about one random variable contained in another random variable, or as the reduction in the uncertainty of one random variable obtained by knowing another.
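For reference, the standard definition underlying this description, written with label variable $Y$, embedding variable $E$, and entropy $H(\cdot)$:

```latex
\mathrm{MI}(Y;E)
  = \sum_{y}\sum_{e} p(y,e)\,\log\frac{p(y,e)}{p(y)\,p(e)}
  = H(Y) - H(Y \mid E)
```

The second form states the "reduced uncertainty" reading directly: knowing $E$ lowers the entropy of $Y$ by exactly $\mathrm{MI}(Y;E)$.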
To facilitate an understanding of the embodiments of the invention, reference will now be made to several specific embodiments illustrated in the accompanying drawings; these embodiments in no way limit the invention.
Example 1
FIG. 1 is a schematic flow chart of a defense method against federated learning poisoning attacks according to an embodiment of the present invention. As shown in FIG. 1, the defense method is applied to a server and comprises the following steps:
s101, first characteristic embedded information corresponding to the local data is obtained from a plurality of clients.
Wherein the local data are pre-stored in the clients, and the local data in each client are disjoint sample subsets of the total training samples, i.e., the local data $x = [x_1, x_2, \ldots, x_M]$ owned by the clients $U_m$ ($m = 1, 2, \ldots, M$) form disjoint sample subsets $x_m$, where $N$ is the total number of training samples and $M$ is the total number of clients.
The first feature embedding information $h_m$ is obtained based on the bottom model pre-stored in each client.
S102, calculating mutual information between the first feature embedding information and the data labels corresponding to the pre-stored local data. Note that the labels $y_N$ corresponding to the total training samples are stored in the server in advance.
Specifically, the mutual information MI is calculated based on the data labels $y_N$ and the first feature embedding information $h_m$:

$$\mathrm{MI}(y_N; h_m) = \sum_{y}\sum_{h} p(y, h)\,\log\frac{p(y, h)}{p(y)\,p(h)}$$
s103, carrying out abnormal feature embedding and removing on the first feature embedded information according to the calculated mutual information, and taking the removed first feature embedded information as normal feature embedded information
Specifically, mutual information lower than a preset threshold value is used as abnormal mutual information, first characteristic embedded information corresponding to the abnormal mutual information is used as abnormal characteristic embedded information, the first characteristic embedded information is removed according to the abnormal characteristic embedded information, and the removed first characteristic embedded information is used as normal characteristic embedded information
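A minimal sketch of this screening step, assuming NumPy arrays and using scikit-learn's mutual_info_classif as the MI estimator; the patent does not fix a particular estimator, and the per-dimension averaging, function name, and threshold handling are illustrative choices.

```python
from sklearn.feature_selection import mutual_info_classif

def reject_abnormal_embeddings(embeddings, labels, threshold):
    """S102-S103: keep only the client embeddings whose mutual information
    with the labels reaches the preset threshold.

    embeddings: list of (N, d_m) arrays, one per client.
    labels:     (N,) array of class labels y_N held by the server.
    """
    normal, kept = [], []
    for m, h_m in enumerate(embeddings):
        # Estimate MI(y_N; h_m) per embedding dimension and average it
        # into a single score for this client's embedding.
        mi = mutual_info_classif(h_m, labels).mean()
        if mi >= threshold:          # below threshold => abnormal, rejected
            normal.append(h_m)
            kept.append(m)
    return normal, kept
```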
S104, training a pre-stored top model based on the normal feature embedding information to optimize the top model parameters.
Specifically, a categorical cross-entropy loss is calculated from the normal feature embedding information and the data labels to obtain a loss value. In this embodiment, the server concatenates the normal feature embeddings of the different clients as input, i.e., $H = [h_1 \,\|\, h_2 \,\|\, \cdots \,\|\, h_M]$, and calculates the categorical cross-entropy loss of the normal feature embedding information in combination with the labels it owns.
The pre-stored top model is trained based on the loss value, and S101-S104 are repeated until the top model converges, yielding the trained top model. The gradient information corresponding to the normal feature embedding information during top model training is sent to the corresponding clients to update the bottom models in the clients, as sketched below.
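A hedged PyTorch sketch of this server-side step: each received embedding is treated as a leaf tensor so that its gradient can be collected and returned to the corresponding client. The function and variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def server_train_step(top_model, optimizer, normal_embeddings, labels):
    """S104: train the top model on the concatenated normal embeddings
    and collect the loss gradient w.r.t. each client's embedding."""
    # Received embeddings become leaf tensors so that .grad is populated.
    hs = [h.detach().requires_grad_(True) for h in normal_embeddings]
    logits = top_model(torch.cat(hs, dim=1))
    loss = F.cross_entropy(logits, labels)   # categorical cross-entropy

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                         # optimize the top model parameters

    # Embedding gradients to be sent back to the corresponding clients.
    return [h.grad for h in hs], loss.item()
```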
This embodiment evaluates the defense method against federated learning poisoning attacks using two models of different complexity (LeNet and ResNet) and five different datasets (Bank Marketing, Credit, Census, UTKFace, and CelebA). The experimental results show that the defense method can avoid malicious samples without auxiliary data and without access to the clients' bottom models, and does not affect model usability, making it a generally applicable and effective defense.
According to the defense method against federated learning poisoning attacks provided by this embodiment, abnormal feature embedding information is determined by calculating mutual information between the data labels and the feature embedding information corresponding to the local data of different clients; the abnormal feature embedding information is removed, and federated learning is completed based on the normal feature embedding information. The method can avoid malicious samples without auxiliary data and without access to the clients' bottom models, does not affect model usability, and is suitable for vertical federated learning scenarios.
FIG. 2 is a second schematic flow chart of the defense method against federated learning poisoning attacks according to an embodiment of the present invention. As shown in FIG. 2, the defense method is applied to a plurality of clients and specifically comprises the following steps:
S201, performing feature transformation on the local data $x_m$ using the pre-stored bottom model to obtain the first feature embedding information $h_m$, and transmitting the first feature embedding information to the server.
In the defense method against federated learning poisoning attacks provided by this embodiment, the server determines and removes the abnormal feature embeddings, the normal feature embeddings are obtained, and federated learning is completed based on the normal feature embeddings.
Further, after the abnormal feature embeddings are removed to obtain the normal feature embeddings, the method further comprises:
acquiring, from the server, the gradient information corresponding to the normal feature embedding information during top model training;
updating the bottom model based on the gradient information to obtain a trained bottom model. A client-side sketch follows.
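A matching client-side sketch of S201 and this local update, again in PyTorch with illustrative names and plain SGD assumed as the local optimizer:

```python
import torch

class Client:
    """Holds the local data x_m and the pre-stored bottom model f_m."""
    def __init__(self, bottom_model, x_m, lr=0.01):
        self.f_m = bottom_model
        self.x_m = x_m
        self.opt = torch.optim.SGD(bottom_model.parameters(), lr=lr)

    def upload_embedding(self):
        # S201: feature transformation of the local data into the first
        # feature embedding h_m; only the embedding leaves the client.
        self.h_m = self.f_m(self.x_m)
        return self.h_m.detach()

    def apply_server_gradient(self, grad_h):
        # Update the bottom model with the embedding gradient returned by
        # the server for this client's normal feature embedding.
        self.opt.zero_grad()
        self.h_m.backward(grad_h)
        self.opt.step()
```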
Example 2
Based on Example 1, Example 2 provides defense devices against federated learning poisoning attacks, which correspond to the defense methods above and are divided into two devices, applied to the server and to the clients respectively.
The defense device against federated learning poisoning attacks applied in the server specifically comprises:
the first information acquisition module is used for acquiring first characteristic embedded information corresponding to the local data from a plurality of clients; the local data are pre-stored in the clients, the local data in each client are disjoint sample subsets of the total training samples, and the first characteristic embedded information is obtained based on a pre-stored bottom model in the client;
the mutual information calculation module is used for calculating mutual information of the first characteristic embedded information and a data tag corresponding to the pre-stored local data;
the abnormal determination module is used for carrying out abnormal feature embedding and removing on the first feature embedding information according to the calculated mutual information, and taking the removed first feature embedding information as normal feature embedding information;
and the top model training module is used for training a pre-stored top model based on the normal characteristic embedded information so as to optimize top model parameters.
The defense device against federated learning poisoning attacks applied in a plurality of clients specifically comprises:
the first information sending module is used for carrying out feature transformation on the local data by utilizing a pre-stored bottom model so as to obtain first feature embedded information, and sending the first feature embedded information to the server.
For specific details, refer to the description of the defense methods against federated learning poisoning attacks above; a detailed description is omitted here.
Example 3
Example 3 of the invention provides an electronic device comprising a memory and a processor in communication with each other, the memory storing program instructions executable by the processor, and the processor invoking the program instructions to execute a defense method against federated learning poisoning attacks, applied to a server and comprising the following steps:
acquiring first feature embedding information corresponding to local data from a plurality of clients, wherein the local data are pre-stored in the clients, the local data in each client are disjoint sample subsets of the total training samples, and the first feature embedding information is obtained based on a bottom model pre-stored in each client;
calculating mutual information between the first feature embedding information and the data labels corresponding to the pre-stored local data;
removing abnormal feature embeddings from the first feature embedding information according to the calculated mutual information, and taking the remaining first feature embedding information as normal feature embedding information;
training a pre-stored top model based on the normal feature embedding information to optimize the top model parameters.
Example 4
Example 4 of the invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements a defense method against federated learning poisoning attacks, applied to a server and comprising the following steps:
acquiring first feature embedding information corresponding to local data from a plurality of clients, wherein the local data are pre-stored in the clients, the local data in each client are disjoint sample subsets of the total training samples, and the first feature embedding information is obtained based on a bottom model pre-stored in each client;
calculating mutual information between the first feature embedding information and the data labels corresponding to the pre-stored local data;
removing abnormal feature embeddings from the first feature embedding information according to the calculated mutual information, and taking the remaining first feature embedding information as normal feature embedding information;
training a pre-stored top model based on the normal feature embedding information to optimize the top model parameters.
In summary, according to the defense method against federated learning poisoning attacks provided by the embodiments of the invention, abnormal feature embedding information is determined by calculating mutual information between the data labels and the feature embedding information corresponding to the local data of different clients; the abnormal feature embedding information is then removed, and federated learning is completed based on the normal feature embedding information.
Those of ordinary skill in the art will appreciate that the drawings are schematic diagrams of one embodiment, and that the modules or flows in the drawings are not necessarily required to practice the invention.
In this specification, the embodiments are described progressively; identical and similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus embodiments are described relatively simply since they are substantially similar to the method embodiments; for relevant parts, refer to the description of the method embodiments. The method and apparatus embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement the invention without inventive effort.
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.

Claims (9)

1. A defense method against federated learning poisoning attacks, characterized by being applied to a server and comprising the following steps:
acquiring first feature embedding information corresponding to local data from a plurality of clients, wherein the local data are pre-stored in the clients, the local data in each client are disjoint sample subsets of the total training samples, and the first feature embedding information is obtained based on a bottom model pre-stored in each client;
calculating mutual information between the first feature embedding information and the data labels corresponding to the pre-stored local data;
removing abnormal feature embeddings from the first feature embedding information according to the calculated mutual information, and taking the remaining first feature embedding information as normal feature embedding information;
training a pre-stored top model based on the normal feature embedding information to optimize the top model parameters.
2. The defense method against federated learning poisoning attacks according to claim 1, wherein calculating mutual information between the first feature embedding information and the data labels corresponding to the pre-stored local data comprises:
calculating the mutual information MI based on the data labels $y_N$ and the first feature embedding information $h_m$ of each client:

$$\mathrm{MI}(y_N; h_m) = \sum_{y}\sum_{h} p(y, h)\,\log\frac{p(y, h)}{p(y)\,p(h)}$$
3. The defense method against federated learning poisoning attacks according to claim 1, wherein removing abnormal feature embeddings from the first feature embedding information according to the calculated mutual information and taking the remaining first feature embedding information as normal feature embedding information comprises:
taking mutual information below a preset threshold as abnormal mutual information, and taking the first feature embedding information corresponding to the abnormal mutual information as abnormal feature embedding information;
removing the abnormal feature embedding information from the first feature embedding information, and taking the remaining first feature embedding information as normal feature embedding information.
4. The defense method against federated learning poisoning attacks according to any one of claims 1-3, wherein training a pre-stored top model based on the normal feature embedding information to optimize the top model parameters comprises:
performing a categorical cross-entropy loss calculation based on the normal feature embedding information and the data labels to obtain a loss value;
training the pre-stored top model based on the loss value to optimize the top model parameters, and sending the gradient information corresponding to the normal feature embedding information during top model training to the corresponding clients to update the bottom models in the clients.
5. A defense method against federated learning poisoning attacks, characterized by being applied to a plurality of clients and comprising the following steps:
performing feature transformation on the local data using a pre-stored bottom model to obtain first feature embedding information, and sending the first feature embedding information to a server;
acquiring, from the server, the gradient information corresponding to the normal feature embedding information during top model training;
updating the bottom model based on the gradient information to obtain a trained bottom model;
wherein the local data are disjoint sample subsets of the total training samples; the normal feature embedding information is obtained by the server removing abnormal feature embeddings from the first feature embedding information according to the calculated mutual information; and the calculated mutual information is obtained by calculating mutual information between the first feature embedding information and the data labels corresponding to the pre-stored local data.
6. A defense device against federated learning poisoning attacks, characterized by being applied to a server and comprising:
a first information acquisition module, configured to acquire first feature embedding information corresponding to local data from a plurality of clients, wherein the local data are pre-stored in the clients, the local data in each client are disjoint sample subsets of the total training samples, and the first feature embedding information is obtained based on a bottom model pre-stored in each client;
a mutual information calculation module, configured to calculate mutual information between the first feature embedding information and the data labels corresponding to the pre-stored local data;
an anomaly determination module, configured to remove abnormal feature embeddings from the first feature embedding information according to the calculated mutual information and to take the remaining first feature embedding information as normal feature embedding information;
a top model training module, configured to train a pre-stored top model based on the normal feature embedding information to optimize the top model parameters.
7. A defense device against federated learning poisoning attacks, characterized by being applied to a plurality of clients and comprising:
a first information sending module, configured to perform feature transformation on the local data using a pre-stored bottom model to obtain first feature embedding information, and to send the first feature embedding information to the server;
the defense device is further configured to acquire, from the server, the gradient information corresponding to the normal feature embedding information during top model training, and to update the bottom model based on the gradient information to obtain a trained bottom model;
wherein the local data are disjoint sample subsets of the total training samples; the normal feature embedding information is obtained by the server removing abnormal feature embeddings from the first feature embedding information according to the calculated mutual information; and the calculated mutual information is obtained by calculating mutual information between the first feature embedding information and the data labels corresponding to the pre-stored local data.
8. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the defense method against federated learning poisoning attacks according to any one of claims 1-4 or claim 5.
9. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the defense method against federated learning poisoning attacks according to any one of claims 1-4 or claim 5.
CN202310662319.XA 2023-06-06 2023-06-06 Method, device, equipment and medium for defending against federal learning poisoning attack Active CN116527393B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310662319.XA CN116527393B (en) 2023-06-06 2023-06-06 Method, device, equipment and medium for defending against federal learning poisoning attack


Publications (2)

Publication Number Publication Date
CN116527393A (en) 2023-08-01
CN116527393B (en) 2024-01-16

Family

ID=87399550

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310662319.XA Active CN116527393B (en) 2023-06-06 2023-06-06 Method, device, equipment and medium for defending against federal learning poisoning attack

Country Status (1)

Country Link
CN (1) CN116527393B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113724059A (en) * 2020-12-29 2021-11-30 京东城市(北京)数字科技有限公司 Federal learning model training method and device and electronic equipment
CN113779563A (en) * 2021-08-05 2021-12-10 国网河北省电力有限公司信息通信分公司 Method and device for defending against backdoor attack of federal learning
CN115879108A (en) * 2023-01-28 2023-03-31 上海交通大学 Federal learning model attack defense method based on neural network feature extraction
WO2023065712A1 (en) * 2021-10-22 2023-04-27 中车株洲电力机车有限公司 Distributed train control network intrusion detection method, system, and storage medium
CN116028933A (en) * 2022-12-30 2023-04-28 浙江工业大学 Federal learning poisoning defense method and device based on feature training
US11657292B1 (en) * 2020-01-15 2023-05-23 Architecture Technology Corporation Systems and methods for machine learning dataset generation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research Progress of Federated Learning Based on Blockchain; Sun Rui; Journal of Computer Applications; pp. 3413-3420 *

Also Published As

Publication number Publication date
CN116527393A (en) 2023-08-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant