CN112883377A - Feature countermeasure based federated learning poisoning detection method and device - Google Patents

Feature countermeasure based federated learning poisoning detection method and device

Info

Publication number
CN112883377A
Authority
CN
China
Prior art keywords
defense
model
benign
data set
federated learning
Prior art date
Legal status
Pending
Application number
CN202110203320.7A
Other languages
Chinese (zh)
Inventor
伍一鸣 (Wu Yiming)
张旭鸿 (Zhang Xuhong)
Current Assignee
Youshou Zhejiang Technology Co ltd
Original Assignee
Youshou Zhejiang Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Youshou Zhejiang Technology Co ltd
Priority to CN202110203320.7A
Publication of CN112883377A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55 - Detecting local intrusion or implementing counter-measures
    • G06F21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
    • G06F21/562 - Static detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Virology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a feature-countermeasure-based federated learning poisoning detection method and device. The method comprises the following steps: the clients of each round of parameter training are divided into benign clients and defense clients, and the defense clients are configured with a defense patch data set; during each training round, a benign client optimizes a benign model with its local data set, a defense client optimizes a defense model with the defense patch data set and its local data, and the server aggregates all benign models and defense models to obtain the federated learning model; after the multi-round training finishes, the last-round federated learning model is used to detect poisoned samples. During detection, whether a test sample is poisoned is judged by whether the model's prediction for the test sample's target label and its prediction for the defense target label, obtained after the optimal defense patch data is added to the test sample, satisfy the label mapping relation; this realizes federated learning poisoning detection.

Description

Feature countermeasure based federated learning poisoning detection method and device
Technical Field
The invention belongs to the field of federated learning, and particularly relates to a feature-countermeasure-based federated learning poisoning detection method and device.
Background
With the rapid development of data-driven intelligent applications, the machine learning paradigm faces new dilemmas and challenges. On the one hand, it is expected to provide robust and efficient services for all users. On the other hand, data, the raw material of learning algorithms, is difficult to share fully.
To solve this problem, federated learning has emerged as a potential solution. Its main innovation is a distributed machine learning framework with privacy protection that lets thousands of participants cooperate to iteratively train a particular machine learning model. Because training data remains stored locally at each participant throughout the federated learning process, the mechanism both pools the training data of all participants and protects each participant's privacy.
The basic workflow of federated learning mainly comprises the following steps: (1) participants download the initialized global model from the cloud server, train the model on their local data sets, and generate the latest local model updates (i.e., model parameters); (2) the cloud server collects the local update parameters through a model-averaging algorithm and updates the global model. Thanks to the unique advantage of federated learning, namely that a unified machine learning model can be trained on the local data of many participants while data privacy is protected, it has excellent application prospects in privacy-sensitive scenarios, including finance, industry, and many other data-aware settings.
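To make the two-step loop concrete, here is a minimal sketch in Python, assuming models are plain NumPy parameter vectors and each client's objective is a toy quadratic loss; the function names and the loss are illustrative, not taken from the patent.

```python
import numpy as np

def local_update(global_params, local_data, lr=0.1, steps=5):
    """Step (1): a participant downloads the global model and trains on its
    local dataset (here: gradient descent on ||theta - mean(data)||^2,
    an illustrative stand-in for a real training objective)."""
    theta = global_params.copy()
    target = local_data.mean(axis=0)
    for _ in range(steps):
        theta -= lr * 2 * (theta - target)  # gradient of the toy loss
    return theta

def federated_round(global_params, client_datasets):
    """Step (2): the server collects the local updates and refreshes the
    global model with a model-averaging algorithm (FedAvg-style mean)."""
    updates = [local_update(global_params, d) for d in client_datasets]
    return np.mean(updates, axis=0)

# Toy usage: three participants, a 4-dimensional "model".
clients = [np.random.randn(20, 4) + c for c in range(3)]
g = np.zeros(4)
for _ in range(10):
    g = federated_round(g, clients)
```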
While federated learning can aggregate the distributed information provided by different parties to train better models, its distributed learning approach and the inherently heterogeneous data distribution among parties inadvertently open a new attack surface. In particular, the fact that access to individual parties' data is restricted by privacy concerns or regulations makes it easier to mount label flipping attacks and backdoor attacks on the shared model trained with federated learning. In a label flipping attack, a malicious user pushes the trained model away from a given prediction boundary by flipping sample labels and embedding pre-prepared attack points into the training data. A backdoor attack is a data poisoning attack that manipulates a subset of the training data so that a machine learning model trained on the tampered data set becomes vulnerable to test samples with similar triggers embedded. A failed training run or an implanted backdoor directly or indirectly causes huge economic losses for AI service platforms and ordinary users.
In summary, malicious data nodes are difficult to detect because the privacy mechanism of federated learning prevents local data from being observed. Meanwhile, a backdoor can be implanted by training and optimizing a poisoned model, which makes the poisoning attack highly covert and greatly increases the cost of training the model; poisoning detection for the federated learning model is therefore very necessary.
Disclosure of Invention
In view of the foregoing, an object of the present invention is to provide a feature-countermeasure-based federated learning poisoning detection method and apparatus, which implement poisoning detection on a federated learning model and thereby improve its robustness.
In order to achieve the above purpose, the invention provides the following technical scheme:
In a first aspect, a feature-countermeasure-based federated learning poisoning detection method includes the following steps:
dividing all clients of each round of parameter training into benign clients and defense clients, and configuring a defense patch data set for each defense client;
during each training round, the benign clients optimize benign models with their local data sets; each defense client screens from its defense patch data set the optimal defense patch data set that maximizes the information entropy content of the model, then jointly optimizes its defense model with the optimal defense patch data set and the local data set; and the server aggregates all benign models and defense models to obtain the federated learning model;
after the multi-round training finishes, the last-round federated learning model is used to detect poisoned samples; during detection, whether a test sample is poisoned is judged by whether the model's prediction for the test sample's target label and its prediction for the defense target label, obtained after the optimal defense patch data is added to the test sample, satisfy the label mapping relation, thereby realizing federated learning poisoning detection.
In a second aspect, a feature-countermeasure-based federated learning poisoning detection apparatus includes a computer memory, a computer processor, and a computer program stored in the computer memory and executable on the computer processor, wherein the computer processor implements the feature-countermeasure-based federated learning poisoning detection method when executing the computer program.
Compared with the prior art, the feature-countermeasure-based federated learning poisoning detection method and device provided by the invention have at least the following beneficial effects:
the local client is divided into the benign client and the defense client, the benign model and the defense model are respectively optimized by the local data set and the defense patch data set, and then all the benign models and the defense models are aggregated to obtain the federal learning model, so that the federal learning convergence is faster, the performance loss caused by the introduction of the countermeasure client is avoided, the explanation of the poisoning attack working mechanism of the federal learning model is realized, the feasibility analysis of the detection is realized, and the robustness of the model is improved.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a feature countermeasure based federated learning poisoning detection method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a training phase in a feature countermeasure based federated learning poisoning detection method provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of the detection stage in the feature-countermeasure-based federated learning poisoning detection method provided by an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, the invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are intended for illustration only and are not intended to limit the scope of the invention.
Aiming at the problem that existing model poisoning attacks are difficult to detect, the embodiments of the invention provide a feature-countermeasure-based federated learning poisoning detection method and device. The main technical conception is as follows:
the poisoning attack mechanism is explained by utilizing the feature space, the nature of the poisoning attack is feature embedding, and then the embedded defense patch can form a countermeasure with the feature of the poisoning patch in principle and carry out detection according to the countermeasure form. Malicious data nodes are difficult to detect because local data cannot be observed due to the privacy mechanism of federal learning. Meanwhile, the model is implanted at the back door in a mode of training and optimizing the poisoning model, so that the poisoning attack is too hidden, and the cost of training the model is greatly increased. Based on the situation, the embodiment provides a feature countermeasure-based federated learning poisoning detection method and a feature countermeasure-based federated learning poisoning detection device, and the method specifically comprises two stages: in the training phase, a local defense client is used for creating a defense patch data set and training a defense model, and an index table representing the label mapping relation is constructed. In the testing stage, verification is carried out by comparing the output relation between the front door and the rear door after the defense is added to the test sample. Firstly, after the benign sample is added with a defense backdoor, the characteristic transformation is known, and the identification results of the two times should meet the rule of an index table. Secondly, after the defense backdoor is added into the poisoning sample, the feature transformation of the defense backdoor is inconsistent with that of the poisoning backdoor, and the identification results of the two times do not meet the rules of the index table.
FIG. 1 is a flowchart of the feature-countermeasure-based federated learning poisoning detection method according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
step 1, dividing all clients of each round of parameter training into benign clients and defense clients, and configuring a defense patch data set for each defense client.
Model initialization is needed before federated learning. Specifically: set the total number of training rounds E, the local data sets, and the total number M of devices participating in federated learning; the total number of clients participating in each round is M_t, with K benign clients participating per round (K ≤ M_t) and DE defense clients participating per round (DE ≤ M_t, DE + K = M_t); and configure a defense patch data set DP for the defense clients.
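The initialization quantities can be gathered in a small configuration object; the following sketch uses arbitrary example values, not values prescribed by the patent.

```python
from dataclasses import dataclass

@dataclass
class FederatedConfig:
    E: int      # total number of training rounds
    M: int      # total number of devices participating in federated learning
    M_t: int    # number of clients participating in each round
    K: int      # benign clients per round, K <= M_t
    DE: int     # defense clients per round, DE <= M_t and DE + K = M_t

cfg = FederatedConfig(E=100, M=50, M_t=10, K=8, DE=2)
assert cfg.K <= cfg.M_t and cfg.DE <= cfg.M_t and cfg.K + cfg.DE == cfg.M_t
```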
Step 2: as shown in FIG. 2, during each training round the benign clients optimize benign models with their local data sets, the defense clients optimize defense models with the defense patch data set and their local data, and the server aggregates all benign models and defense models to obtain the federated learning model.
For all benign clients participating in the training, each benign client optimizes its benign model on its local data set according to equation (1):

$$\theta_{k}^{t+1} = \arg\min_{\theta_{k}^{t}} \sum_{(x,y)\in D_{k}} L\big(f_{k}^{t}(x),\, y\big) \tag{1}$$

where (x, y) is a sample and its label from the local data set D_k, f_k^t(x) is the prediction confidence of the benign model of the k-th benign client in the t-th training round, L(·) is the cross-entropy loss between the prediction confidence and the true label y, θ_k^t is the model parameter of the benign model f_k^t, and θ_k^{t+1} is the model parameter of the benign model f_k^{t+1} of the k-th benign client after the t-th round of training. Equation (1) is understood as: a benign client trains its model on its local benign data set.
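A possible PyTorch rendering of equation (1) is sketched below; the linear model, random data, and hyperparameters are assumptions for illustration only.

```python
import torch
import torch.nn as nn

def benign_client_update(model, loader, lr=0.01):
    """One local pass of equation (1): minimize the cross-entropy
    L(f_k^t(x), y) over the client's local dataset D_k."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)   # L(f_k^t(x), y)
        loss.backward()
        opt.step()
    return model.state_dict()          # theta_k^{t+1}

# Illustrative usage: a linear classifier on random 784-dim inputs.
model = nn.Linear(784, 10)
loader = [(torch.randn(32, 784), torch.randint(0, 10, (32,)))]
theta_next = benign_client_update(model, loader)
```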
For all defense clients participating in the training, each defense client screens from the defense patch data set the optimal defense patch data set that maximizes the information entropy content of the model, and then jointly optimizes its defense model with the optimal defense patch data set and the local data set.
When screening the optimal defense patch data set from the defense patch data set, each defense client uses an active-learning strategy to select candidate defense patches that maximize the information entropy content of the model; the larger the information entropy content, the higher the success rate of the patch for feature embedding. Specifically, the optimal defense patch data set DP_log that maximizes the information entropy content of the model is screened from the defense patch data set with equation (2):

$$DP_{log} = \arg\max\nolimits_{N} \big( -L_{2}[att, len(dp)] \cdot \log L_{2}[att, len(dp)] \big) \tag{2}$$

where dp is defense patch data selected from the defense patch data set DP, len(dp) is the patch size of dp, att is the success rate of attacking dp into other target classes, and N is the number of optimal defense patch data selected, i.e. the optimal defense patch data set DP_log contains N optimal defense patch data. L_2[att, len(dp)] expresses the relation between the attack success rate att and the patch size len(dp), namely L_2[att, len(dp)] = att · [max(DP) - len(dp)], where max(DP) is the patch size of the largest defense patch data in DP. Equation (2) is understood as trying different defense patch data so that the trigger success rate of feature embedding is highest while the patch stays small; over multiple cycles, N defense patch data are selected as optimal defense patch data and form the optimal defense patch data set DP_log.
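The screening of equation (2) might be sketched as follows, assuming patches are flat arrays and the per-patch attack success rate att has already been measured on held-out data.

```python
import math

def entropy_score(att, patch_len, max_len):
    """Score -L2 * log(L2) from equation (2), with
    L2 = att * (max_len - patch_len)."""
    l2 = att * (max_len - patch_len)
    return -l2 * math.log(l2) if l2 > 0 else float("-inf")

def select_optimal_patches(patches, attack_rates, n):
    """Return DP_log: the N patches with maximal entropy score,
    i.e. high trigger success rate att combined with small patch size."""
    max_len = max(len(p) for p in patches)
    ranked = sorted(range(len(patches)),
                    key=lambda i: entropy_score(attack_rates[i],
                                                len(patches[i]), max_len),
                    reverse=True)
    return [patches[i] for i in ranked[:n]]
```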
After obtaining the optimal defense patch data set DP_log, the defense client jointly optimizes the defense model with the optimal defense patch data set and the local data set according to equation (3):

$$\theta_{de}^{t+1} = \arg\min_{\theta_{de}^{t}} \sum_{(x,y)\in D_{de}} \Big[ L\big(f_{de}^{t}(x),\, y\big) + L\big(f_{de}^{t}(R(x, dp_{log})),\, \tau\big) \Big] \tag{3}$$

where (x, y) is a sample and its label from the local data set D_de, f_de^t(x) is the prediction confidence of the defense model of the de-th defense client in the t-th training round, τ is the defense target label, L(·) is the cross-entropy loss between the prediction confidence and its label (the true label y for clean samples, the defense target label τ for patched samples), dp_log is the optimal defense patch data selected from the optimal defense patch data set DP_log, R(x, dp_log) is the data obtained by superposing the optimal defense patch data dp_log onto the sample data x with the defense-patch superposition method R, and θ_de^t and θ_de^{t+1} are the model parameters of the defense models f_de^t and f_de^{t+1} respectively.
In the process of optimizing the defense model, the defense target label τ and the original target label y form a label mapping relation, which is stored in the index table Hashmap for subsequent poisoning detection.
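A sketch of the joint optimization of equation (3), including recording the label mapping in the index table Hashmap, follows; the additive superpose stand-in for R and the tau_of mapping from original labels to defense target labels are hypothetical choices, since the patent does not fix them.

```python
import torch
import torch.nn as nn

def superpose(x, dp):
    # Simplified R(x, dp_log): additive overlay of the defense patch
    # (a real implementation would paste the patch into a fixed region).
    return x + dp

def defense_client_update(model, loader, dp_log, tau_of, hashmap, lr=0.01):
    """One local pass of equation (3): clean loss L(f(x), y) plus the
    defense-target loss L(f(R(x, dp_log)), tau); records tau -> y in Hashmap."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for x, y in loader:
        opt.zero_grad()
        tau = tau_of(y)
        loss = loss_fn(model(x), y) + loss_fn(model(superpose(x, dp_log)), tau)
        loss.backward()
        opt.step()
        for t_lbl, orig in zip(tau.tolist(), y.tolist()):
            hashmap[t_lbl] = orig     # label mapping used at detection time
    return model.state_dict()

# Hypothetical label mapping: shift each class label by one.
tau_of = lambda y: (y + 1) % 10
```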
The benign clients and defense clients upload their locally optimized benign models and defense models to the server, and the server aggregates all benign models and defense models with equation (4) to obtain the federated learning model:

$$G_{t+1} = \frac{1}{K}\sum_{k=1}^{K} f_{k}^{t+1} + \frac{1}{DE}\sum_{de=1}^{DE} f_{de}^{t+1} \tag{4}$$

where G_t and G_{t+1} are the federated learning models of the t-th and (t+1)-th rounds respectively, f_k^{t+1} and f_de^{t+1} are the benign model and defense model of the (t+1)-th round, K and DE are the total numbers of benign models and defense models, 1/K is the weight scaling of the benign models, and 1/DE is the weight scaling of the defense models.
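Aggregation per equation (4) can be sketched over PyTorch state dicts, taking the formula at face value (1/K weight on benign models, 1/DE on defense models); how the two weighted sums are normalized against each other is not spelled out in the text, so this is only a literal sketch.

```python
def aggregate(benign_states, defense_states):
    """Equation (4): combine benign models with weight 1/K and defense
    models with weight 1/DE into the new federated model G_{t+1}."""
    K, DE = len(benign_states), len(defense_states)
    agg = {}
    for name in benign_states[0]:
        agg[name] = (sum(s[name] for s in benign_states) / K
                     + sum(s[name] for s in defense_states) / DE)
    return agg
```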
After aggregating all benign models and defense models into the federated learning model, the server issues the federated learning model to the benign clients and defense clients as the basis of the next round's benign models and defense models.
Step 3: repeat step 2 until the total number of rounds E is reached, updating the model parameters at the edge and at the server, to obtain the optimized last-round federated learning model G_best, which is used to detect poisoned samples.
Step 4: during detection, whether a test sample is poisoned is judged by whether the model's prediction for the test sample's target label and its prediction for the defense target label, obtained after the optimal defense patch data is added to the test sample, satisfy the label mapping relation (i.e., the correspondence of feature conflicts), thereby realizing federated learning poisoning detection.
Specifically, as shown in FIG. 3, whether the test sample is poisoned is judged from the label mapping relation as follows:

input the test sample S into the federated learning model G_best to obtain the prediction result G_best(S) for the target label;

add the optimal defense patch data dp_log from the optimal defense patch data set DP_log to the test sample S to obtain the data S + dp_log, and input it into the federated learning model G_best to obtain the prediction result G_best(S + dp_log) for the defense target label; the test sample S and the data S + dp_log form the sample pair to be detected;

when G_best(S) = Hashmap(G_best(S + dp_log)) holds, the test sample S is a benign sample;

when G_best(S) ≠ Hashmap(G_best(S + dp_log)), the test sample S is a poisoned sample;

where Hashmap(G_best(S + dp_log)) denotes the target label corresponding, in the label mapping relation, to the defense target prediction G_best(S + dp_log).
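The detection rule can be sketched as follows, reusing the superpose stub and the Hashmap filled during training in the earlier sketch; taking argmax over the logits stands in for the model's label prediction.

```python
import torch

def detect(model, s, dp_log, hashmap):
    """Compare G_best(S) with Hashmap(G_best(S + dp_log)):
    equal means benign, unequal means poisoned."""
    model.eval()
    with torch.no_grad():
        pred_plain = model(s.unsqueeze(0)).argmax(dim=1).item()
        pred_defense = model(superpose(s, dp_log).unsqueeze(0)).argmax(dim=1).item()
    expected = hashmap.get(pred_defense)   # mapped-back original label
    return "benign" if pred_plain == expected else "poisoned"
```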
The embodiments also provide a feature-countermeasure-based federated learning poisoning detection apparatus, which includes a computer memory, a computer processor, and a computer program stored in the computer memory and executable on the computer processor; the computer processor implements the above feature-countermeasure-based federated learning poisoning detection method when executing the computer program.
In practical applications, the processor may be implemented by a Central Processing Unit (CPU) of the base station server, a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Aiming at the problem that federated learning poisoning attacks lack interpretation, the feature-countermeasure-based federated learning poisoning detection method and device can explain the working mechanism of poisoning attacks on the federated learning model, analyze the feasibility of detection, and improve the robustness of the model. Meanwhile, before the defense patch is uploaded to the server, a semi-supervised learning mode selects the defense patch that trains the model fastest or helps defense-patch embedding the most, so federated learning converges faster and no performance is lost by introducing countermeasure clients. In addition, the model structure does not need to be modified, so both organizations and companies can use it, reducing usage cost and operational complexity.
The above-mentioned embodiments are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only the most preferred embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions, equivalents, etc. made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (8)

1. A feature countermeasure based federated learning poisoning detection method is characterized by comprising the following steps:
dividing all clients of each round of parameter training into benign clients and defense clients, and configuring a defense patch data set for each defense client;
during each training round, the benign client optimizes the benign model with its local data set; the defense client screens from the defense patch data set the optimal defense patch data set that maximizes the information entropy content of the model, then jointly optimizes the defense model with the optimal defense patch data set and the local data set; and the server aggregates all benign models and defense models to obtain the federated learning model;
after the multi-round training finishes, the last-round federated learning model is used to detect poisoned samples; during detection, whether a test sample is poisoned is judged by whether the model's prediction for the test sample's target label and its prediction for the defense target label, obtained after the optimal defense patch data is added to the test sample, satisfy the label mapping relation, thereby realizing federated learning poisoning detection.
2. The feature-countermeasure-based federated learning poisoning detection method of claim 1, wherein the benign client optimizes the benign model on its local data set according to equation (1):

$$\theta_{k}^{t+1} = \arg\min_{\theta_{k}^{t}} \sum_{(x,y)\in D_{k}} L\big(f_{k}^{t}(x),\, y\big) \tag{1}$$

where (x, y) is a sample and its label from the local data set D_k, f_k^t(x) is the prediction confidence of the benign model of the k-th benign client in the t-th training round, L(·) is the cross-entropy loss between the prediction confidence and the true label y, and θ_k^t and θ_k^{t+1} are the model parameters of the benign models f_k^t and f_k^{t+1} of the k-th benign client.
3. The feature-countermeasure-based federated learning poisoning detection method of claim 1, wherein the defense client uses an active-learning strategy and screens from the defense patch data set, with equation (2), the optimal defense patch data set DP_log that maximizes the information entropy content of the model:

$$DP_{log} = \arg\max\nolimits_{N} \big( -L_{2}[att, len(dp)] \cdot \log L_{2}[att, len(dp)] \big) \tag{2}$$

where dp is defense patch data selected from the defense patch data set DP, len(dp) is the patch size of dp, att is the success rate of attacking dp into other target classes, N is the number of optimal defense patch data selected (i.e. DP_log contains N optimal defense patch data), and L_2[att, len(dp)] = att · [max(DP) - len(dp)] expresses the relation between the attack success rate att and the patch size len(dp), with max(DP) the patch size of the largest defense patch data in DP.
4. The feature-countermeasure-based federated learning poisoning detection method of claim 1 or 3, wherein the defense client jointly optimizes the defense model with the optimal defense patch data set and the local data set according to equation (3):

$$\theta_{de}^{t+1} = \arg\min_{\theta_{de}^{t}} \sum_{(x,y)\in D_{de}} \Big[ L\big(f_{de}^{t}(x),\, y\big) + L\big(f_{de}^{t}(R(x, dp_{log})),\, \tau\big) \Big] \tag{3}$$

where (x, y) is a sample and its label from the local data set D_de, f_de^t(x) is the prediction confidence of the defense model of the de-th defense client in the t-th training round, τ is the defense target label, L(·) is the cross-entropy loss, dp_log is the optimal defense patch data selected from the optimal defense patch data set DP_log, R(x, dp_log) is the data obtained by superposing dp_log onto the sample data x with the defense-patch superposition method R, and θ_de^t and θ_de^{t+1} are the model parameters of the defense models f_de^t and f_de^{t+1} respectively.
5. The feature-countermeasure-based federated learning poisoning detection method of claim 1, wherein the server aggregates all benign models and defense models with equation (4) to obtain the federated learning model:

$$G_{t+1} = \frac{1}{K}\sum_{k=1}^{K} f_{k}^{t+1} + \frac{1}{DE}\sum_{de=1}^{DE} f_{de}^{t+1} \tag{4}$$

where G_t and G_{t+1} are the federated learning models of the t-th and (t+1)-th rounds respectively, f_k^{t+1} and f_de^{t+1} are the benign model and defense model of the (t+1)-th round, and K and DE are the total numbers of benign models and defense models.
6. The feature-countermeasure-based federated learning poisoning detection method of claim 1, wherein whether the test sample is poisoned is judged from the label mapping relation as follows:

input the test sample S into the federated learning model G_best to obtain the prediction result G_best(S) for the target label;

add the optimal defense patch data dp_log from the optimal defense patch data set DP_log to the test sample S to obtain the data S + dp_log, and input it into the federated learning model G_best to obtain the prediction result G_best(S + dp_log) for the defense target label;

when G_best(S) = Hashmap(G_best(S + dp_log)) holds, the test sample S is a benign sample;

when G_best(S) ≠ Hashmap(G_best(S + dp_log)), the test sample S is a poisoned sample;

where Hashmap(G_best(S + dp_log)) denotes the target label corresponding, in the label mapping relation, to the defense target prediction G_best(S + dp_log).
7. The feature-countermeasure-based federated learning poisoning detection method of claim 1, wherein during each training round the benign clients and defense clients upload the locally optimized benign models and defense models to the server, and the server, after aggregating all benign models and defense models into the federated learning model, issues the federated learning model to the benign clients and defense clients as the basis of the next round's benign models and defense models.
8. A feature-countermeasure-based federated learning poisoning detection apparatus, comprising a computer memory, a computer processor, and a computer program stored in the computer memory and executable on the computer processor, wherein the computer processor implements the feature-countermeasure-based federated learning poisoning detection method of any one of claims 1 to 7 when executing the computer program.
CN202110203320.7A 2021-02-23 2021-02-23 Feature countermeasure based federated learning poisoning detection method and device Pending CN112883377A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110203320.7A CN112883377A (en) 2021-02-23 2021-02-23 Feature countermeasure based federated learning poisoning detection method and device


Publications (1)

Publication Number Publication Date
CN112883377A true CN112883377A (en) 2021-06-01

Family

ID=76054191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110203320.7A Pending CN112883377A (en) 2021-02-23 2021-02-23 Feature countermeasure based federated learning poisoning detection method and device

Country Status (1)

Country Link
CN (1) CN112883377A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688387A (en) * 2021-07-30 2021-11-23 华东师范大学 Defense method for federal learning poisoning attack based on server and client dual detection
CN113688387B (en) * 2021-07-30 2023-08-22 华东师范大学 Method for defending federal learning poisoning attack based on dual detection of server and client
CN116010944A (en) * 2023-03-24 2023-04-25 北京邮电大学 Federal computing network protection method and related equipment
CN116010944B (en) * 2023-03-24 2023-06-20 北京邮电大学 Federal computing network protection method and related equipment
CN116050548A (en) * 2023-03-27 2023-05-02 深圳前海环融联易信息科技服务有限公司 Federal learning method and device and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination