CN114494771B - Federated learning image classification method capable of defending against backdoor attacks - Google Patents
Federated learning image classification method capable of defending against backdoor attacks
- Publication number: CN114494771B
- Application number: CN202210036245.4A
- Authority: CN (China)
- Legal status: Active
Classifications
- G06F18/2415 — Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06F18/2135 — Feature extraction, e.g. by transforming the feature space; summarisation; mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
- G06F18/22 — Matching criteria, e.g. proximity measures
- G06F18/23213 — Non-hierarchical clustering techniques using statistics or function optimisation, with fixed number of clusters, e.g. K-means clustering
- G06N3/08 — Computing arrangements based on biological models; neural networks; learning methods
Abstract
The invention discloses a federated learning image classification method capable of defending against backdoor attacks. The method applies matrix dimension reduction and a clustering algorithm to the gradients submitted by the Worker nodes and selects only the gradients submitted by normal Worker nodes to participate in aggregation, thereby eliminating the possibility of a backdoor being implanted into the global model.
Description
Technical Field
The invention belongs to the technical field of machine learning security, and particularly relates to a federated learning image classification method capable of defending against backdoor attacks.
Background
Data silos and data privacy are among the main factors limiting the development of artificial intelligence technology. Federated learning was proposed to break through this limitation: it is a machine learning framework designed for distributed data, in which the participants in model training cooperatively train a global model without sharing their data. It breaks down data silos while preserving data privacy, accelerates machine learning model training, and is applicable to unbalanced and non-independent, non-identically distributed (non-IID) data. Many machine learning tasks, including image classification, can be trained with federated learning.
In federated learning, a typical training architecture is the parameter server (PS) architecture, which consists of Server nodes and Worker nodes. Model training under the PS architecture mainly comprises four steps. First, each Worker node collects image data for local training, receives the global model from the Server node, and trains locally with the received global model to obtain the gradient of its local model. Second, the Worker node sends the gradient of its local model to the Server node to update the global model. Third, the Server node updates the global model for the next iteration using the gradients received from the Worker nodes. Fourth, the Server node broadcasts the updated global model to all Worker nodes and the next iteration begins. In PS-based federated learning, Worker nodes are typically deployed at edge nodes, while the Server node is deployed in the cloud.
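For illustration, one round under the PS architecture can be sketched in Python as follows. This is a minimal sketch only; the node objects and the method names local_train, aggregate, and receive are illustrative placeholders and are not defined by the patent:

```python
# Minimal sketch of one parameter-server (PS) round with in-process "nodes".
# local_train, aggregate, and receive are hypothetical placeholders for the four steps.

def ps_round(global_model, workers, server):
    gradients = []
    for worker in workers:                                   # steps 1-2: each Worker trains
        grad = worker.local_train(global_model)              # locally on its own image data
        gradients.append(grad)                               # and uploads its gradient
    new_model = server.aggregate(global_model, gradients)    # step 3: Server aggregates
    for worker in workers:                                   # step 4: Server broadcasts the
        worker.receive(new_model)                            # updated global model
    return new_model
```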
The federated averaging algorithm (FedAvg) is a classical aggregation algorithm, expressed as:

$$w_{t+1} = w_t + \sum_{k=1}^{K} \frac{n_k}{n}\,\Delta w_k,\qquad n = \sum_{k=1}^{K} n_k$$

where $w_t$ denotes the parameters of the global model at round $t$, $\Delta w_k$ is the gradient of Worker node $k$, $K$ is the number of Worker nodes participating in aggregation, and $n_k$ is the size of the local dataset of Worker node $k$.
Each Worker node trains with the latest received global model, its local data, and a training method such as stochastic gradient descent (SGD), and then uploads the resulting gradient to the Server node for aggregation.
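A minimal sketch of this weighted aggregation, assuming each gradient and the global parameters are stored as NumPy arrays; the function name fed_avg is illustrative rather than taken from the patent:

```python
import numpy as np

def fed_avg(w_t, gradients, data_sizes):
    """Weighted FedAvg update: w_{t+1} = w_t + sum_k (n_k / n) * dw_k."""
    n = float(sum(data_sizes))
    update = sum((n_k / n) * dw for dw, n_k in zip(gradients, data_sizes))
    return w_t + update

# usage: new_global = fed_avg(global_params, worker_grads, worker_dataset_sizes)
```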
However, the distributed and privacy-preserving nature of the training data makes federated learning vulnerable. The most typical attack in federated learning is the backdoor attack. During model training, an attacker can add backdoored image data to the local dataset of a Worker node. That Worker node then trains on the poisoned local data and the backdoor is implanted into its local model; when the Server node updates the global model with that Worker node's gradient, the backdoor is implanted into the global model. When the global model is used for image classification, image samples carrying the backdoor are misclassified into the category designated by the attacker.
Under the PS architecture, one of the main approaches to the security problem of federated learning is to design an effective secure aggregation algorithm at the Server node, so as to weaken or even eliminate the influence of malicious participants and improve the robustness of federated learning. Many algorithms have been proposed to counter backdoor attacks in federated learning, but they have strong limitations. Mean- or median-based robust algorithms fail completely when malicious attackers make up more than half of the participants and are unsuitable for non-IID scenarios; representative algorithms include Multi-Krum, GeoMed, and RFA. Other algorithms require information beyond the model gradient to train a malicious-update detector or to identify attackers, which conflicts with the privacy-preservation principle; representative algorithms include Zeno and the spectral detector. The FoolsGold algorithm, while suitable for non-IID scenarios and not assuming that malicious attackers are fewer than normal Workers, is highly dependent on the choice of machine learning model, and different choices can produce diametrically opposite results. Thus, many otherwise strong algorithms do not solve the backdoor attack problem in federated learning.
Disclosure of Invention
In view of the above, the present invention provides a federated learning image classification method capable of defending against backdoor attacks. When image classification is performed with federated learning, the method defends against backdoor attacks, maintaining high classification accuracy on normal image data while accuracy on backdoored image data remains very low.
The invention provides a federated learning image classification method capable of defending against backdoor attacks, comprising the following steps:
Step 1: determine the image sample data for local training of the Worker nodes and the backdoor attack mode;
Step 2: each Worker node receives the global model issued by the Server node and trains on its local image data with the chosen training method; after local training is finished, it uploads its gradient to the Server node;
Step 3: the Server node flattens the gradient submitted by each Worker node into one dimension and assembles the flattened gradients into a matrix, in which each row represents the gradient submitted by one Worker node, and reduces the dimension of the matrix; in the dimension-reduced matrix, it computes, for each Worker node, the sum of Euclidean distances between its gradient and the gradients submitted by the remaining Worker nodes, and eliminates the gradients of Worker nodes whose distance sum exceeds a threshold ε1; it performs cluster analysis on the remaining gradients; within each class, it computes the cosine similarity between the gradient submitted by each Worker node and the gradients submitted by the other Worker nodes in that class, averages the similarities for each Worker node, and takes the median of these averages as the similarity of the class; it then selects the class with the smallest similarity and eliminates from it the Worker nodes whose distance exceeds a threshold ε2;
Step 4: the Server node aggregates the gradients of the Worker nodes retained after step 3 to generate the global model for the next iteration, and issues the global model to each Worker node;
Step 5: return to step 2 until the global model converges; the final global model is taken as the trained federated learning image classification model capable of defending against backdoor attacks;
Step 6: preprocess the image to be classified and input it into the trained federated learning image classification model capable of defending against backdoor attacks to obtain the class of the image to be classified.
Further, the image sample data in step 1 is obtained by normalizing the pixel values of the image data.
Further, the Server node in step 3 assembles the gradients submitted by the Worker nodes into a matrix as follows:
The Server node flattens each collected Worker node gradient into a one-dimensional vector and stacks these vectors along the row dimension to form a matrix; each row is the gradient submitted by one Worker node, and each column is the set of values, across Worker nodes, of the gradient for one particular model parameter.
Further, the dimension reduction in step 3 is performed as follows: the matrix is reduced to 2 dimensions by PCA.
Further, the threshold ε1 in step 3 is set to 8.
Further, in step 3, the cosine similarity between the gradient submitted by each Worker node in a class and the gradients submitted by the remaining Worker nodes in that class is computed on the original (non-reduced) gradients, with the calculation formula:

$$\cos(\Delta w_i, \Delta w_j) = \frac{\sum_{k=1}^{n} \Delta w_{ik}\,\Delta w_{jk}}{\sqrt{\sum_{k=1}^{n} \Delta w_{ik}^{2}}\;\sqrt{\sum_{k=1}^{n} \Delta w_{jk}^{2}}}$$

where $\Delta w_i$ and $\Delta w_j$ are the gradients submitted by Worker node $i$ and Worker node $j$, $\Delta w_{ik}$ is the gradient of Worker node $i$ at the $k$-th parameter, $\Delta w_{jk}$ is the gradient of Worker node $j$ at the $k$-th parameter, and $n$ is the number of model parameters.
Further, the threshold ε2 in step 3 is set to 4.
Beneficial effects:
The invention applies matrix dimension reduction and a clustering algorithm to the gradients submitted by the Worker nodes and finally selects only the gradients submitted by normal Worker nodes to participate in aggregation, thereby eliminating the possibility of a backdoor being implanted into the global model.
Drawings
Fig. 1 is a training flowchart of the federated learning image classification method capable of defending against backdoor attacks.
Detailed Description
The invention will now be described in detail by way of example with reference to the accompanying drawings.
The invention provides a federated learning image classification method capable of defending against backdoor attacks. Its basic idea is as follows: PCA is used to reduce the dimension of the gradients received by the Server node, KMeans++ is then used to cluster the dimension-reduced gradients, and the class of normal Worker nodes is selected for aggregation. This processing removes the possibility of the backdoor being implanted into the global model, so the global model's accuracy on backdoored images is very low while high classification accuracy is maintained on normal images.
The federated learning image classification method capable of defending against backdoor attacks specifically comprises the following steps:
Step 1: determine the image sample data and the backdoor attack model used for training the federated learning model capable of defending against backdoor attacks.
In the invention, the image sample data is formed by an image preprocessing step that normalizes the pixel values of the image data; the backdoor attack adds a triangle-shaped backdoor pattern to the image.
Step 2: each Worker node receives the global model issued by the Server node and trains on its local image data with the chosen training method.
Each Worker node receives the global model from the Server node and trains on its local image data. The local data of a backdoored Worker node contains both normal image samples and backdoored image samples. Training uses gradient descent with the Adam optimizer, and the gradient update of the local model is obtained after multiple rounds of training.
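A minimal PyTorch sketch of this local training step, assuming a generic classification model and a local DataLoader; returning the gradient as the parameter difference after local training is one common convention and an assumption here, not a detail fixed by the patent:

```python
import copy
import torch

def local_train(global_model, loader, epochs=1, lr=1e-3, device="cpu"):
    """Train a copy of the global model on local image data with Adam and
    return the update as the flattened parameter change (local - global)."""
    model = copy.deepcopy(global_model).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
    # flatten the update into one vector for upload to the Server node
    delta = torch.cat([
        (p_local - p_global).detach().flatten()
        for p_local, p_global in zip(model.parameters(), global_model.parameters())
    ])
    return delta
```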
Step 3: after local training is finished, each Worker node (including both the backdoor attacker's Worker nodes and the normal Worker nodes) uploads its gradient to the Server node.
Step 4: the Server node assembles the gradients submitted by the Worker nodes into a matrix and reduces its dimension using principal component analysis (PCA).
The Server node flattens each received Worker gradient into one dimension and stacks the gradients of the Worker nodes row by row into a matrix. If the number of Worker nodes is N and the number of model parameters is m, the gradient matrix has dimension N×m. PCA is then applied to the gradient matrix, reducing its dimension to N×2.
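A sketch of this step with NumPy and scikit-learn, assuming each uploaded gradient is already a flat vector of length m:

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_gradients(gradients):
    """Stack N flat gradients (length m each) into an N x m matrix and
    reduce it to N x 2 with PCA."""
    G = np.stack([np.asarray(g, dtype=np.float64).ravel() for g in gradients])  # N x m
    reduced = PCA(n_components=2).fit_transform(G)                               # N x 2
    return G, reduced
```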
Step 5: after PCA dimension reduction, compute for each Worker node the sum of Euclidean distances between its gradient and the gradients submitted by the remaining Worker nodes, and eliminate the Worker nodes whose distance sum exceeds the threshold ε1.
After PCA dimension reduction, the gradient submitted by each Worker node is a 2-dimensional vector, and the sum of Euclidean distances to the gradients of the remaining Worker nodes is:

$$D_i = \sum_{j=1,\, j\neq i}^{N} \left\lVert \widetilde{\Delta w}_i - \widetilde{\Delta w}_j \right\rVert_2$$

where $D_i$ is the sum of Euclidean distances between Worker node $i$ and the remaining Worker nodes, $\widetilde{\Delta w}_i$ and $\widetilde{\Delta w}_j$ are the dimension-reduced gradients of Worker node $i$ and Worker node $j$, and $\lVert\cdot\rVert_2$ is the 2-norm. After the distance sum $D_i$ of each Worker node is computed, the values are sorted in ascending order and made dimensionless using the median, and the Worker nodes whose values exceed the threshold ε1 = 8 are removed.
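A sketch of this filtering step, assuming the median-based dimensionless processing means dividing each distance sum by the median distance sum (an interpretation, since the patent does not spell out the exact normalization):

```python
import numpy as np

def filter_by_distance(reduced, eps1=8.0):
    """Keep Worker nodes whose median-normalized sum of Euclidean distances
    to all other reduced gradients does not exceed eps1."""
    diffs = reduced[:, None, :] - reduced[None, :, :]        # N x N x 2 pairwise differences
    dists = np.linalg.norm(diffs, axis=2)                    # N x N Euclidean distances
    D = dists.sum(axis=1)                                     # D_i, sum over j (D_ii = 0)
    normalized = D / np.median(D)                             # dimensionless via the median
    keep = np.where(normalized <= eps1)[0]                    # indices of retained Workers
    return keep
```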
Step 6: perform cluster analysis on the set obtained in step 5 using the KMeans++ clustering algorithm.
When clustering with KMeans++, cluster counts from 1 to (N−h) are tried, where h is the number of Worker nodes already removed; the clustering quality is evaluated with the silhouette coefficient, and the clustering with the best silhouette coefficient is chosen as the final result.
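A sketch of this model-selection loop with scikit-learn. Because the silhouette coefficient is only defined for at least two clusters, the sketch starts at k = 2, which is an assumption about how the boundary case is handled:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_kmeans(reduced_kept):
    """Cluster the retained reduced gradients with k-means++ and pick the
    cluster count with the highest silhouette coefficient."""
    n = reduced_kept.shape[0]
    best_labels, best_score = None, -np.inf
    for k in range(2, n):                                    # silhouette needs 2 <= k < n
        km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0)
        labels = km.fit_predict(reduced_kept)
        score = silhouette_score(reduced_kept, labels)
        if score > best_score:
            best_labels, best_score = labels, score
    return best_labels
```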
Step 7: after KMeans++ clustering, within each class, compute the cosine similarity between the gradient submitted by each Worker node in the class and the gradients submitted by the other Worker nodes in the class, average the similarities for each Worker node, and take the median of these averages as the similarity of the class.
Suppose the clustering in step 6 produces T classes. For each class t ∈ {1, …, T}, the cosine similarity between the gradient of each Worker node in the class and the gradients of the remaining Worker nodes in the class is computed on the original gradients as:

$$\cos(\Delta w_i, \Delta w_j) = \frac{\sum_{k=1}^{n} \Delta w_{ik}\,\Delta w_{jk}}{\sqrt{\sum_{k=1}^{n} \Delta w_{ik}^{2}}\;\sqrt{\sum_{k=1}^{n} \Delta w_{jk}^{2}}}$$

The similarity set of Worker node $i$ with the other Worker nodes in class $t$ is $S_i = \{\cos(\Delta w_i, \Delta w_j) \mid j \neq i,\ j \in t\}$, where $f$ is the number of Worker nodes in class $t$ and a Worker node does not compute similarity with itself. The similarity set of each Worker node is then averaged, the averages are sorted in ascending order, and the median is taken as the similarity of the class.
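A sketch of this per-class similarity score, assuming `gradients` is a list of the original flattened gradients in the same order as the rows of the reduced matrix, and taking the class score literally as the median of the per-node average similarities described above (the fallback value for a singleton class is an assumption):

```python
import numpy as np

def class_similarity(gradients, member_idx):
    """For one class, average each member's cosine similarity to the other
    members, then return the median of those averages as the class score."""
    f = len(member_idx)
    if f < 2:
        return 1.0                                            # assumption: singleton class scores 1
    G = np.stack([gradients[i] for i in member_idx]).astype(np.float64)  # f x n original gradients
    norms = np.linalg.norm(G, axis=1, keepdims=True)
    cos = (G @ G.T) / (norms @ norms.T)                        # f x f cosine-similarity matrix
    avg = (cos.sum(axis=1) - 1.0) / (f - 1)                    # exclude self-similarity (= 1)
    return np.median(avg)
```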
Step 8: compare the similarity of each class, select the class with the smallest similarity, and remove from it the Worker nodes whose distance exceeds the threshold ε2 = 4, using a procedure analogous to the distance-based filtering above.
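A sketch that ties steps 6 to 8 together: pick the class with the smallest similarity score and re-apply the median-normalized distance filter within it. It reuses the hypothetical helpers class_similarity and filter_by_distance defined above, and assumes `gradients`, `reduced`, and `labels` all refer to the Worker nodes retained after the ε1 filter, with matching indices:

```python
import numpy as np

def select_benign_workers(gradients, reduced, labels, eps2=4.0):
    """Select the class with the smallest similarity score and keep only its
    members that pass the distance filter with threshold eps2."""
    classes = np.unique(labels)
    scores = {c: class_similarity(gradients, np.where(labels == c)[0]) for c in classes}
    target = min(scores, key=scores.get)                       # class with minimum similarity
    members = np.where(labels == target)[0]
    kept_local = filter_by_distance(reduced[members], eps1=eps2)  # re-filter within the class
    return members[kept_local]
```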
Step 9: aggregate the gradients submitted by the Worker nodes retained after step 8 using FedAvg to generate the global model for the next iteration.
Step 10: issue the global model to each Worker node.
Step 11: repeat steps 2 to 9 until the global model converges, obtaining the final global model.
Step 12: preprocess the image to be classified and input it into the trained federated learning image classification model capable of defending against backdoor attacks to obtain the class of the image to be classified.
Finally, the performance of the invention was evaluated experimentally: defense tests for image classification under the backdoor attack model were performed on the MNIST and FEMNIST datasets. In terms of accuracy, the method fully resists the backdoor attack, with a test accuracy of about 0% on backdoored images and 99% on normal image data.
In summary, the above embodiments are only preferred embodiments of the present invention and are not intended to limit its protection scope. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.
Claims (7)
1. A federated learning image classification method capable of defending against backdoor attacks, characterized by comprising the following steps:
Step 1: determine the image sample data for local training of the Worker nodes and the backdoor attack mode;
Step 2: each Worker node receives the global model issued by the Server node and trains on its local image data with the chosen training method; after local training is finished, it uploads its gradient to the Server node;
Step 3: the Server node flattens the gradient submitted by each Worker node into one dimension and assembles the flattened gradients into a matrix, in which each row represents the gradient submitted by one Worker node, and reduces the dimension of the matrix; in the dimension-reduced matrix, it computes, for each Worker node, the sum of Euclidean distances between its gradient and the gradients submitted by the remaining Worker nodes, and eliminates the gradients of Worker nodes whose distance sum exceeds a threshold ε1; it performs cluster analysis on the remaining gradients; within each class, it computes the cosine similarity between the gradient submitted by each Worker node and the gradients submitted by the other Worker nodes in that class, averages the similarities for each Worker node, and takes the median of these averages as the similarity of the class; it then selects the class with the smallest similarity and eliminates from it the Worker nodes whose distance exceeds a threshold ε2;
Step 4: the Server node aggregates the gradients of the Worker nodes retained after step 3 to generate the global model for the next iteration, and issues the global model to each Worker node;
Step 5: return to step 2 until the global model converges; the final global model is taken as the trained federated learning image classification model capable of defending against backdoor attacks;
Step 6: preprocess the image to be classified and input it into the trained federated learning image classification model capable of defending against backdoor attacks to obtain the class of the image to be classified.
2. The federated learning image classification method according to claim 1, wherein the image sample data in step 1 is obtained by normalizing the pixel values of the image data.
3. The federated learning image classification method according to claim 1, wherein the Server node in step 3 assembles the gradients submitted by the Worker nodes into a matrix as follows:
The Server node flattens each collected Worker node gradient into a one-dimensional vector and stacks these vectors along the row dimension to form a matrix; each row of the matrix is the gradient submitted by one Worker node, and each column is the set of values, across Worker nodes, of the gradient for one particular model parameter.
4. The federated learning image classification method according to claim 1, wherein the dimension reduction in step 3 is performed as follows: the matrix is reduced to 2 dimensions by PCA.
5. The federated learning image classification method according to claim 1, wherein the threshold ε1 in step 3 is set to 8.
6. The federated learning image classification method according to claim 1, wherein in step 3 the cosine similarity between the gradient submitted by each Worker node in a class and the gradients submitted by the remaining Worker nodes in that class is computed on the original gradients, with the calculation formula:

$$\cos(\Delta w_i, \Delta w_j) = \frac{\sum_{k=1}^{n} \Delta w_{ik}\,\Delta w_{jk}}{\sqrt{\sum_{k=1}^{n} \Delta w_{ik}^{2}}\;\sqrt{\sum_{k=1}^{n} \Delta w_{jk}^{2}}}$$

where $\Delta w_i$ and $\Delta w_j$ are the gradients submitted by Worker node $i$ and Worker node $j$, $\Delta w_{ik}$ is the gradient of Worker node $i$ at the $k$-th parameter, $\Delta w_{jk}$ is the gradient of Worker node $j$ at the $k$-th parameter, and $n$ is the number of model parameters.
7. The federated learning image classification method according to claim 1, wherein the threshold ε2 in step 3 is set to 4.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210036245.4A (CN114494771B) | 2022-01-10 | 2022-01-10 | Federated learning image classification method capable of defending against backdoor attacks |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114494771A | 2022-05-13 |
| CN114494771B | 2024-06-07 |
Family
- ID: 81512882
- Family application: CN202210036245.4A, filed 2022-01-10 — CN114494771B (Active)
- Country status: CN — CN114494771B
Legal Events
- PB01 — Publication
- SE01 — Entry into force of request for substantive examination
- GR01 — Patent grant