CN112600794A - Method for detecting GAN attack in combined deep learning - Google Patents
- Publication number
- CN112600794A
- Authority
- CN
- China
- Prior art keywords
- attack
- deep learning
- classifier
- gradient
- gan
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1416—Event detection, e.g. attack signature detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/56—Computer malware detection or handling, e.g. anti-virus arrangements
- G06F21/566—Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Computer Hardware Design (AREA)
- Biomedical Technology (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Virology (AREA)
- Computer And Data Communications (AREA)
Abstract
The invention discloses a method for detecting generative adversarial network (GAN) attacks in federated deep learning, comprising the following steps: the server and the participants initiate a federated deep learning model training task and complete the initialization of the model; the server simulates a GAN attack and acquires sample data; the server constructs a GAN-attack-detection classifier with a deep neural network and trains it; layer classifiers extract features from the update gradients produced during federated training, and the extracted features are input into the total GAN-attack-detection classifier for prediction, yielding the probability that a participant's upload is malicious data containing wrong classification information. The method takes the update gradients uploaded by participants in federated learning as a training data set, extracts distinguishing features, and constructs a classifier that identifies and filters update gradients containing wrong classification information, protecting both participant privacy and model security.
Description
Technical Field
The invention belongs to the field of private-data protection and deep learning, and in particular relates to a method for detecting GAN attacks in federated deep learning.
Background
Deep learning has advanced continuously with algorithmic breakthroughs and the collection of big data. At the same time, stronger computing and storage on cloud platforms and improved hardware such as GPUs have driven its large-scale deployment. Deep learning is now applied in data mining, computer vision, medical diagnosis, security detection, and other fields.
Although deep learning brings great convenience to people's lives, it also raises data-security problems. Deep learning typically trains models in a centralized manner: the data of all participants are aggregated at the server side and processed by the same algorithm. If the data contain participants' private information (such as addresses, schedules, or personal habits), an organization's internal data, or a company's business secrets, centralized training creates a risk of privacy leakage. Training data sets are also increasingly difficult to collect. On one hand, with growing privacy awareness, users are reluctant to hand over their private data directly; on the other hand, countries around the world are actively enacting laws that protect data security and privacy. For example, the European Union's General Data Protection Regulation took effect in 2018.
To address data collection and privacy protection, the federated deep learning framework was proposed. The framework allows multiple participants to train local models on their own private data. Each participant uploads its local model gradients to help build a global model; the server collects the gradients uploaded by all users and aggregates them to update the global model; participants in turn download the latest global model to update their local models. This avoids the direct disclosure of private data that uploading training data would cause. Moreover, the framework supports asynchronous training through deep-learning optimization algorithms such as stochastic gradient descent (SGD). However, federated deep learning still cannot completely prevent information leakage: an attacker can join the training process and infer other participants' private information.
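The upload/aggregate/download loop described above can be sketched in a few lines. This is a minimal sketch assuming a linear model with squared loss and synchronous full-gradient rounds; the names `local_gradient` and `federated_round` are illustrative, not from the patent:

```python
import numpy as np

def local_gradient(w, X, y):
    """One participant's full-batch gradient of mean-squared error for a linear model."""
    return X.T @ (X @ w - y) / len(y)

def federated_round(w, datasets, lr=0.1):
    """Server-side step: collect each participant's local gradient and apply the average."""
    grads = [local_gradient(w, X, y) for X, y in datasets]
    return w - lr * np.mean(grads, axis=0)

# three participants with private data drawn from the same underlying model
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
datasets = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    datasets.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, datasets)
```

Only gradients cross the network; raw data (X, y) stays with each participant, which is the privacy property the framework aims for.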
Existing defense strategies are mostly passive defenses built on cryptographic techniques such as differential privacy and secure multi-party computation. Under these strategies, the model can only passively absorb malicious attacks; attackers can be neither punished nor evicted during federated training.
Disclosure of Invention
The invention aims to provide a method for detecting GAN attacks in federated deep learning.
The technical solution realizing this aim is a method for detecting generative adversarial network (GAN) attacks in federated deep learning, with the following specific steps:
step 1, initializing the federated deep learning model: the server and the participants initiate a federated deep learning model training task and complete the model's initialization;
step 2, the server simulates a GAN attack and acquires sample data: the server simulates the training process of the federated deep learning model, launches a GAN attack on this simulation model, and obtains the update-gradient data produced during the simulated training;
step 3, the server constructs a GAN-attack-detection classifier with a deep neural network and trains it;
step 4: layer classifiers extract features from the update gradients produced during federated training; the extracted features are input into the total GAN-attack-detection classifier for prediction, yielding the probability that a participant's upload is malicious data containing wrong classification information.
Preferably, the participants include attackers who launch GAN attacks and normal participants who do not launch attacks.
Preferably, the simulation model and the federated deep learning model have the same architecture and initial values.
Preferably, the server simulates the training process of the federated deep learning model as follows:
the server constructs an auxiliary training set according to the training targets and labels of the federated deep learning model;
the server divides the auxiliary training set into a malicious data set and a normal data set;
the server trains the simulation model with the malicious data set and the normal data set and launches a GAN attack, obtaining correctly classified simulation-model update-gradient data and malicious update-gradient data containing wrong classification information.
Preferably, the specific steps of constructing and training the GAN attack detection classifier are as follows:
step 3.1: the server labels the update-gradient data obtained from the simulated GAN attack in step 2, distinguishing normal update-gradient data from malicious gradient data, and trains a binary classification task to separate attackers from normal participants;
step 3.2: normalize the simulation-model update gradients uploaded by normal participants and attackers after training;
step 3.3: for the weight parameters of each layer of the simulation-model update gradient, construct a layer classifier whose input dimension is the dimension of that layer's weight parameters and whose output is a score: the probability that the update gradient is malicious data containing wrong classification information uploaded by a participant;
step 3.4: for the weight parameters of each layer of the update gradient, perform feature extraction with the layer classifier trained in step 3.3, taking the layer classifier's output as the weight feature of the corresponding layer; take the bias of the corresponding layer of the update gradient as the bias feature;
aggregate the weight features and bias features of all layers of the update gradient to obtain its overall feature;
construct a total classifier for detecting GAN attacks with a neural network; its input dimension is the dimension of the overall feature, and its output is a score: the probability that the model update is malicious data containing wrong classification information uploaded by a participant;
input the overall features of the update gradients, with their labels, into the total classifier to train it.
Preferably, the training process of the federated deep learning model specifically comprises:
the participants download the latest federated deep learning model parameters from the server, train locally, and upload the updated gradient-loss parameters; the server extracts features from the update gradients with the layer classifiers, inputs them into the trained total classifier for detecting GAN attacks, and obtains the probability that a participant's upload is malicious data containing wrong classification information.
Compared with the prior art, the invention has the following remarkable advantages: 1) the invention is the first to defend against GAN attacks through attack detection, protecting participant privacy and model security while keeping federated training intact; 2) the detection method is an active defense: it can identify an attacker's identity and actively restrict the attacker's behavior; 3) the method retains the distributed, parallel character of federated learning, adds no computation or communication overhead for participants, and does not affect model accuracy or convergence speed; 4) when training the GAN-attack-detection classifier, supervised and unsupervised learning are combined, improving the classifier's accuracy.
The present invention is described in further detail below with reference to the attached drawings.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a schematic diagram of constructing a GAN attack detection classifier according to the present invention.
Detailed Description
A method for detecting generative adversarial network (GAN) attacks in federated deep learning, shown in Figure 1, comprises the following specific steps:
Step 1, initialize the federated deep learning model: the server and the participants initiate the training task and complete the model's initialization. The task is a white-box training task; that is, the participants know the specific details of the model, including the number of parameters in each neural-network layer, the choice of activation functions, the loss function, and so on.
Step 2, the server simulates a GAN attack and acquires sample data: the server simulates the training process of the federated deep learning model (the simulation model), launches a GAN attack on it, and collects the participants' update-gradient data during the simulated training;
specifically, the participants trained by the simulation model include an "attacker" who initiates a GAN attack and a "normal participant" who does not initiate an attack. The server extracts the updated gradient data of the 'attacker' and the 'normal participant' in the simulation model training process, and the updated gradient data is used as a sample for detecting and generating the network attack resistance. The simulation model and the joint deep learning model in step 1 have the same architecture and initial values.
Specifically, the server simulates the training process of the federated deep learning model (the simulation model) as follows:
The server constructs an auxiliary training set Data_aux according to the training targets and labels of the federated deep learning model. The auxiliary set may be obtained by sampling from a public data set.
The server divides the auxiliary training set into two parts. The malicious data set is owned by the "attacker", who maliciously modifies its labels, participates in the simulated training, and uploads wrongly classified simulation-model update gradients. The normal data set Data_P is owned by the "normal participant", who does not modify the data in any way, participates in the simulated training, and uploads correctly classified update gradients.
The server trains the simulation model with the malicious data set and the normal data set and launches a GAN attack, obtaining correctly classified simulation-model update-gradient data uploaded by normal participants and malicious update-gradient data containing wrong classification information uploaded by attackers.
Specifically, during the federated training of the simulation model, the update gradients uploaded by the "attacker" and the "normal participant" reflect different features of their training data. For a training pair (x, y), where x is the input of the neural network and y its label, the network first propagates forward to compute the loss function Loss(f(x; w), y), where f(·) is the neural network model and w its parameters; the network then minimizes the empirical expectation E of the loss to obtain correct predictions. During back-propagation, the parameters w are updated using the gradient of the loss with respect to all parameters, ∇w Loss(f(x; w), y). The pair (x, y) can be expressed through this gradient in a certain form; that is, information about (x, y) can be deduced from ∇w Loss. The invention uses the update gradients uploaded by participants to infer whether a participant's training data set (x, y) has been injected with wrong classification information.
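The leakage argument above can be made concrete with a toy case: for a single sample and a linear model with squared loss, the per-sample gradient is a scalar multiple of the input itself, so (x, y) is directly recoverable from it. This is an illustrative sketch, not the patent's inference procedure:

```python
import numpy as np

# For one training pair (x, y) and a linear model f(x) = w @ x with squared loss
# 0.5 * (w @ x - y)**2, the gradient with respect to w is (w @ x - y) * x:
# a scalar multiple of the input x itself, so an uploaded per-sample gradient
# carries directly recoverable information about (x, y).
w = np.array([1.0, 2.0, -1.0])
x = np.array([0.5, -1.5, 2.0])
y = 3.0
grad = (w @ x - y) * x
# the gradient's direction coincides with x (up to sign)
cos = abs(grad @ x) / (np.linalg.norm(grad) * np.linalg.norm(x))
```

For deep networks the relation is less direct, but the same intuition is what both the GAN attack and this detection method exploit.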
In this step, the simulated training yields the update-gradient samples of the "attacker" and the "normal participant". These samples form the training data (f1, f2, ..., fn) of the GAN-attack-detection classifier, where fi denotes a gradient-loss parameter uploaded by the "attacker" or the "normal participant" during the simulated training. Each sample fi carries the corresponding label, "attacker" or "normal participant" P.
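The construction of the simulated "attacker" via label modification, described above, can be sketched as follows; the split function, its name, and the flipping rule are assumptions for illustration, since the patent does not prescribe a particular label-modification scheme:

```python
import numpy as np

def make_aux_splits(X, y, n_classes, frac_malicious=0.5, seed=0):
    """Split the auxiliary set into the 'attacker' part with deliberately wrong
    labels and the 'normal participant' part left untouched (illustrative)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    cut = int(len(y) * frac_malicious)
    mal_idx, norm_idx = idx[:cut], idx[cut:]
    # shift each malicious label by 1..n_classes-1 mod n_classes: guaranteed wrong
    y_mal = (y[mal_idx] + rng.integers(1, n_classes, size=cut)) % n_classes
    return (X[mal_idx], y_mal), (X[norm_idx], y[norm_idx])

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
y = rng.integers(0, 10, size=200)
(mal_X, mal_y), (norm_X, norm_y) = make_aux_splits(X, y, n_classes=10)
```

Training the simulation model on the two parts then yields the labeled "attacker"/"normal participant" gradient samples used below.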
Step 3, construct and train the GAN-attack-detection classifier: the server constructs a GAN-attack-detection layer classifier from the parameters of each neural-network layer, then builds the total GAN-attack-detection classifier on top of the layer classifiers.
As shown in Fig. 2, in a further embodiment, the specific steps of constructing the GAN-attack-detection classifier are as follows:
step 3.1: the server labels the update-gradient data obtained from the simulated GAN attack in step 2: normal update-gradient data uploaded by the normal participant is marked P, and malicious gradient data containing wrong classification information uploaded by the attacker is given the attacker label. A binary classification task is then trained with a neural network to distinguish the attacker from normal participants;
step 3.2: sample-data preprocessing: normalize the simulation-model update gradients uploaded by normal participants and attackers after training. This reduces the influence of differences in data volume and in the number of local training rounds on the magnitude of the gradient updates.
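One plausible reading of the normalization in step 3.2 is a global L2 rescaling of each participant's update; the patent does not fix a formula, so this is an assumed choice:

```python
import numpy as np

def normalize_update(grad_layers, eps=1e-12):
    """Rescale one participant's whole update to unit global L2 norm, damping the
    effect of different local data volumes and epoch counts on update magnitude
    (an assumed normalization; the patent does not specify the formula)."""
    flat = np.concatenate([g.ravel() for g in grad_layers])
    scale = np.linalg.norm(flat) + eps
    return [g / scale for g in grad_layers]

# a two-layer update: a 4x3 weight gradient and a length-3 bias gradient
update = [np.full((4, 3), 2.0), np.full(3, -1.0)]
normed = normalize_update(update)
total = np.linalg.norm(np.concatenate([g.ravel() for g in normed]))
```

After this step, updates from participants who trained on more data or for more epochs no longer dominate purely by magnitude.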
Step 3.3: construct and train the GAN-attack-detection layer classifiers: build a layer classifier for each layer's parameters of the simulation-model update gradient. A layer classifier is a neural network comprising an input layer, several hidden layers, and an output layer. Each layer of the update gradient contains a weight (W) and a bias (B); the layer's GAN-attack-detection classifier is built on the layer's weight parameters. Its input dimension is the dimension of the layer's weight parameters, and its output is a score: the probability that the model update is malicious data containing wrong classification information uploaded by a participant;
the weights of each layer of the simulation-model update gradient, together with the labels, are input into the corresponding GAN-attack-detection layer classifier to train it;
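A layer classifier of the kind trained here can be sketched with a logistic model standing in for the patent's multi-hidden-layer network, on synthetic, clearly separated gradient data; all names and the training setup are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class LayerClassifier:
    """Binary scorer for one layer's flattened weight update. A logistic model
    stands in for the patent's multi-hidden-layer network to keep this short."""
    def __init__(self, dim, seed=0):
        self.w = np.random.default_rng(seed).normal(scale=0.01, size=dim)
        self.b = 0.0

    def score(self, g):
        """Probability that the update carries wrong classification information."""
        return sigmoid(g @ self.w + self.b)

    def fit(self, G, labels, lr=0.5, epochs=300):
        for _ in range(epochs):
            err = sigmoid(G @ self.w + self.b) - labels  # gradient of log-loss
            self.w -= lr * G.T @ err / len(labels)
            self.b -= lr * err.mean()

# synthetic layer gradients: malicious updates drawn from a shifted distribution
rng = np.random.default_rng(1)
G = np.vstack([rng.normal(-1.0, 1.0, (100, 8)),   # normal participants, label 0
               rng.normal(+1.0, 1.0, (100, 8))])  # attackers, label 1
labels = np.r_[np.zeros(100), np.ones(100)]
clf = LayerClassifier(dim=8)
clf.fit(G, labels)
acc = np.mean((clf.score(G) > 0.5) == labels)
```

In the patent's design the last hidden layer's activations, not this final score, are what get passed on as the layer's weight feature.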
Step 3.4: construct and train the total GAN-attack-detection classifier: for the weight parameters of each layer of the update gradient, perform feature extraction with the layer classifier trained in step 3.3, taking the output of the layer classifier's last hidden layer as the weight-feature extraction result of that layer; take the bias of the corresponding layer of the update gradient as the layer's bias feature.
Aggregate the weight features and bias features of all layers into the overall feature of the update gradient by concatenation: L = (L1, ΔB1′, L2, ΔB2′, ..., Ln, ΔBn′), where Li is the weight feature extracted from layer i and ΔBi′ is the normalized bias feature of layer i.
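The aggregation of per-layer weight features Li and normalized bias features ΔBi′ into the overall feature L reads naturally as concatenation, which can be sketched as:

```python
import numpy as np

def overall_feature(weight_feats, bias_feats):
    """Concatenate per-layer weight features Li and normalized bias features
    dBi' into the total classifier's input L = (L1, dB1', ..., Ln, dBn')."""
    parts = []
    for Li, dBi in zip(weight_feats, bias_feats):
        parts.append(np.ravel(Li))
        parts.append(np.ravel(dBi))
    return np.concatenate(parts)

# e.g. two layers: weight features of size 4 and 2, bias features of size 3 and 1
L = overall_feature([np.ones(4), np.ones(2)], [np.zeros(3), np.zeros(1)])
```

The total classifier's input dimension is then simply `L.size`, fixed once the layer classifiers' hidden widths and the model's bias dimensions are fixed.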
Construct the total classifier for detecting GAN attacks with a neural network comprising an input layer, several hidden layers, and an output layer.
The input dimension of the total classifier is the dimension of the overall feature L, and its output is a score: the probability that the model update is malicious data containing wrong classification information uploaded by the participant.
The overall features of the update gradients, with their labels, are input into the total classifier to train it.
Step 4: layer classifiers extract features from the update gradients produced during federated training; the extracted features are input into the total GAN-attack-detection classifier for prediction, yielding the probability that a participant's upload is malicious data containing wrong classification information. If a gradient is malicious data uploaded by an attacker, the model is not updated with it; if it is data uploaded by a normal participant, the model is updated. Once an attacker is identified, the frequency with which it obtains the global model can be limited, or it can be excluded from the federated training process.
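The accept/reject behavior of step 4 (update with normal gradients, discard flagged ones) can be sketched as follows; the 0.5 threshold is an assumed choice, since the patent only specifies a probability output:

```python
import numpy as np

def filter_and_aggregate(updates, scores, threshold=0.5):
    """Discard updates whose malicious-probability score reaches the threshold,
    average the rest, and report the flagged participant indices (the 0.5
    threshold is an assumption; the patent specifies only a probability)."""
    kept = [u for u, s in zip(updates, scores) if s < threshold]
    flagged = [i for i, s in enumerate(scores) if s >= threshold]
    aggregated = np.mean(kept, axis=0) if kept else None
    return aggregated, flagged

updates = [np.array([1.0, 1.0]), np.array([3.0, 3.0]), np.array([100.0, -100.0])]
scores = [0.1, 0.2, 0.97]  # third participant scored as likely attacker
agg, flagged = filter_and_aggregate(updates, scores)
```

The flagged indices are also what the server would use to throttle or evict the identified attacker in later rounds.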
In a further embodiment, the training process of the federated deep learning model is as follows:
The participants download the latest federated deep learning model parameters from the server, train locally, and upload the updated gradient-loss parameters. The server extracts features from the uploaded update gradients with the layer classifiers, inputs them into the trained total classifier for detecting GAN attacks, and obtains the probability that a participant's upload is malicious data containing wrong classification information.
The invention realizes a GAN-attack detection method that can identify the attacker's identity, protecting the privacy of normally training participants and the security of the global model. The attacker is recognized while uploading malicious gradients, preventing further leakage of the victim's private information and degradation of model accuracy. The detection method is an active defense: once the attacker's identity is identified, the server can actively restrict or punish the attacker, avoiding further consumption of server resources. Training and prediction of the GAN-attack-detection classifier run on the server side, affecting neither the participants' computation and communication overhead nor the model's accuracy and convergence speed.
In feature extraction, the method normalizes the model update gradients uploaded by normal participants and attackers, reducing the influence of differing data volumes and numbers of local training rounds on update magnitude. The feature extraction also reduces the dimensionality of the large number of weights in the sample data, improving the classifier's training and prediction efficiency. In the classifier's training stage, the invention combines supervised and unsupervised learning: in the initial stage, the GAN-attack process is simulated to obtain gradient data labeled as normal participants or attackers for supervised learning; during normal federated training, some unlabeled gradients are randomly selected for unsupervised learning; and after a certain amount of training, supervised learning is performed again to ensure the classifier's correctness.
In summary, the present invention has the following features:
(1) can detect GAN attack, protect participant's privacy
The invention is the first to defend against GAN attacks from the attack-detection direction. All update gradients are used as a training data set, distinguishing features are extracted, and a classifier is built to filter out update gradients containing wrong classification information, protecting participant privacy.
(2) Protects the security of the global model
A GAN attacker may inject wrong classification information into the model, reducing the accuracy of the global model. By identifying the attacker with the GAN-detection classifier, the invention protects model security. The detection method is an active defense: once the attacker's identity is identified, the server can actively restrict or punish the attacker, preventing further consumption of server resources.
(3) No ciphertext operations; model capability preserved
Most existing defense strategies are passive defenses built on cryptographic techniques such as differential privacy and secure multi-party computation. This method retains the distributed, parallel, non-ciphertext character of federated learning; training and prediction of the attack-detection classifier run on the server side, adding no computation or communication overhead for participants and leaving model accuracy and convergence speed unaffected.
(4) Combines supervised and unsupervised learning
When training the GAN-attack-detection classifier, the invention combines supervised and unsupervised learning, improving the classifier's accuracy.
Claims (6)
1. A method for detecting generative adversarial network (GAN) attacks in federated deep learning, characterized by comprising the following specific steps:
step 1, initializing the federated deep learning model: the server and the participants initiate a federated deep learning model training task, jointly determine the architecture, targets, labels, and other settings of the model, and complete the initialization of the model;
step 2, the server simulates a GAN attack and acquires sample data: the server simulates the training process of the federated deep learning model, launches a GAN attack on the simulation model, and obtains the update-gradient data produced during the simulated training;
step 3, the server constructs a GAN-attack-detection classifier with a deep neural network and trains it;
step 4: layer classifiers extract features from the update gradients produced during federated training; the extracted features are input into the total GAN-attack-detection classifier for prediction, yielding the probability that a participant's upload is malicious data containing wrong classification information.
2. The method for detecting generative adversarial network attacks in federated deep learning according to claim 1, wherein the participants include an attacker who launches GAN attacks and normal participants who do not.
3. The method for detecting generative adversarial network attacks in federated deep learning according to claim 1, wherein the simulation model and the federated deep learning model have the same architecture and initial values.
4. The method for detecting generative adversarial network attacks in federated deep learning according to claim 1, wherein the server simulates the training process of the federated deep learning model as follows:
the server constructs an auxiliary training set according to the training targets and labels of the federated deep learning model;
the server divides the auxiliary training set into a malicious data set and a normal data set;
the server trains the simulation model with the malicious data set and the normal data set and launches a GAN attack, obtaining correctly classified simulation-model update-gradient data and malicious update-gradient data containing wrong classification information.
5. The method for detecting generative adversarial network attacks in federated deep learning according to claim 1, wherein the GAN-attack-detection classifier is constructed and trained as follows:
step 3.1: the server labels the update-gradient data obtained from the simulated GAN attack in step 2, distinguishing normal update-gradient data from malicious gradient data, and trains a binary classification task to separate attackers from normal participants;
step 3.2: normalize the simulation-model update gradients uploaded by normal participants and attackers after training;
step 3.3: for the weight parameters of each layer of the simulation-model update gradient, construct a layer classifier whose input dimension is the dimension of that layer's weight parameters and whose output is a score: the probability that the update gradient is malicious data containing wrong classification information uploaded by a participant;
step 3.4: for the weight parameters of each layer of the update gradient, perform feature extraction with the layer classifier trained in step 3.3, taking the layer classifier's output as the weight feature of the corresponding layer; take the bias of the corresponding layer of the update gradient as the bias feature;
aggregate the weight features and bias features of all layers of the update gradient to obtain its overall feature;
construct a total classifier for detecting GAN attacks with a neural network; its input dimension is the dimension of the overall feature, and its output is a score: the probability that the model update is malicious data containing wrong classification information uploaded by a participant;
and inputting the overall characteristics of the updated gradient and the corresponding labels into a general classifier to train the overall characteristics.
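The per-layer scoring and feature aggregation of claim 5 can be sketched as follows. This is a hypothetical illustration only, not the claimed implementation: the layer classifiers and the global classifier are stood in for by an untrained logistic scorer (`LogisticScorer`), and the two-layer `update` shapes are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(grad):
    """Step 3.2: scale one layer's weight gradient to zero mean, unit variance."""
    flat = np.asarray(grad, dtype=float).ravel()
    return (flat - flat.mean()) / (flat.std() + 1e-8)

class LogisticScorer:
    """Minimal logistic scorer standing in for both the per-layer classifiers
    (step 3.3) and the global classifier: its output is a score in (0, 1),
    read as the probability that the update is malicious data carrying
    wrong classification information."""

    def __init__(self, dim):
        self.w = rng.normal(0.0, 0.01, dim)
        self.b = 0.0

    def score(self, x):
        return float(1.0 / (1.0 + np.exp(-(x @ self.w + self.b))))

def overall_feature(update, layer_clfs):
    """Step 3.4: the weight feature of a layer is its layer classifier's score;
    the bias feature is the layer's bias gradient itself. Concatenating them
    over all layers gives the overall feature of the update."""
    feats = []
    for (w_grad, b_grad), clf in zip(update, layer_clfs):
        feats.append(clf.score(normalize(w_grad)))       # weight feature
        feats.extend(np.asarray(b_grad, float).ravel())  # bias feature
    return np.array(feats)

# Hypothetical two-layer gradient update: (weight_grad, bias_grad) per layer.
update = [(rng.normal(size=16), rng.normal(size=4)),
          (rng.normal(size=4),  rng.normal(size=2))]
layer_clfs = [LogisticScorer(16), LogisticScorer(4)]
feat = overall_feature(update, layer_clfs)   # dimension 1 + 4 + 1 + 2 = 8
global_clf = LogisticScorer(feat.size)
p_malicious = global_clf.score(feat)         # score in (0, 1)
```

In a full system the scorers would be trained neural networks fitted on the labeled gradient data of step 3.1; only the data flow (normalize, score per layer, concatenate, score overall) follows the claim.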
6. The method for detecting a generative adversarial network (GAN) attack in combined deep learning according to claim 1, wherein the training process of the combined deep learning model specifically comprises:
the participant downloads the latest parameters of the combined deep learning model from the server, performs local training, and uploads the gradient update; during training of the combined deep learning model, the server extracts features of the gradient update through the layer classifiers, inputs them into the trained global classifier for detecting the GAN attack, and obtains the probability that the update is malicious data, uploaded by a participant, containing wrong classification information.
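The server-side loop of claim 6 can be sketched as a single federated round. This is a hedged sketch under invented assumptions: `detect` is a stand-in norm-based scorer rather than the trained global classifier, and `honest`/`attacker` are toy local-training callables, not real participants.

```python
import numpy as np

rng = np.random.default_rng(1)

def server_round(params, participants, detect, threshold=0.5):
    """One hypothetical round of the claim-6 protocol: each participant
    downloads the latest model parameters, trains locally, and uploads a
    gradient update; the server scores every update with the GAN-attack
    detector and aggregates only the updates it does not flag."""
    accepted = []
    for local_train in participants:
        update = local_train(params)        # participant-side local training
        if detect(update) < threshold:      # probability the update is malicious
            accepted.append(update)
    if accepted:                            # simple FedAvg-style aggregation
        params = params + np.mean(accepted, axis=0)
    return params, len(accepted)

# Stand-in detector: flags large-norm updates as likely malicious.
detect = lambda u: 1.0 / (1.0 + np.exp(-(np.linalg.norm(u) - 1.0)))
honest = lambda p: rng.normal(0.0, 0.01, p.shape)    # small, benign update
attacker = lambda p: rng.normal(0.0, 10.0, p.shape)  # oversized malicious update

params = np.zeros(8)
params, n_accepted = server_round(params, [honest, honest, attacker], detect)
# The attacker's oversized update scores near 1 and is rejected before aggregation.
```

The patented method would put the layer-classifier feature extraction plus the trained global classifier where `detect` sits; the round structure (download, local train, upload, score, aggregate) is what the claim describes.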
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011325674.0A CN112600794A (en) | 2020-11-23 | 2020-11-23 | Method for detecting GAN attack in combined deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112600794A true CN112600794A (en) | 2021-04-02 |
Family
ID=75184444
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011325674.0A Pending CN112600794A (en) | 2020-11-23 | 2020-11-23 | Method for detecting GAN attack in combined deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112600794A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110460600A (en) * | 2019-08-13 | 2019-11-15 | 南京理工大学 | Combined deep learning method capable of resisting generative adversarial network attacks
CN111447212A (en) * | 2020-03-24 | 2020-07-24 | 哈尔滨工程大学 | Method for generating and detecting an APT (advanced persistent threat) attack sequence based on a GAN (generative adversarial network)
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113360896A (en) * | 2021-06-03 | 2021-09-07 | 哈尔滨工业大学 | Free Rider attack detection method under horizontal federated learning architecture |
CN113360896B (en) * | 2021-06-03 | 2022-09-20 | 哈尔滨工业大学 | Free Rider attack detection method under horizontal federated learning architecture |
CN114287009A (en) * | 2021-12-02 | 2022-04-05 | 东莞理工学院 | Inference method, device, equipment and storage medium for collaborative training data attribute |
CN114330514A (en) * | 2021-12-14 | 2022-04-12 | 深圳大学 | Data reconstruction method and system based on depth features and gradient information |
CN114330514B (en) * | 2021-12-14 | 2024-04-05 | 深圳大学 | Data reconstruction method and system based on depth features and gradient information |
CN114493808A (en) * | 2022-01-28 | 2022-05-13 | 中山大学 | Privacy protection incentive mechanism training method based on reverse auction in federal learning |
CN115277073A (en) * | 2022-06-20 | 2022-11-01 | 北京邮电大学 | Channel transmission method, device, electronic equipment and medium |
CN115277073B (en) * | 2022-06-20 | 2024-02-06 | 北京邮电大学 | Channel transmission method, device, electronic equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112600794A (en) | Method for detecting GAN attack in combined deep learning | |
Zhang et al. | Gan enhanced membership inference: A passive local attack in federated learning | |
Yan et al. | A method of information protection for collaborative deep learning under GAN model attack | |
Chen et al. | Beyond model-level membership privacy leakage: an adversarial approach in federated learning | |
CN107566387B (en) | Network defense action decision method based on attack and defense evolution game analysis | |
CN111310814A (en) | Method and device for training business prediction model by utilizing unbalanced positive and negative samples | |
CN110290120B (en) | Time sequence evolution network security early warning method of cloud platform | |
Adhao et al. | Feature selection using principal component analysis and genetic algorithm | |
Naik et al. | Intelligent secure ecosystem based on metaheuristic and functional link neural network for edge of things | |
CN114547415A (en) | Attack simulation method based on network threat information in industrial Internet of things | |
CN114417427A (en) | Deep learning-oriented data sensitivity attribute desensitization system and method | |
CN113014566A (en) | Malicious registration detection method and device, computer readable medium and electronic device | |
CN108197561A (en) | Human face recognition model optimal control method, device, equipment and storage medium | |
CN117272306A (en) | Federal learning half-target poisoning attack method and system based on alternate minimization | |
CN112148997A (en) | Multi-modal confrontation model training method and device for disaster event detection | |
CN115049397A (en) | Method and device for identifying risk account in social network | |
CN110874638B (en) | Behavior analysis-oriented meta-knowledge federation method, device, electronic equipment and system | |
Jain et al. | Cyber-bullying detection in social media platform using machine learning | |
CN115883261A (en) | ATT and CK-based APT attack modeling method for power system | |
CN115952343A (en) | Social robot detection method based on multi-relation graph convolutional network | |
CN115758337A (en) | Back door real-time monitoring method based on timing diagram convolutional network, electronic equipment and medium | |
Chen et al. | Adversarial learning from crowds | |
CN114398685A (en) | Government affair data processing method and device, computer equipment and storage medium | |
CN112613032A (en) | Host intrusion detection method and device based on system call sequence | |
CN112131587A (en) | Intelligent contract pseudo-random number security inspection method, system, medium and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
AD01 | Patent right deemed abandoned | Effective date of abandoning: 20230516 |