CN112560059B - Vertical federal model stealing defense method based on neural pathway feature extraction - Google Patents
- Publication number
- CN112560059B CN112560059B CN202011499140.XA CN202011499140A CN112560059B CN 112560059 B CN112560059 B CN 112560059B CN 202011499140 A CN202011499140 A CN 202011499140A CN 112560059 B CN112560059 B CN 112560059B
- Authority
- CN
- China
- Prior art keywords
- model
- loss
- edge
- sample
- sample set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/602—Providing cryptographic facilities or services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Bioethics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Computer Hardware Design (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Computer Security & Cryptography (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a vertical federal model stealing defense method based on neural pathway feature extraction, which comprises the following steps: (1) dividing each sample in the data set into two parts to form a sample set D_A and a sample set D_B, where only D_B contains the sample labels; (2) training the edge model M_A of edge terminal P_A on D_A, and the edge model M_B of edge terminal P_B on D_B; P_A sends the feature data generated during training to P_B, and P_B computes its loss function from the received feature data and the activated neural pathway data; P_A and P_B mask-encrypt their respective loss functions and upload them to the server; (3) the server decrypts the uploaded loss-function masks, aggregates the losses, solves the aggregated loss function to obtain the gradient information of M_A and M_B, and returns the gradient information to P_A and P_B to update the edge model network parameters. The method improves the information security of the edge models in the vertical federated scenario.
Description
Technical Field
The invention belongs to the field of security defense, and particularly relates to a vertical federal model stealing defense method based on neural pathway feature extraction.
Background
In recent years, deep learning models have been widely applied to various real-world tasks and have achieved good results. At the same time, data islands and privacy leakage during model training and deployment have become major obstacles to the development of artificial intelligence technology. To address these problems, federated learning has emerged as an effective means of privacy protection. Federated learning is a distributed machine learning method in which each participant trains on its local data and uploads only the updated parameters to a server, which aggregates them into global parameters; through local training and parameter transmission alone, a lossless learning model is trained without sharing raw data.
According to how data are distributed, federated learning can be roughly divided into three categories: horizontal federated learning, vertical federated learning, and federated transfer learning. In horizontal federated learning, the data sets of different parties share many data features but few users; the data are partitioned along the user dimension, and records with the same features but different users are extracted for training. In vertical federated learning, the data sets share many users but few data features; the data are partitioned along the feature dimension, and records from the same users but with different features are extracted for training. Federated transfer learning applies when both the users and the features of the data sets overlap little; the data are not partitioned, and transfer learning is used instead to overcome the lack of data or labels.
Compared with traditional machine learning, federated learning improves learning efficiency, alleviates the data island problem, and protects the privacy of local data. However, federated learning still faces several security risks; the three main attack threats are poisoning attacks, adversarial attacks, and privacy leakage. Privacy leakage is the most critical of these, because federated learning involves model information exchange among multiple participants, and this exchange is easily attacked maliciously, greatly threatening the privacy security of the federated model.
In the vertical federated scenario, the main privacy protection techniques proposed to protect the privacy of deep models are secure multi-party computation, homomorphic encryption, and differential privacy. Secure multi-party computation and homomorphic encryption greatly increase computational complexity, raise time and computing costs, and place high computing-power demands on devices; differential privacy achieves protection by adding noise, which degrades the model's accuracy on its original task.
Disclosure of Invention
In order to improve the information security of the edge model in a vertical federated scenario and prevent the edge model from being stolen by a malicious attacker during information transmission, the invention provides a vertical federal model stealing defense method based on neural pathway feature extraction.
The technical scheme of the invention is as follows:
a vertical federal model stealing defense method based on neural pathway feature extraction comprises the following steps:
(1) dividing each sample in the data set into two parts to form a sample set D_A and a sample set D_B, where only D_B contains the sample labels; distributing D_A and D_B to edge terminal P_A and edge terminal P_B respectively;
(2) training the edge model M_A of edge terminal P_A on sample set D_A, and the edge model M_B of edge terminal P_B on sample set D_B; edge terminal P_A sends the feature data generated during training to P_B, and P_B computes its loss function from the received feature data and the activated neural pathway data; edge terminals P_A and P_B mask-encrypt their respective loss functions and upload them to the server;
(3) the server decrypts the loss-function masks uploaded by P_A and P_B, aggregates the losses, and solves the aggregated loss function to obtain the gradient information of M_A and M_B; the gradient information is returned to edge terminals P_A and P_B to update the edge model network parameters.
Compared with the prior art, the invention has at least the following beneficial effects:
according to the stealing and defending method for the model under the vertical federation based on the neural pathway feature extraction, the neural pathway feature is fixed during training, and the loss function is encrypted and uploaded, so that a malicious attacker is prevented from stealing the model under the vertical federation scene, and the information security of the edge model under the vertical federation scene is prevented.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a vertical federal model theft defense method based on neural pathway feature extraction according to an embodiment of the present invention;
fig. 2 is a schematic diagram of training of a vertical federal model provided in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
In a vertical federated scenario, the edge-side model is vulnerable to malicious attackers during model information exchange: after stealing the model information of an edge terminal, an attacker can reconstruct the edge model by computing gradients and loss values. To prevent such edge-model stealing, the embodiment of the invention provides a model stealing defense scheme under the vertical federation based on neural pathway feature extraction. A neural pathway feature extraction step is added to the training stage of the edge model, and the model parameter transmission in the training stage is encrypted by fixing the activated neurons. This effectively prevents a malicious party from stealing the private information of the deep model during parameter exchange between different edge terminals in the vertical federated scenario: without unlocking the fixed neural pathway, an attacker cannot restore the training process of the model even if the transmitted information of the edge model is intercepted, thereby protecting the model information and defending against model stealing attacks.
Fig. 1 is a flowchart of a vertical federal model theft defense method based on neural pathway feature extraction according to an embodiment of the present invention. As shown in fig. 1, the method for protecting from stealing of a model under a vertical federation based on neural pathway feature extraction provided by the embodiment includes the following steps:
step 1, data set division and alignment.
In an embodiment, the MNIST, CIFAR-10, and ImageNet data sets are employed. The MNIST training set contains ten classes with 6000 samples each, and its test set contains ten classes with 1000 samples each; the CIFAR-10 training set contains ten classes with 5000 samples each, and its test set ten classes with 1000 samples each; ImageNet contains 1000 classes with 1000 samples each, of which 30% of the images of each class are randomly extracted as the test set and the rest are used as the training set.
In the present invention, two edge terminals P_A and P_B are used under the vertical federation. In the vertical federated scenario, the data of P_A and P_B have different data features, so the preprocessed data set needs to be split by feature. Each sample image in the MNIST, CIFAR-10, and ImageNet data sets is evenly divided into two parts, which serve as sample set D_A and sample set D_B respectively, where only D_B contains the class label of the sample image.
In the embodiments, after the samples are divided to obtain sample set D_A and sample set D_B, the partial samples derived from the same original sample must be aligned across D_A and D_B, i.e. it is guaranteed that the partial samples input to edge model M_A and edge model M_B at the same time come from the same original sample.
Because the sample dimensions of edge terminals P_A and P_B differ in the vertical federated scenario, while the two halves of the same sample image in D_A and D_B must stay matched, an encryption-based user ID alignment technique is adopted to align the raw data. During training, the partial image data used by the two terminals at each step then always come from the same sample image, and the users of neither edge terminal are exposed during data entity alignment.
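Step 1 can be sketched as follows (a minimal illustration rather than the patent's exact protocol: the hash intersection merely stands in for the encryption-based user ID alignment, and all function and variable names below are hypothetical):

```python
import hashlib

import numpy as np


def split_sample(image):
    """Divide one sample image evenly into two parts, one per edge terminal."""
    mid = image.shape[1] // 2
    return image[:, :mid], image[:, mid:]


def align_ids(ids_a, ids_b):
    """Simplified ID alignment: each side shares only a hash of its sample
    IDs; intersecting the hashes fixes a common sample order without
    exposing the non-shared users of either terminal."""
    digest = lambda s: hashlib.sha256(str(s).encode()).hexdigest()
    table_a = {digest(i): i for i in ids_a}
    table_b = {digest(i): i for i in ids_b}
    shared = sorted(set(table_a) & set(table_b))  # canonical order from hashes
    return [table_a[h] for h in shared]


img = np.arange(24).reshape(4, 6)
left, right = split_sample(img)                    # the D_A half and the D_B half
common = align_ids([101, 102, 103], [102, 103, 104])
```

Here `split_sample` yields the two halves of one image and `align_ids` returns only the sample IDs present at both terminals, in an order both sides can reproduce.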
Step 2, the edge terminals train their respective edge models on their respective sample sets, mask-encrypt their respective loss functions, and upload them to the server.
In an embodiment, the edge model M_A of edge terminal P_A is trained on sample set D_A, and the edge model M_B of edge terminal P_B is trained on sample set D_B. Edge terminal P_A sends the feature data generated during training to P_B, and P_B computes its loss function from the received feature data and the activated neural pathway data; edge terminals P_A and P_B then mask-encrypt their respective loss functions and upload them to the server.
For the different data sets, the two edge ends are trained with the same model structure; for the ImageNet data set, a model pre-trained on ImageNet is used. Unified hyper-parameters are set: stochastic gradient descent (SGD) with the Adam optimizer, learning rate η, and regularization parameter λ. The data set is denoted D = {(x_i^A, x_i^B, y_i)}, where i indexes a sample, y_i is the original label of the corresponding sample, and X^A and X^B are the feature spaces of the two data parts; the model parameters associated with the two feature spaces are denoted Θ_A and Θ_B. The model training target is expressed as:

min over Θ_A, Θ_B of (1/N) Σ_i loss(Θ_A, Θ_B; x_i^A, x_i^B, y_i)
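The joint objective over Θ_A and Θ_B can be illustrated with a toy vertical setup (an assumption-laden sketch: linear edge models and a logistic loss stand in for the patent's deep networks, and every name below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two edge models M_A and M_B: linear maps over the
# two feature halves x_i^A and x_i^B of each aligned sample.
N, d_a, d_b = 8, 3, 4
x_a = rng.normal(size=(N, d_a))                # P_A's feature half (no labels)
x_b = rng.normal(size=(N, d_b))                # P_B's feature half
y = rng.integers(0, 2, size=N).astype(float)   # labels live only at P_B
theta_a = rng.normal(size=d_a)                 # parameters Θ_A held by P_A
theta_b = rng.normal(size=d_b)                 # parameters Θ_B held by P_B

# P_A computes its feature data z_a and sends it to P_B; P_B combines the
# two parts and evaluates the supervised loss locally.
z_a = x_a @ theta_a
z_b = x_b @ theta_b
p = 1.0 / (1.0 + np.exp(-(z_a + z_b)))         # joint prediction
loss_b = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
```

Only the intermediate feature data `z_a` crosses from P_A to P_B; the raw features `x_a` and the parameters `theta_a` never leave P_A, which is the point of the vertical arrangement.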
in particular, according to the sample set DAFor edge model MAIn training, the edge model MALoss function Loss ofAComprises the following steps:
wherein, thetaARepresenting an edge model MAThe model parameters of (a) are determined,represents the ith sample belonging to the sample set A, | · | | non-calculation2Representing the square of the norm of L1.
When edge model M_B is trained on sample set D_B, the total loss function Loss_sum of M_B is:

loss_sum = loss_B + λ*loss_topk + loss_AB

where loss_B denotes the loss of edge model M_B, loss_topk denotes the neural pathway loss, loss_AB denotes the common loss, and λ denotes the adaptive adjustment coefficient serving as a partial factor of the neural pathway encryption; Θ_B denotes the model parameters of edge model M_B, x_i^B denotes the i-th sample belonging to sample set B, y_i denotes the label corresponding to x_i^B, ||·||^2 denotes the squared norm, i denotes the sample index, N is the number of samples, NUPath_l(T, n) denotes the activation values of the top-k activated neurons in the l-th layer of the edge model, L denotes the total number of layers of the edge model, T is the set of samples input each time, and n denotes the number of neurons in each layer.
In the embodiment, a neural pathway is defined as a connected path that starts from any neuron in the input layer of the neural network, ends at any neuron in the output layer, follows the direction of the data's information flow, and passes through a number of neurons in the hidden layers. A neural pathway represents the connection relationships between neurons; when an input sample activates specific neurons of the model, the path formed by the neurons in the activated state is called an activated neural pathway.
To fix the neural pathway during training of the edge model, after each round of training, samples are randomly selected from the test set of the data set chosen in step 1 and input into the model to obtain its maximally activated neural pathway at that moment. Let N = {n_1, n_2, ... n_n} be the set of neurons of the deep learning model; let T = {x_1, x_2, ... x_n} be a set of test inputs; let φ(·,·) be the activation function, φ(x_t, n_i) representing the activation value of neuron n_i in layer l for a given input sample x_t; and let max_k(·) denote extracting the activation values of the k most strongly activated neurons in each layer. The maximally activated neural pathway of layer l is defined as:

NUPath_l(T, n) = max_k({φ(x_t, n_i) | x_t ∈ T, n_i in layer l})
during training, a maximum activation neural channel composed of activation values of a plurality of maximum activation neurons is fixed, namely the activation values of the neurons are unchanged, and the activation values of k neurons in each neural layer are accumulated to form a path loss function.
Step 3, the server decrypts the loss-function masks uploaded by edge terminals P_A and P_B, solves the aggregated loss function to obtain gradient information, and returns the gradient information to P_A and P_B to update the edge model network parameters.
In this embodiment, after the server decrypts the loss-function masks uploaded by edge terminals P_A and P_B, it aggregates the losses and solves the aggregated loss function to obtain the gradient information of M_A and M_B, which is returned to P_A and P_B. Specifically, the server uses stochastic gradient descent to solve the gradient information of the aggregated loss function. The loss function Loss aggregated at the server is:

Loss = loss_B + λ*loss_topk + loss_AB + loss_A
edge terminal PAAnd PBAfter gradient information returned by the server is received, updating M of each edge model according to the gradient informationAAnd MBBased on the updated new network parameters, the training is resumed.
Against model stealing attacks in the vertical federated scenario, the method provided by this embodiment fixes and encrypts the neural pathway features during training of the edge models, preventing a malicious attacker from stealing the model through the gradient and loss information transmitted by the edge models. By encrypting and protecting the model information at the level of feature extraction, the privacy and security of the model are protected while the model training efficiency is improved.
The above-mentioned embodiments are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only the most preferred embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions, equivalents, etc. made within the scope of the principles of the present invention should be included in the scope of the present invention.
Claims (8)
1. A vertical federal model stealing defense method based on neural pathway feature extraction is characterized by comprising the following steps:
(1) dividing each sample in the data set into two parts to form a sample set D_A and a sample set D_B, where only D_B contains the sample labels; distributing D_A and D_B to edge terminal P_A and edge terminal P_B;
(2) training the edge model M_A of edge terminal P_A on sample set D_A, and the edge model M_B of edge terminal P_B on sample set D_B; edge terminal P_A sends the feature data generated during training to P_B, and P_B computes its loss function from the received feature data and the activated neural pathway data; edge terminals P_A and P_B mask-encrypt their respective loss functions and upload them to the server;
(3) the server decrypts the loss-function masks uploaded by P_A and P_B, aggregates the losses, and solves the aggregated loss function to obtain the gradient information of M_A and M_B; the gradient information is returned to edge terminals P_A and P_B to update the edge model network parameters.
2. The vertical federal model stealing defense method based on neural pathway feature extraction as claimed in claim 1, wherein when edge model M_A is trained on sample set D_A, the loss function Loss_A of M_A is:

[formula not reproduced in the source]
3. The vertical federal model stealing defense method based on neural pathway feature extraction as claimed in claim 1, wherein when edge model M_B is trained on sample set D_B, the total loss function Loss_sum of M_B is:

loss_sum = loss_B + λ*loss_topk + loss_AB

where loss_B denotes the loss of edge model M_B, loss_topk denotes the neural pathway loss, loss_AB denotes the common loss, and λ denotes the adaptive adjustment coefficient serving as a partial factor of the neural pathway encryption; Θ_B denotes the model parameters of edge model M_B, x_i^B denotes the i-th sample belonging to sample set B, y_i denotes the label corresponding to x_i^B, ||·||^2 denotes the squared norm, i denotes the sample index, N is the number of samples, NUPath_l(T, n) denotes the activation values of the top-k activated neurons in the l-th layer of the edge model, L denotes the total number of layers of the edge model, T is the set of samples input each time, and n denotes the number of neurons in each layer.
4. The vertical federal model stealing defense method based on neural pathway feature extraction as claimed in claim 3, wherein NUPath_l(T, n) is calculated as:

NUPath_l(T, n) = max_k({φ(x_t, n_i) | x_t ∈ T, n_i in layer l})

where φ(·,·) is the activation function, φ(x_t, n_i) representing the activation value of neuron n_i in layer l for a given input sample x_t, and max_k(·) denotes extracting the activation values of the k most strongly activated neurons in each layer.
6. The vertical federal model stealing defense method based on neural pathway feature extraction as claimed in claim 1, wherein after edge terminals P_A and P_B receive the gradient information returned by the server, they update the network parameters of their respective edge models M_A and M_B according to the gradient information, and training resumes with the updated parameters.
7. The vertical federal model stealing defense method based on neural pathway feature extraction as claimed in claim 1, wherein after the samples are divided to obtain sample set D_A and sample set D_B, the partial samples derived from the same sample in D_A and D_B are aligned, i.e. it is guaranteed that the partial samples input to edge model M_A and edge model M_B at the same time come from the same sample.
8. The vertical federal model stealing defense method based on neural pathway feature extraction as claimed in claim 1, wherein the server uses stochastic gradient descent to solve the gradient information of the aggregated loss function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011499140.XA CN112560059B (en) | 2020-12-17 | 2020-12-17 | Vertical federal model stealing defense method based on neural pathway feature extraction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112560059A CN112560059A (en) | 2021-03-26 |
CN112560059B true CN112560059B (en) | 2022-04-29 |
Family
ID=75063447
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011499140.XA Active CN112560059B (en) | 2020-12-17 | 2020-12-17 | Vertical federal model stealing defense method based on neural pathway feature extraction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112560059B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113297575B (en) * | 2021-06-11 | 2022-05-17 | Zhejiang University of Technology | Multi-channel graph vertical federal model defense method based on self-encoder
CN113362216A (en) * | 2021-07-06 | 2021-09-07 | Zhejiang University of Technology | Deep learning model encryption method and device based on backdoor watermark
CN113792890B (en) * | 2021-09-29 | 2024-05-03 | Information & Telecommunication Branch of State Grid Zhejiang Electric Power Co., Ltd. | Model training method based on federal learning and related equipment
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1806337A (en) * | 2004-01-10 | 2006-07-19 | HVVi Semiconductors, Inc. | Power semiconductor device and method therefor
US9614670B1 (en) * | 2015-02-05 | 2017-04-04 | Ionic Security Inc. | Systems and methods for encryption and provision of information security using platform services |
CN109525384A (en) * | 2018-11-16 | 2019-03-26 | Chengdu University of Information Technology | The DPA attack method and system, terminal being fitted using neural network
CN110782042A (en) * | 2019-10-29 | 2020-02-11 | Shenzhen Qianhai WeBank Co., Ltd. | Method, device, equipment and medium for combining horizontal federation and vertical federation
CN111401552A (en) * | 2020-03-11 | 2020-07-10 | Zhejiang University | Federal learning method and system based on batch size adjustment and gradient compression rate adjustment
CN111598143A (en) * | 2020-04-27 | 2020-08-28 | Zhejiang University of Technology | Credit evaluation-based defense method for federal learning poisoning attack
CN111625820A (en) * | 2020-05-29 | 2020-09-04 | East China Normal University | Federal defense method based on AIoT-oriented security
CN111723946A (en) * | 2020-06-19 | 2020-09-29 | Shenzhen Qianhai WeBank Co., Ltd. | Federal learning method and device applied to block chain
CN111783853A (en) * | 2020-06-17 | 2020-10-16 | Beihang University | Interpretability-based method for detecting and recovering neural network confrontation sample
CN111860832A (en) * | 2020-07-01 | 2020-10-30 | Guangzhou University | Method for enhancing neural network defense capacity based on federal learning
CN111931242A (en) * | 2020-09-30 | 2020-11-13 | Electric Power Research Institute of State Grid Zhejiang Electric Power Co., Ltd. | Data sharing method, computer equipment applying same and readable storage medium
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101014958A (en) * | 2004-07-09 | 2007-08-08 | Matsushita Electric Industrial Co., Ltd. | System and method for managing user authentication and service authorization to achieve single-sign-on to access multiple network interfaces
US20200202243A1 (en) * | 2019-03-05 | 2020-06-25 | Allegro Artificial Intelligence Ltd | Balanced federated learning |
Non-Patent Citations (3)
Title |
---|
Real-Time Systems Implications in the Blockchain-Based Vertical Integration; C. T. B. Garrocho et al.; Computer; 2020-09-07; full text *
A data joint-utilization system scheme supporting privacy and rights protection; Li Zheng; Information & Computer (Theoretical Edition); 2020-07-25 (No. 14); full text *
A survey of poisoning attacks and defenses on deep learning models; Chen Jinyin et al.; Journal of Cyber Security; 2020-07-15 (No. 04); full text *
Also Published As
Publication number | Publication date |
---|---|
CN112560059A (en) | 2021-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112560059B (en) | Vertical federal model stealing defense method based on neural pathway feature extraction | |
Blanco-Justicia et al. | Achieving security and privacy in federated learning systems: Survey, research challenges and future directions | |
CN111860832A (en) | Method for enhancing neural network defense capacity based on federal learning | |
Hu et al. | Federated learning: a distributed shared machine learning method | |
CN112668044B (en) | Privacy protection method and device for federal learning | |
CN112185395B (en) | Federal voiceprint recognition method based on differential privacy | |
CN115549888A (en) | Block chain and homomorphic encryption-based federated learning privacy protection method | |
CN111625820A (en) | Federal defense method based on AIoT-oriented security | |
CN114363043B (en) | Asynchronous federal learning method based on verifiable aggregation and differential privacy in peer-to-peer network | |
CN112738035B (en) | Block chain technology-based vertical federal model stealing defense method | |
CN114330750B (en) | Method for detecting federated learning poisoning attack | |
CN113298268A (en) | Vertical federal learning method and device based on anti-noise injection | |
CN115310625A (en) | Longitudinal federated learning reasoning attack defense method | |
CN113537400A (en) | Branch neural network-based edge computing node allocation and exit method | |
Shen et al. | Privacy-preserving federated learning against label-flipping attacks on non-iid data | |
Yang et al. | A general steganographic framework for neural network models | |
Zhang et al. | Visual object detection for privacy-preserving federated learning | |
Nguyen et al. | Blockchain-based secure client selection in federated learning | |
CN115310120A (en) | Robustness federated learning aggregation method based on double trapdoors homomorphic encryption | |
Masuda et al. | Model fragmentation, shuffle and aggregation to mitigate model inversion in federated learning | |
CN111581663B (en) | Federal deep learning method for protecting privacy and facing irregular users | |
CN114492828A (en) | Block chain technology-based vertical federal learning malicious node detection and reinforcement method and application | |
Chen et al. | Edge-based protection against malicious poisoning for distributed federated learning | |
Ma et al. | MUD-PQFed: Towards Malicious User Detection in Privacy-Preserving Quantized Federated Learning | |
Degang et al. | A covert communication method based on gradient model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |