CN111860832A - Method for enhancing neural network defense capacity based on federal learning - Google Patents


Info

Publication number
CN111860832A
CN111860832A (application CN202010618973.7A)
Authority
CN
China
Prior art keywords
model
neural network
data
federal learning
federal
Prior art date
Legal status
Pending
Application number
CN202010618973.7A
Other languages
Chinese (zh)
Inventor
顾钊铨
李鉴明
仇晶
王乐
唐可可
韩伟红
贾焰
方滨兴
Current Assignee
National University of Defense Technology
Guangzhou University
Original Assignee
Guangzhou University
Priority date
Filing date
Publication date
Application filed by Guangzhou University filed Critical Guangzhou University
Priority to CN202010618973.7A priority Critical patent/CN111860832A/en
Publication of CN111860832A publication Critical patent/CN111860832A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/04Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload


Abstract

The invention provides a method for enhancing the defense capability of a neural network based on federated learning, which comprises the following steps. S1, using federated learning so that data stays local and data privacy is not leaked; the parties cooperate to perform distributed model training, intermediate results are encrypted to protect data security, and the parties' models are aggregated and fused to obtain a federated model. S2, constructing adversarial examples, and using an algorithm to find adversarial examples quickly. The method combines federated learning with the training process of the neural network model, resolves the dilemma that data sets cannot circulate because of privacy-protection concerns and legal restrictions, saves the trouble of data collection, makes the training set of the neural network model richer and more comprehensive, and overcomes the weakness that a neural network model is easily attacked by adversarial examples because of an incomplete training set.

Description

Method for enhancing neural network defense capacity based on federal learning
Technical Field
The invention relates to the technical field of artificial intelligence security, and in particular to a method for enhancing the defense capability of a neural network based on federated learning.
Background
Adversarial examples refer to inputs formed by adding subtle perturbations, difficult for a human to perceive, to legitimate samples, causing a machine learning model (e.g., a neural network) to give a wrong output with high confidence. The existence of adversarial examples shows that existing machine learning models still have security problems, which limits the application and development of artificial intelligence (AI) in fields with higher safety requirements, such as autonomous driving. The paper "Intriguing properties of neural networks" (Christian Szegedy, Wojciech Zaremba, et al. In ICLR, 2014) introduced the concept of adversarial examples and demonstrated that they generalize across models trained with different architectures and different training sets. Since then, adversarial attack and defense in the image field have become a hot research topic.
To enhance the defense capability of a neural network, one idea is to improve the neural network model architecture so that it has certain robust characteristics; for example, one line of work proposes a robust network that inserts a denoising autoencoder at key positions of the model, thereby reducing the impact of adversarial examples on the neural network model and enhancing its defense capability. Another idea is to collect as large a data set as possible for training, to enhance the safety and robustness of the machine learning model. The most classical way is to retrain on generated adversarial examples, i.e. to augment the original data set with adversarial examples; this method, called adversarial training, can effectively defend against adversarial-example attacks and enhance the defense capability of the neural network model.
The drawback of the first method is that neural network model architectures are complex and difficult to modify; moreover, the operating mechanism of a neural network model lacks interpretability, so even after modification the model may still be attacked because of this lack of explainability. The second method is comparatively easy to implement: the defense capability of the neural network model is enhanced by learning from an expanded data set.
However, the second method faces the following difficulty. To expand the data set, data could be acquired from more users and enterprises for training, but as public awareness of privacy protection grows, data collection has become difficult. The Cybersecurity Law of the People's Republic of China requires that enterprises must not leak or tamper with the personal information they collect from users, and must fulfill data protection obligations when conducting data transactions with third parties; the E-Commerce Law of the People's Republic of China places higher requirements on personal information protection in the e-commerce environment; and the European Union's General Data Protection Regulation imposes the most stringent requirements to date on the collection and use of user information. The traditional way of collecting and using data is therefore no longer applicable, and big-data-driven artificial intelligence faces a data crisis: simply expanding the data set and retraining the neural network model runs into potential security hazards such as data sharing and privacy disclosure.
Disclosure of Invention
The invention aims to provide a method for enhancing the defense capability of a neural network based on federal learning, which solves the problems in the prior art.
To achieve this purpose, the invention is realized by the following technical scheme. A method for enhancing the defense capability of a neural network based on federated learning comprises the following steps:
S1, using federated learning so that data stays local and data privacy is not leaked; the parties cooperate to perform distributed model training, intermediate results are encrypted to protect data security, and the parties' models are aggregated and fused to obtain a federated model.
S2, constructing adversarial examples, and using an algorithm to find adversarial examples quickly.
Further, the operation in S1 comprises the following steps:
S101, selecting a trusted server as a trusted third party; the terminals participating in model training download the shared initial model from the server.
S102, each participant trains the downloaded shared model using locally stored picture data.
S103, each participant encrypts the intermediate results of its model and uploads the encrypted intermediate results to the server through a security protocol.
S104, the server fuses the intermediate results of all participants through a federated model fusion algorithm to obtain an optimized shared model.
Further, the operation in S2 comprises the following steps:
S201, collecting and sorting data to form a data set.
S202, storing images in the computer in binary, pixel by pixel.
S203, solving an optimization problem to obtain an adversarial example.
S204, adopting the L-BFGS (limited-memory quasi-Newton) method or the fast gradient sign method, so that the adversarial example for a picture can be found quickly.
Further, in S102, each participant generates corresponding adversarial examples from its local pictures and feeds them into the training model, so as to improve the model's defense against adversarial examples.
Further, in S103, the encryption algorithm for the intermediate results includes, but is not limited to, a homomorphic encryption algorithm.
Further, in S104, the algorithm that makes the shared model better than the initial model includes, but is not limited to, the FedAvg algorithm.
Further, steps S102-S104 are repeated until the result converges or the target condition is achieved.
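The S101-S104 loop can be sketched as follows. This is only an illustrative sketch: the logistic-regression model, the synthetic participant data, and the helper names `local_update` and `fed_avg` are stand-ins invented here, not the patent's implementation (which targets a shared neural network such as InceptionV3), and the S103 encryption step is omitted for clarity.

```python
import numpy as np

def local_update(w, X, y, lr=0.1):
    """S102: one pass of gradient descent on a participant's local data
    (a logistic-regression stand-in for the shared neural network)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))     # predictions
    grad = X.T @ (p - y) / len(y)          # mean log-loss gradient
    return w - lr * grad

def fed_avg(updates, sizes):
    """S104: fuse the participants' results, weighted by local sample count n_k."""
    n = sum(sizes)
    return sum((nk / n) * wk for wk, nk in zip(updates, sizes))

rng = np.random.default_rng(0)
w = np.zeros(3)                            # S101: shared initial model
participants = [(rng.normal(size=(40, 3)), rng.integers(0, 2, 40))
                for _ in range(3)]         # each party's private (X, y)

for _ in range(20):                        # repeat S102-S104 until convergence
    updates = [local_update(w, X, y) for X, y in participants]
    w = fed_avg(updates, [len(y) for _, y in participants])
```

Each participant's raw data never leaves its own tuple; only the model updates are shared with the aggregator.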
The invention provides a method for enhancing the defense capability of a neural network based on federated learning, with the following beneficial effects:
The method combines federated learning with the training process of the neural network model. It resolves the dilemma that data sets cannot circulate because of privacy-protection concerns and legal restrictions, saves the trouble of data collection, and at the same time makes the training set of the neural network model richer and more comprehensive. It overcomes the weakness that a neural network model is easily attacked by adversarial examples because its training set is incomplete, improves the model's ability to defend against adversarial examples, improves the learning capability of the neural network model, reduces the effectiveness of adversarial-example attacks, and enhances the defense capability and security of the neural network model.
Drawings
FIG. 1 is a diagram illustrating the prior-art dilemma in which picture data cannot circulate;
FIG. 2 is a general flow chart of the present invention;
FIG. 3 is a flow chart of the method of enhancing neural network defense capability of the present invention;
FIG. 4 is a flow chart of adversarial example generation according to the present invention;
FIG. 5 is a schematic diagram of the method of enhancing neural network defense capability according to the present invention.
Detailed Description
Example 1: referring to FIGS. 1-5, the invention provides a method for enhancing the defense capability of a neural network based on federated learning, which comprises the following steps.
Step one: by means of federated learning, the trouble of data collection is avoided, and data privacy is not leaked because the data stays local; all parties cooperate to perform distributed model training, intermediate results are encrypted to protect data security, and finally a better federated model is obtained by aggregating and fusing the parties' models. This increases the richness of the data sets participating in training and reduces the effectiveness of adversarial examples.
The specific implementation steps are as follows:
1) Selecting a trusted server as a trusted third party; the terminals participating in model training (participants such as enterprises, colleges, research institutes, individual users, and the like) download a shared initial model from the server, for example the InceptionV3 neural network model for image classification;
2) Each participant trains the downloaded shared model using locally stored picture data; to improve the defense against adversarial examples, corresponding adversarial examples can be generated from local pictures and fed into the training model;
3) Each participant encrypts the intermediate results of its model (such as the weight matrices), for example with homomorphic encryption, and uploads the encrypted intermediate results to the server through a security protocol;
4) The server fuses the intermediate results of the participants with a federated model fusion algorithm (such as the Federated Averaging algorithm FedAvg; Keith Bonawitz, Vladimir Ivanov, et al., 2016) to obtain an optimized shared model. Let f_i(w) = l(x_i, y_i; w) denote the prediction loss of model parameters w on the data instance (x_i, y_i), let K denote the total number of participants, let P_k denote the set of indices of the data points held by participant k, and let n_k = |P_k| denote the total number of data samples of participant k, with n = n_1 + ... + n_K. The federated objective can then be written as

f(w) = Σ_{k=1}^{K} (n_k / n) · F_k(w),

where

F_k(w) = (1 / n_k) · Σ_{i ∈ P_k} f_i(w).

For the t-th iteration, participant k computes the local gradient update

g_k = ∇F_k(w_t),

and the global update is

w_{t+1} = w_t − η · Σ_{k=1}^{K} (n_k / n) · g_k,

where η denotes the learning rate. Each participant locally encrypts its gradient update with homomorphic encryption, obtaining [[g_k]]. The server computes

Σ_{k=1}^{K} (n_k / n) · [[g_k]]

to obtain the (encrypted) gradient update of the t-th round of the federated model. With additively homomorphic encryption, the server never needs to decrypt any individual [[g_k]], because the useful property of homomorphic encryption is that computing on ciphertexts gives the same result as encrypting the result of computing on plaintexts, i.e.

Σ_{k} (n_k / n) · [[g_k]] = [[ Σ_{k} (n_k / n) · g_k ]].

This prevents the server from obtaining the plaintext intermediate results and thereby reconstructing the participants' training data. When the round's aggregation is complete, the server sends the updated model parameters w to each participant so that subsequent training can continue.
Steps 2)-4) are repeated until the result converges or the target condition is achieved.
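The additive homomorphic property used in step 4) can be illustrated with a toy Paillier cryptosystem. This is an illustration only: the primes are tiny so the numbers stay readable, and a real deployment would use a vetted cryptography library with large keys; the integer "gradients" are stand-ins for suitably scaled model updates.

```python
import math
import random

# Toy Paillier cryptosystem (tiny primes -- illustration only, NOT secure).
p, q = 61, 53
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)       # Carmichael function of n
mu = pow(lam, -1, n)               # valid because we take g = n + 1

def encrypt(m):
    """Enc(m) = (1+n)^m * r^n mod n^2 for a random r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Dec(c) = L(c^lam mod n^2) * mu mod n, with L(u) = (u - 1) // n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts, so the
# server can aggregate encrypted updates without decrypting any single one.
g1, g2, g3 = 12, 7, 30             # stand-ins for (scaled) gradient updates
aggregate = encrypt(g1) * encrypt(g2) * encrypt(g3) % n2
assert decrypt(aggregate) == g1 + g2 + g3
```

Only the final aggregate is decrypted; no individual participant's update is ever seen in plaintext by the server, which is exactly the property the text relies on.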
Specifically, the defense capability of the InceptionV3 model against adversarial examples is improved as follows.
A server trusted by all participants is selected, or in each round the participants randomly select one party as the trusted third party. According to business requirements, InceptionV3 is deployed on the server in advance, and an encryption method for the intermediate results and a secure transmission protocol are negotiated, so that data privacy cannot be leaked from the intermediate results. Each participant downloads the initial model InceptionV3 from the server at the beginning of training (a participant that already has InceptionV3 locally need not download it). After each model is prepared, it is trained with local picture data; intermediate results, such as the locally generated weight parameters, are encrypted with the negotiated encryption method and transmitted to the server over the negotiated protocol. The server then uses an algorithm such as FedAvg to fuse the participants' partial models into an InceptionV3 model that is better than the initial one.
Step two: constructing adversarial examples, and using an algorithm to find adversarial examples quickly.
The method comprises the following specific implementation steps:
1) Collecting and sorting data to form a data set, for example the handwritten-digit data set MNIST from NIST (the National Institute of Standards and Technology) in the field of machine learning on images. The MNIST training set collects handwriting from 250 different people, 50% of whom were high-school students and 50% of whom were staff of the Census Bureau; the test set is handwritten digit data in the same proportions. Also in the image field, CIFAR-10 is a small data set for recognizing common objects, organized by Alex Krizhevsky and Ilya Sutskever. Forming these data sets involves consolidating the raw data.
2) An image is stored in the computer in binary, pixel by pixel. A color image is the superposition, at each pixel, of several color channels, for example the three RGB (red, green, blue) channels; each channel takes values in the range [0, 255], and the displayed result is the superposition of the three channels' values. Even a simple image thus contains a large amount of binary information. Taking the handwritten-digit data set MNIST (Yann LeCun, http://yann.lecun.com/exdb/mnist/) as an example, MNIST contains 60000 training samples and 10000 test samples; each picture is a handwritten digit of 28 x 28 pixels. Each picture is flattened into a 1 x 784 vector as the input of the model. The MNIST pictures correspond to the ten digits 0 to 9, and the data on a picture can be converted into a corresponding label through a 10-dimensional one-hot vector: the picture label of the digit 0 is [1,0,0,0,0,0,0,0,0,0], i.e. the bit corresponding to the digit on the picture is 1 and all other dimensions are 0. Denoting a picture by x and its label by y, each picture is represented as [x, y], and the training labels form a matrix of shape [60000, 10].
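The one-hot label encoding and the 1 x 784 flattening described above can be sketched with NumPy (the helper name `one_hot` is ours, chosen for illustration):

```python
import numpy as np

def one_hot(digits, num_classes=10):
    """Map digit labels to 10-dimensional one-hot row vectors, as in the text."""
    out = np.zeros((len(digits), num_classes), dtype=np.uint8)
    out[np.arange(len(digits)), digits] = 1
    return out

labels = one_hot([0, 3, 9])
# The digit 0 encodes as [1,0,0,0,0,0,0,0,0,0]:
assert labels[0].tolist() == [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]

# A 28 x 28 MNIST image flattens to a 1 x 784 input vector:
image = np.zeros((28, 28), dtype=np.uint8)
x = image.reshape(1, 784)
```

Stacking 60000 such label vectors yields the [60000, 10] label matrix the text refers to.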
3) Solving the optimization problem

Minimize c·|r| + loss_f(x + r, l)

yields an adversarial example, where f denotes a classifier that maps a vector of image pixel values to a set of discrete labels, loss_f is the loss function associated with the classifier, and x + r is the image closest to x that f classifies as l, i.e. the adversarial example sought in the adversarial attack. The parameter c controls the magnitude of |r|. Solving the problem above finds the adversarial example corresponding to a given picture, for example adding a perturbation r to an MNIST picture x of the digit 0 so that the model recognizes the 0 as a 1.
4) Adversarial examples occur with low probability, so they rarely appear in training and test sets; however, adversarial examples are densely distributed, and the adversarial example for a picture can be found quickly by algorithms such as L-BFGS (the limited-memory quasi-Newton method; Christian Szegedy, Wojciech Zaremba, et al.) or the fast gradient sign method.
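The fast gradient sign method mentioned in step S204 can be sketched for any differentiable model; the logistic-regression classifier below is a stand-in chosen so the gradient has a closed form (the patent targets neural networks such as InceptionV3), and the function name `fgsm` and parameter values are illustrative assumptions.

```python
import numpy as np

def fgsm(x, y, w, b, eps=0.1):
    """Fast gradient sign method: x_adv = clip(x + eps * sign(d loss / d x)).

    For a sigmoid classifier p = sigmoid(x.w + b) with log-loss, the input
    gradient is (p - y) * w, so no autodiff is needed in this sketch.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # model prediction
    grad_x = (p - y) * w                      # d(log-loss)/dx
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(1)
w, b = rng.normal(size=784), 0.0
x = rng.random(784)                           # a "pixel" vector in [0, 1]
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.25)

# The perturbation is bounded by eps in every coordinate:
assert np.max(np.abs(x_adv - x)) <= 0.25 + 1e-9
```

A single gradient evaluation produces the perturbation, which is why FGSM finds adversarial examples quickly compared with iterative solvers such as L-BFGS.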
According to the method, federated learning is combined with the training process of the neural network model. The dilemma that data sets cannot circulate because of privacy-protection concerns and legal restrictions is resolved, the trouble of data collection is saved, and the training set of the neural network model becomes richer and more comprehensive. The weakness that the neural network model is easily attacked by adversarial examples because of an incomplete training set is overcome, the model's ability to defend against adversarial examples is improved, the learning capability of the neural network model and its ability to defend against adversarial-example attacks are improved, the effectiveness of adversarial-example attacks is reduced, and the defense capability and security of the neural network model are enhanced.
The above is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make many variations and modifications without departing from the inventive concept of the present invention, all of which fall within the protection scope of the present invention.

Claims (7)

1. A method for enhancing the defense capability of a neural network based on federated learning, characterized by comprising the following steps:
S1, using federated learning so that data stays local and data privacy is not leaked, cooperating with all parties to perform distributed model training, encrypting intermediate results to protect data security, and aggregating and fusing the parties' models to obtain a federated model;
S2, constructing adversarial examples, and using an algorithm to find adversarial examples quickly.
2. The method for enhancing the defense capability of a neural network based on federated learning as claimed in claim 1, characterized in that the operation in S1 further comprises the following steps:
S101, selecting a trusted server as a trusted third party, and downloading a shared initial model from the server by the terminals participating in model training;
S102, each participant training the downloaded shared model using locally stored picture data;
S103, each participant encrypting the intermediate results of its model, and uploading the encrypted intermediate results to the server through a security protocol;
S104, the server fusing the intermediate results of all participants through a federated model fusion algorithm to obtain an optimized shared model.
3. The method for enhancing the defense capability of a neural network based on federated learning as claimed in claim 1, characterized in that the operation in S2 further comprises the following steps:
S201, collecting and sorting data to form a data set;
S202, storing images in the computer in binary, pixel by pixel;
S203, solving an optimization problem to obtain an adversarial example;
S204, adopting the L-BFGS (limited-memory quasi-Newton) method or the fast gradient sign method, so that the adversarial example for a picture can be found quickly.
4. The method for enhancing the defense capability of a neural network based on federated learning as claimed in claim 2, characterized in that in S102, corresponding adversarial examples generated from local pictures are input into the training model, so as to improve the defense against adversarial examples.
5. The method for enhancing the defense capability of a neural network based on federated learning as claimed in claim 2, characterized in that in S103, the encryption algorithm for the intermediate results includes, but is not limited to, a homomorphic encryption algorithm.
6. The method for enhancing the defense capability of a neural network based on federated learning as claimed in claim 2, characterized in that in S104, the algorithm that makes the shared model better than the initial model includes, but is not limited to, the FedAvg algorithm.
7. The method for enhancing the defense capability of a neural network based on federated learning as claimed in claim 2, further comprising repeating steps S102-S104 until the result converges or the target condition is met.
CN202010618973.7A 2020-07-01 2020-07-01 Method for enhancing neural network defense capacity based on federal learning Pending CN111860832A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010618973.7A CN111860832A (en) 2020-07-01 2020-07-01 Method for enhancing neural network defense capacity based on federal learning


Publications (1)

Publication Number Publication Date
CN111860832A true CN111860832A (en) 2020-10-30

Family

ID=72989695


Country Status (1)

Country Link
CN (1) CN111860832A (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112364908A (en) * 2020-11-05 2021-02-12 浙江大学 Decision tree-oriented longitudinal federal learning method
CN112364943A (en) * 2020-12-10 2021-02-12 广西师范大学 Federal prediction method based on federal learning
CN112507219A (en) * 2020-12-07 2021-03-16 中国人民大学 Personalized search system based on federal learning enhanced privacy protection
CN112560059A (en) * 2020-12-17 2021-03-26 浙江工业大学 Vertical federal model stealing defense method based on neural pathway feature extraction
CN112632620A (en) * 2020-12-30 2021-04-09 支付宝(杭州)信息技术有限公司 Federal learning method and system for enhancing privacy protection
CN112653752A (en) * 2020-12-18 2021-04-13 重庆大学 Block chain industrial Internet of things data sharing method based on federal learning
CN112668044A (en) * 2020-12-21 2021-04-16 中国科学院信息工程研究所 Privacy protection method and device for federal learning
CN113143286A (en) * 2021-04-30 2021-07-23 广州大学 Electrocardiosignal identification method, system, device and medium based on distributed learning
CN113204766A (en) * 2021-05-25 2021-08-03 华中科技大学 Distributed neural network deployment method, electronic device and storage medium
CN113268758A (en) * 2021-06-17 2021-08-17 上海万向区块链股份公司 Data sharing system, method, medium and device based on federal learning
CN113344221A (en) * 2021-05-10 2021-09-03 上海大学 Federal learning method and system based on neural network architecture search
CN113468521A (en) * 2021-07-01 2021-10-01 哈尔滨工程大学 Data protection method for federal learning intrusion detection based on GAN
CN113515812A (en) * 2021-07-09 2021-10-19 东软睿驰汽车技术(沈阳)有限公司 Automatic driving method, device, processing equipment and storage medium
CN113726561A (en) * 2021-08-18 2021-11-30 西安电子科技大学 Business type recognition method for training convolutional neural network by using federal learning
CN113792873A (en) * 2021-08-24 2021-12-14 浙江数秦科技有限公司 Neural network model trusteeship training system based on block chain
CN113807157A (en) * 2020-11-27 2021-12-17 京东科技控股股份有限公司 Method, device and system for training neural network model based on federal learning
CN114978654A (en) * 2022-05-12 2022-08-30 北京大学 End-to-end communication system attack defense method based on deep learning
CN117808694A (en) * 2023-12-28 2024-04-02 中国人民解放军总医院第六医学中心 Painless gastroscope image enhancement method and painless gastroscope image enhancement system under deep neural network

Citations (6)

Publication number Priority date Publication date Assignee Title
CN108446765A (en) * 2018-02-11 2018-08-24 浙江工业大学 The multi-model composite defense method of sexual assault is fought towards deep learning
CN109698822A (en) * 2018-11-28 2019-04-30 众安信息技术服务有限公司 Combination learning method and system based on publicly-owned block chain and encryption neural network
US20190227980A1 (en) * 2018-01-22 2019-07-25 Google Llc Training User-Level Differentially Private Machine-Learned Models
CN110443367A (en) * 2019-07-30 2019-11-12 电子科技大学 A kind of method of strength neural network model robust performance
CN110719158A (en) * 2019-09-11 2020-01-21 南京航空航天大学 Edge calculation privacy protection system and method based on joint learning
US20200125739A1 (en) * 2018-10-19 2020-04-23 International Business Machines Corporation Distributed learning preserving model security


CN113468521A (en) * 2021-07-01 2021-10-01 哈尔滨工程大学 Data protection method for federal learning intrusion detection based on GAN
CN113515812A (en) * 2021-07-09 2021-10-19 东软睿驰汽车技术(沈阳)有限公司 Automatic driving method, device, processing equipment and storage medium
CN113726561A (en) * 2021-08-18 2021-11-30 西安电子科技大学 Business type recognition method for training convolutional neural network by using federal learning
CN113792873A (en) * 2021-08-24 2021-12-14 浙江数秦科技有限公司 Neural network model hosted training system based on block chain
CN114978654A (en) * 2022-05-12 2022-08-30 北京大学 End-to-end communication system attack defense method based on deep learning
CN114978654B (en) * 2022-05-12 2023-03-10 北京大学 End-to-end communication system attack defense method based on deep learning
CN117808694A (en) * 2023-12-28 2024-04-02 中国人民解放军总医院第六医学中心 Painless gastroscope image enhancement method and system based on a deep neural network
CN117808694B (en) * 2023-12-28 2024-05-24 中国人民解放军总医院第六医学中心 Painless gastroscope image enhancement method and system based on a deep neural network

Similar Documents

Publication Publication Date Title
CN111860832A (en) Method for enhancing neural network defense capacity based on federal learning
Liu et al. Coverless information hiding based on generative adversarial networks
Guesmi et al. Hash key-based image encryption using crossover operator and chaos
Lerch-Hostalot et al. Unsupervised steganalysis based on artificial training sets
CN112560059B (en) Vertical federal model stealing defense method based on neural pathway feature extraction
Ambika et al. Encryption-based steganography of images by multiobjective whale optimal pixel selection
Cui et al. An adaptive LeNet-5 model for anomaly detection
CN112615974A (en) Carrier-free covert communication method and system based on depth discriminator
CN115758422A (en) File encryption method and system
Pan et al. A novel image encryption algorithm based on hybrid chaotic mapping and intelligent learning in financial security system
CN109413068B (en) Wireless signal encryption method based on dual GAN
CN114282692A (en) Model training method and system for longitudinal federal learning
Wang et al. Data hiding during image processing using capsule networks
Yu et al. Flexible and robust real-time intrusion detection systems to network dynamics
Shiomoto Network intrusion detection system based on an adversarial auto-encoder with few labeled training samples
Kanzariya et al. Coverless information hiding: a review
Liu et al. Spatial‐Temporal Feature with Dual‐Attention Mechanism for Encrypted Malicious Traffic Detection
Delei et al. An image encryption algorithm based on knight's tour and slip encryption-filter
Bai et al. Reconstruction of chaotic grayscale image encryption based on deep learning
Stock et al. Lessons Learned: Defending Against Property Inference Attacks.
Chiu et al. An XOR-based progressive visual cryptography with meaningful shares
Jin et al. Efficient blind face recognition in the cloud
CN115001654A (en) Chaos-based thread pool and GPU combined optimization batch image encryption method
Kich et al. Image steganography scheme using dilated convolutional network
Li et al. Unsupervised steganalysis over social networks based on multi-reference sub-image sets

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220628

Address after: No. 230, Waihuan West Road, Guangzhou University Town, Panyu, Guangzhou City, Guangdong Province, 510006

Applicant after: Guangzhou University

Applicant after: National University of Defense Technology

Address before: No. 230, Waihuan West Road, Guangzhou University Town, Panyu, Guangzhou City, Guangdong Province, 510006

Applicant before: Guangzhou University
