CN110443063B - Adaptive privacy-protecting federal deep learning method


Info

Publication number
CN110443063B
CN110443063B
Authority
CN
China
Prior art keywords
data
data attribute
model
contribution
privacy
Prior art date
Legal status
Active
Application number
CN201910563455.7A
Other languages
Chinese (zh)
Other versions
CN110443063A (en)
Inventor
李洪伟
刘小源
徐国文
刘森
龚丽
姜文博
成艺
任彦之
李双
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201910563455.7A
Publication of CN110443063A
Application granted
Publication of CN110443063B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides an adaptive privacy-preserving federated deep learning method that protects users' original data in federated deep learning from being learned by a curious server, and prevents the parameters of the learning model from leaking information about that data. Each participant first negotiates a network architecture with the cloud server; the cloud server then obtains an initialized model and broadcasts its parameters to every participant. Each participant downloads the initialized model parameters, updates its local model, and trains on its local data set, applying different privacy-protection operations to different data attributes according to their contributions to the model output; the participants then send their locally computed gradients to the cloud server. Finally, the cloud server aggregates the gradient information from all participants and updates each participant's model for subsequent training. On the premise of satisfying privacy protection, the invention greatly improves the accuracy of the learning model.

Description

Adaptive privacy-protecting federal deep learning method
Technical Field
The invention relates to artificial intelligence technology.
Background
Traditional centralized deep learning requires that user data be concentrated in one data center; the user loses control over his or her own data, which can be abused, or mined by the data user to infer more of the user's private information. Federated Deep Learning, proposed by Google, addresses the problems of privacy, location, and usage rights of user data.
Federated Deep Learning allows multiple participants to jointly learn a common model without disclosing their own data sets. Each participant trains on its own local data set to obtain a local model and shares its training gradients with the other participants; the cloud server (or a designated user) aggregates the training gradients from the participants to derive a "common" model, which also prevents each user's local model from overfitting locally.
A differential privacy mechanism is a technique commonly used in statistics to remove individual features, protecting the privacy of user data while preserving its statistical properties. The Laplace mechanism is commonly used to implement ε-differential privacy: noise drawn from a Laplace distribution is injected into a data item so that it satisfies differential privacy with privacy budget ε. The smaller the privacy budget, the larger the injected noise and the higher the level of privacy protection. In practice, the sequential and parallel composition properties of differential privacy are often combined, making its application more flexible.
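As a concrete illustration of the Laplace mechanism described above, the following minimal Python sketch perturbs a single value (the function name and parameter values are illustrative, not taken from the patent):

    import numpy as np

    def laplace_mechanism(value, sensitivity, epsilon, rng=np.random.default_rng()):
        # Release value + Lap(sensitivity / epsilon) noise, which satisfies
        # epsilon-differential privacy for a query with the given sensitivity.
        return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Smaller epsilon -> larger noise -> stronger privacy.
    print(laplace_mechanism(5.0, sensitivity=1.0, epsilon=0.1))
    print(laplace_mechanism(5.0, sensitivity=1.0, epsilon=10.0))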
At present, privacy-protection mechanisms exist based on secure multi-party computation, homomorphic encryption, differential privacy, and similar technologies. Considering the future trend of data growth, the differential privacy mechanism offers good efficiency compared with secure multi-party computation, which incurs heavy communication overhead, and homomorphic encryption, which incurs heavy computation overhead. However, the differential privacy mechanism requires a trade-off between data privacy and model accuracy.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an adaptive federated deep learning method that ensures the accuracy of the training model and remains efficient in large-scale user scenarios, while preventing the server from inferring the model parameters or the privacy of the users' training data.
The technical scheme adopted by the invention to solve this technical problem is an adaptive privacy-preserving federated deep learning method, comprising the following steps:
1) System initialization: each participant negotiates a deep learning model with the cloud server; the cloud server then selects public data of the same type as the user data as training data and trains the deep learning model to obtain its model parameters w_global. The training data comprise a number of data items with corresponding labels, each data item consisting of several data attributes;
2) The participants initialize local models: the server broadcasts the model parameters w_global of the deep learning model to the participants, who download w_global as the initial parameters of their local deep learning models;
3) The participants update the deep learning model with local user data:
3-1) compute the contribution of each data item to the model output using the layer-wise relevance propagation algorithm;
3-2) aggregate and average the contributions of the data attributes belonging to the same data attribute class to obtain the contribution of that data attribute class;
3-3) apply differential privacy protection to the contributions: inject Laplace noise into the contribution of each data attribute class so that the noisy contribution satisfies differential privacy with contribution privacy budget ε_c;
4) The participants run a mini-batch gradient descent algorithm to optimize the local model:
4-1) each participant selects a number of data items and corresponding labels as training data;
4-2) apply differential privacy protection to the data attributes: Laplace noise is injected adaptively into each data attribute according to the contribution degree of the data attribute class it belongs to; the privacy budget allocated to an attribute grows with the contribution of its class, so attributes of classes with larger contributions receive smaller noise, and the noisy data attribute satisfies differential privacy with data attribute privacy budget ε_l;
4-3) apply differential privacy protection to the labels of the training data: expand the loss function into polynomial form by a Taylor expansion, then inject Laplace noise into the polynomial coefficients so that the loss function satisfies differential privacy with label privacy budget ε_f; the sum of the contribution privacy budget ε_c, the data attribute privacy budget ε_l and the label privacy budget ε_f is a preset total privacy budget;
4-4) compute the model gradient by differentiating the loss function, and update the local model;
4-5) each participant uploads its model gradient to the cloud server;
5) Cloud server aggregation: the cloud server collects the model gradients sent by the participants and updates the global model on the cloud server.
By combining federated deep learning with differential privacy and using the layer-wise relevance propagation algorithm, a user can calculate the contribution of each data attribute to the model output. Participants can use a random privacy-protection adjustment technique to customize the privacy level of their data, and then adaptively inject Laplace noise into the data attributes according to the contributions: attributes with small contributions are perturbed with large noise, improving the privacy of the system, while attributes with large contributions receive small noise, effectively increasing the accuracy of the model.
The beneficial effect of the invention is that the accuracy of the system is maximized on the premise of guaranteeing the system's privacy-protection level.
Drawings
FIG. 1 is a schematic diagram of a system;
FIG. 2 is a schematic diagram of the layer-wise relevance propagation algorithm.
Detailed Description
1. The system model of the invention is shown in Figure 1.
2. System initialization comprises the following steps:
1) To jointly learn a more accurate model that does not overfit locally, each participant U_g negotiates a deep learning network with the cloud server in advance, such as a convolutional neural network (CNN) or a recurrent neural network (RNN);
2) The server trains on public data of the same type as the user data to obtain the initialized deep learning model parameters w_global.
3. The participant initializes the local model, comprising the following steps:
1) The server broadcasts its deep learning model parameters w_global;
2) Each participant U_g downloads the initialized model parameters w_global and updates its local learning model w_local.
4. The participant preprocesses local data, comprising the following steps:
1) Data normalization: each participant U_g holds a local data set D_g, where g is the data-set index. D_g contains n data items x_i with corresponding labels y_i, and each data item consists of u data attributes x_{i,j}: x_{i,1}, x_{i,2}, ..., x_{i,u}. Labels satisfy y_i ∈ [1, v], each data attribute in a data item x_i corresponds to one data attribute class, and i ∈ [1, n], j ∈ [1, u]. The data normalization operation bounds the range of the attribute values; this speeds up training.
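The normalization formula itself is not reproduced in this text; the following sketch assumes standard min-max scaling to [0, 1], which is one common way to bound attribute values:

    import numpy as np

    def normalize(X):
        # Min-max scale each attribute (column) into [0, 1]; the exact
        # normalization used by the patent is assumed, not quoted.
        mins, maxs = X.min(axis=0), X.max(axis=0)
        return (X - mins) / np.maximum(maxs - mins, 1e-12)

    X = np.array([[1.0, 200.0], [2.0, 400.0], [4.0, 300.0]])  # n=3 items, u=2 attributes
    print(normalize(X))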
2) Using the layer-wise relevance propagation algorithm, a participant can compute the contribution of a data attribute x_{i,j} in a single data item to the model output. The number of neurons on the input layer l_input equals the number of data attributes x_{i,j}; the layer-wise propagation is shown in Figure 2.

With the initialized local model parameters w_local and the local data set D_g, the model prediction y = f(x_i) is obtained by feed-forward computation. The contribution r_b^(l_o)(x_i) of data item x_i through a neuron a_b on the output layer l_o to the model output is the output value f(x_i) of the model (b and c are variables indexing neurons):

    r_b^(l_o)(x_i) = f(x_i)
The output value is then propagated back layer by layer. Using the linear relations between network layers, the contribution r_{b←c}^(k-1,k)(x_i) passed from neuron a_c on the k-th layer l_k to neuron a_b on the (k-1)-th layer l_{k-1} is computed as follows:

    r_{b←c}^(k-1,k)(x_i) = (o_b · w_{b,c} / (Σ_{b'} o_{b'} · w_{b',c} + μ)) · r_c^(k)(x_i)

where w_{b,c} is the weight connecting neuron a_b and neuron a_c, o_b is the output of neuron a_b, and μ is a number close to 0 (here 10^-6) that keeps the denominator away from zero.
From the above equation, the contribution r_b^(k)(x_i) of data item x_i through neuron a_b on the k-th layer to the model output is the sum of the contributions through the connected neurons a_c ∈ l_{k+1} on the (k+1)-th layer:

    r_b^(k)(x_i) = Σ_{a_c ∈ l_{k+1}} r_{b←c}^(k,k+1)(x_i)
3) The participants compute the contribution of each data attribute class to the model output: the number of neurons on the input layer l_input equals the number of data attributes x_{i,j}, and one neuron corresponds to one data attribute (one data attribute class). Since an input-layer neuron's function is to convert the received data (pictures, sounds, etc.) into numerical values, the output of neuron a_b on the input layer l_input represents the data attribute x_{i,j}, i.e. o_b = x_{i,j}, with b ∈ [1, u]. Following the layer-wise propagation algorithm, the contribution computed at the input layer for a data attribute represents the contribution of its data attribute class. The participant combines the contributions of the same data attribute class across the n data items to obtain the contribution C_j of each data attribute class j:

    C_j = (1/n) · Σ_{i=1}^{n} r_j^(l_input)(x_i)
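A minimal numpy sketch of this relevance propagation over a toy two-layer network (the network shape, ReLU activation, and all variable names are illustrative assumptions, not the patent's concrete model):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy MLP: u=4 input attributes -> 5 hidden neurons -> 1 output.
    W1, W2 = rng.normal(size=(4, 5)), rng.normal(size=(5, 1))
    relu = lambda z: np.maximum(z, 0.0)
    MU = 1e-6  # the stabilizer mu, "a number close to 0"

    def lrp_contributions(x):
        # Contribution r_j of each input attribute to the output f(x),
        # propagated back layer by layer with the stabilized LRP rule.
        o0 = x                    # input-layer outputs: o_b = x_{i,j}
        o1 = relu(o0 @ W1)        # hidden-layer outputs
        f = float(o1 @ W2)        # model output; output-layer relevance = f(x)
        z1 = o1[:, None] * W2     # o_b * w_{b,c} toward the output layer
        r1 = (z1 / (z1.sum(axis=0) + MU) * f).sum(axis=1)
        z0 = o0[:, None] * W1     # o_b * w_{b,c} toward the hidden layer
        r0 = (z0 / (z0.sum(axis=0) + MU) * r1).sum(axis=1)
        return r0

    # Class contribution C_j: average of r_j over the n local data items.
    X = rng.uniform(size=(8, 4))  # n=8 items, u=4 attributes
    C = np.mean([lrp_contributions(x) for x in X], axis=0)
    print("class contributions C_j:", C)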
4) Privacy protection of the contribution-computation process: the computed contributions distinguish the attribute classes that matter most to the learning model's output. To protect the user's original data from being leaked or inferred, a differential privacy mechanism is used to perturb the contributions, i.e. Laplace noise is injected into the contribution of each data attribute class:

    Ĉ_j = C_j + Lap(GS_c / ε_c)

where Lap denotes the Laplace distribution, and GS_c is the preset sensitivity of the data attribute classes' contribution to the model output, reflecting the maximum difference of data between adjacent data-set pairs; under a fixed neural network structure this sensitivity is a constant, measured in the 1-norm ||·||_1. ε_c is the contribution privacy budget: a larger value means smaller noise, which yields higher system accuracy but a weaker privacy-protection level. The probability density function of the Laplace distribution with scale parameter a is

    p(x | a) = (1 / (2a)) · exp(-|x| / a),  where a = GS_c / ε_c.
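A short sketch of this perturbation step, reusing the class contributions C_j computed above (the GS_c and ε_c values are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    C = np.array([0.42, -0.10, 0.31, 0.05])  # class contributions from the LRP step
    GS_C, EPS_C = 1.0, 0.5                   # illustrative sensitivity and budget

    # Inject Lap(GS_c / eps_c) noise into each class contribution.
    C_noisy = C + rng.laplace(scale=GS_C / EPS_C, size=C.shape)
    print("perturbed contributions:", C_noisy)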
5. The participant trains the local model, comprising the following steps:
1) The participant selects t data tuples: in each round of training, participant U_g randomly selects from its local data set D_g a mini-batch of size t as the training data for this round;
2) Privacy protection of the data attributes x_{i,j}: using a random privacy-protection adjustment technique, two adjustment factors f and p are introduced, where f is a user-defined threshold that sets the user's privacy level and p is a probability value. The contribution ratio of data attribute class j is:

    β_j = |Ĉ_j| / Σ_{k=1}^{u} |Ĉ_k|

If the contribution ratio of an attribute class is greater than or equal to the threshold f, the class is defined as having a large contribution; to raise the privacy level of the model, Laplace noise is injected into all attributes of that class:

    x̂_{i,j} = x_{i,j} + Lap(GS_l / ε_j)

Attribute classes whose contribution ratio is smaller than the threshold f are defined as having a small contribution; to improve the accuracy of the model, noise is injected into their attributes only probabilistically, with probability p:

    x̂_{i,j} = x_{i,j} + Lap(GS_l / ε_j) with probability p,  x̂_{i,j} = x_{i,j} otherwise

The injection is adaptive: the privacy budget ε_j of each data attribute class j is allocated from the data attribute privacy budget ε_l according to the contribution ratio,

    ε_j = β_j · ε_l

so the adaptively injected noise is Lap(GS_l / ε_j), where GS_l is the data attribute sensitivity.
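The following sketch implements this adaptive injection (the sensitivity GS_l, threshold f, and probability p values are illustrative):

    import numpy as np

    rng = np.random.default_rng(2)

    def adaptive_inject(X, C_noisy, eps_l, gs_l=1.0, f=0.25, p=0.5):
        # Per-class budget eps_j = beta_j * eps_l; classes whose contribution
        # ratio beta_j >= f always receive Laplace noise, the rest only with
        # probability p.
        beta = np.abs(C_noisy) / np.abs(C_noisy).sum()  # contribution ratios
        eps_j = beta * eps_l                            # per-class budgets
        noise = rng.laplace(scale=gs_l / eps_j, size=X.shape)
        inject = np.where(beta >= f, True, rng.random(X.shape) < p)
        return X + noise * inject

    X = rng.uniform(size=(8, 4))                   # mini-batch, u=4 attributes
    C_noisy = np.array([0.42, -0.10, 0.31, 0.05])  # perturbed class contributions
    print(adaptive_inject(X, C_noisy, eps_l=1.0))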
3) Privacy protection of the training labels y_i: protection of the labels is realized by injecting Laplace noise into the loss function. In round r of training, the sigmoid function

    σ(z) = 1 / (1 + e^{-z})

is chosen as the activation function of the neurons; combined with the cross-entropy cost function, the Taylor expansion of the loss is:

    L(x_i) ≈ Σ_{k=1}^{v} Σ_{l=1}^{2} Σ_{q=0}^{2} (F_{l,k}^{(q)}(0) / q!) · z_k^q

Here F_{1,k} and F_{2,k} are the two parts of the cross-entropy cost function, y_i denotes the label of the i-th data item, k is a variable indexing the label type, and v is the total number of label classes, with F_{1,k}(z) = y_i · log(1 + e^{-z}) and F_{2,k}(z) = (1 - y_i) · log(1 + e^{z}). In F_{l,k}^{(q)}(0), the argument 0 means the functions are evaluated at z = 0, and the superscripts (0), (1) and (2) denote the 0th, 1st and 2nd derivatives, respectively. z_k is the k-th element of the output vector of the last hidden layer when the neural network processes data item x_i; by the structure of the neural network, every layer's input vector except the input layer's is the previous layer's output vector.
To protect the labels y_i of the training data, Laplace noise Lap(GS_f / ε_f) is injected separately into each polynomial coefficient F_{l,k}^{(q)}(0) / q!, where GS_f is the label sensitivity and ε_f the label privacy budget, so that the loss function satisfies ε_f-differential privacy; and ε_c + ε_l + ε_f = the preset total privacy budget.
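A sketch of this label-protection step for a single binary label (v = 1); the coefficient values log 2, 1/2 - y and 1/8 follow from differentiating F_{1,k} and F_{2,k} at z = 0, while the GS_f and ε_f values are illustrative:

    import numpy as np

    rng = np.random.default_rng(3)

    def noisy_loss_coefficients(y, gs_f, eps_f):
        # Second-order Taylor coefficients of the sigmoid cross-entropy loss
        # around z = 0: [log 2, 1/2 - y, 1/8], each perturbed with
        # Lap(GS_f / eps_f) so the loss satisfies eps_f-differential privacy.
        coeffs = np.array([np.log(2.0), 0.5 - y, 0.125])
        return coeffs + rng.laplace(scale=gs_f / eps_f, size=3)

    def perturbed_loss(z, noisy_coeffs):
        # Evaluate the perturbed polynomial loss c0 + c1*z + c2*z^2.
        return noisy_coeffs @ np.array([1.0, z, z * z])

    c = noisy_loss_coefficients(y=1, gs_f=1.0, eps_f=1.0)
    print("perturbed loss at z=0.3:", perturbed_loss(0.3, c))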
4) Optimize the local model to improve system accuracy: the model gradient is computed by differentiating the perturbed loss function,

    ∇w = ∂L̂(x_i) / ∂w_local

and, letting η be the learning rate of the local model, the local model is updated:

    w_local ← w_local - η · ∇w
5) The participant uploads its gradient information: let Δw_g denote the gradient vector computed by participant U_g in this round; the participant sends Δw_g to the cloud server.
6. Cloud server aggregation comprises the following step: the cloud server receives the gradient vectors sent by all participants and updates the global model. Letting η_global be the learning rate of the server's learning model, the model is updated according to the following formula:

    w_global ← w_global - η_global · Σ_g Δw_g
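A minimal server-side sketch of this aggregation (plain summation of the uploaded gradients is assumed; averaging over participants is an equally plausible reading):

    import numpy as np

    rng = np.random.default_rng(4)

    # Gradient vectors uploaded by G=3 participants (toy values).
    grads = [rng.normal(size=6) for _ in range(3)]

    w_global = np.zeros(6)   # global model parameters
    eta_global = 0.1         # server learning rate

    # Update the global model with the aggregated participant gradients.
    w_global -= eta_global * np.sum(grads, axis=0)
    print("updated w_global:", w_global)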
In summary, the invention provides an adaptive privacy-preserving federated deep learning method that protects users' original data in federated deep learning from being learned by a curious server, and prevents the learning model's parameters from leaking information about that data.

Claims (2)

1. An adaptive privacy-preserving federated deep learning method, characterized by comprising the following steps:
1) System initialization: each participant negotiates a deep learning model with the cloud server; the cloud server then selects public data of the same type as the user data as training data and trains the deep learning model to obtain its model parameters w_global; the training data comprise a number of data items with corresponding labels, each data item consisting of several data attributes;
2) The participants initialize local models: the server broadcasts the model parameters w_global of the deep learning model to the participants, who download w_global as the initial parameters of their local deep learning models;
3) The participants update the deep learning model with local user data:
3-1) compute the contribution of each data item to the model output using the layer-wise relevance propagation algorithm;
3-2) aggregate and average the contributions of the data attributes belonging to the same data attribute class to obtain the contribution of that data attribute class;
3-3) apply differential privacy protection to the contributions: inject Laplace noise into the contribution of each data attribute class so that the noisy contribution satisfies differential privacy with contribution privacy budget ε_c;
4) The participants run a mini-batch gradient descent algorithm to optimize the local model:
4-1) each participant selects a number of data items and corresponding labels as training data;
4-2) apply differential privacy protection to the data attributes: Laplace noise is injected adaptively into each data attribute according to the contribution degree of the data attribute class it belongs to; the privacy budget allocated to an attribute grows with the contribution of its class, so attributes of classes with larger contributions receive smaller noise, and the noisy data attribute satisfies differential privacy with data attribute privacy budget ε_l;
the specific method of adaptively injecting Laplace noise into a data attribute according to the contribution degree of its data attribute class is as follows:
compute the contribution ratio β of the data attribute class; when the contribution ratio of the data attribute class is greater than or equal to a preset threshold, inject Laplace noise into all the data attributes of that class; when the contribution ratio of the data attribute class is smaller than the preset threshold, inject Laplace noise into the class's data attributes with a preset probability; the noise is injected by allocating the privacy budget of the data attribute class to the data attribute, the class privacy budget being the data attribute privacy budget ε_l multiplied by the class's contribution ratio; the contribution ratio of a data attribute class is the ratio of the absolute value of that class's contribution after Laplace noise injection to the sum of the absolute values of all classes' contributions after Laplace noise injection;
4-3) apply differential privacy protection to the labels of the training data: expand the loss function into polynomial form by a Taylor expansion, then inject Laplace noise into the polynomial coefficients so that the loss function satisfies differential privacy with label privacy budget ε_f; the sum of the contribution privacy budget ε_c, the data attribute privacy budget ε_l and the label privacy budget ε_f is a preset total privacy budget;
4-4) compute the model gradient by differentiating the loss function, and update the local model;
4-5) each participant uploads its model gradient to the cloud server;
5) Cloud server aggregation: the cloud server collects the model gradients sent by the participants and updates the global model on the cloud server.
2. The method of claim 1, wherein step 3-1) is preceded by a data normalization process on the data attributes.
CN201910563455.7A 2019-06-26 2019-06-26 Adaptive privacy-protecting federal deep learning method Active CN110443063B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910563455.7A CN110443063B (en) 2019-06-26 2019-06-26 Adaptive privacy-protecting federal deep learning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910563455.7A CN110443063B (en) 2019-06-26 2019-06-26 Adaptive privacy-protecting federal deep learning method

Publications (2)

Publication Number Publication Date
CN110443063A CN110443063A (en) 2019-11-12
CN110443063B (en) 2023-03-28

Family

ID=68428977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910563455.7A Active CN110443063B (en) 2019-06-26 2019-06-26 Adaptive privacy-protecting federal deep learning method

Country Status (1)

Country Link
CN (1) CN110443063B (en)

Families Citing this family (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111079977B (en) * 2019-11-18 2023-06-20 中国矿业大学 Heterogeneous federal learning mine electromagnetic radiation trend tracking method based on SVD algorithm
CN111222646B (en) * 2019-12-11 2021-07-30 深圳逻辑汇科技有限公司 Design method and device of federal learning mechanism and storage medium
CN111046433B (en) * 2019-12-13 2021-03-05 支付宝(杭州)信息技术有限公司 Model training method based on federal learning
CN111125779A (en) * 2019-12-17 2020-05-08 山东浪潮人工智能研究院有限公司 Block chain-based federal learning method and device
CN111091199B (en) * 2019-12-20 2023-05-16 哈尔滨工业大学(深圳) Federal learning method, device and storage medium based on differential privacy
CN111079022B (en) * 2019-12-20 2023-10-03 深圳前海微众银行股份有限公司 Personalized recommendation method, device, equipment and medium based on federal learning
CN111143878B (en) * 2019-12-20 2021-08-03 支付宝(杭州)信息技术有限公司 Method and system for model training based on private data
CN111190487A (en) * 2019-12-30 2020-05-22 中国科学院计算技术研究所 Method for establishing data analysis model
CN111209478A (en) * 2020-01-03 2020-05-29 京东数字科技控股有限公司 Task pushing method and device, storage medium and electronic equipment
CN111241580B (en) * 2020-01-09 2022-08-09 广州大学 Trusted execution environment-based federated learning method
CN111241582B (en) * 2020-01-10 2022-06-10 鹏城实验室 Data privacy protection method and device and computer readable storage medium
CN113191479A (en) * 2020-01-14 2021-07-30 华为技术有限公司 Method, system, node and storage medium for joint learning
CN111245610B (en) * 2020-01-19 2022-04-19 浙江工商大学 Data privacy protection deep learning method based on NTRU homomorphic encryption
CN111310932A (en) * 2020-02-10 2020-06-19 深圳前海微众银行股份有限公司 Method, device and equipment for optimizing horizontal federated learning system and readable storage medium
CN113312543A (en) * 2020-02-27 2021-08-27 华为技术有限公司 Personalized model training method based on joint learning, electronic equipment and medium
CN111428881B (en) * 2020-03-20 2021-12-07 深圳前海微众银行股份有限公司 Recognition model training method, device, equipment and readable storage medium
CN111428885B (en) * 2020-03-31 2021-06-04 深圳前海微众银行股份有限公司 User indexing method in federated learning and federated learning device
CN111581648B (en) * 2020-04-06 2022-06-03 电子科技大学 Method of federal learning to preserve privacy in irregular users
CN111177791B (en) * 2020-04-10 2020-07-17 支付宝(杭州)信息技术有限公司 Method and device for protecting business prediction model of data privacy joint training by two parties
CN111177768A (en) * 2020-04-10 2020-05-19 支付宝(杭州)信息技术有限公司 Method and device for protecting business prediction model of data privacy joint training by two parties
CN111581663B (en) * 2020-04-30 2022-05-03 电子科技大学 Federal deep learning method for protecting privacy and facing irregular users
US11651292B2 (en) * 2020-06-03 2023-05-16 Huawei Technologies Co., Ltd. Methods and apparatuses for defense against adversarial attacks on federated learning systems
CN111783142B (en) 2020-07-06 2021-10-08 北京字节跳动网络技术有限公司 Data protection method, device, server and medium
CN111985650B (en) * 2020-07-10 2022-06-28 华中科技大学 Activity recognition model and system considering both universality and individuation
CN112101403B (en) * 2020-07-24 2023-12-15 西安电子科技大学 Classification method and system based on federal few-sample network model and electronic equipment
CN111935168A (en) * 2020-08-19 2020-11-13 四川大学 Industrial information physical system-oriented intrusion detection model establishing method
CN112185395B (en) * 2020-09-04 2021-04-27 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Federal voiceprint recognition method based on differential privacy
CN114257386B (en) * 2020-09-10 2023-03-21 华为技术有限公司 Training method, system, equipment and storage medium for detection model
CN112329940A (en) * 2020-11-02 2021-02-05 北京邮电大学 Personalized model training method and system combining federal learning and user portrait
CN114580651A (en) * 2020-11-30 2022-06-03 华为技术有限公司 Federal learning method, device, equipment, system and computer readable storage medium
CN112600697B (en) * 2020-12-07 2023-03-14 中山大学 QoS prediction method and system based on federal learning, client and server
CN112507312B (en) * 2020-12-08 2022-10-14 电子科技大学 Digital fingerprint-based verification and tracking method in deep learning system
CN112487481B (en) * 2020-12-09 2022-06-10 重庆邮电大学 Verifiable multi-party k-means federal learning method with privacy protection
CN112487479B (en) * 2020-12-10 2023-10-13 支付宝(杭州)信息技术有限公司 Method for training privacy protection model, privacy protection method and device
CN112611080A (en) * 2020-12-10 2021-04-06 浙江大学 Intelligent air conditioner control system and method based on federal learning
CN112487482B (en) * 2020-12-11 2022-04-08 广西师范大学 Deep learning differential privacy protection method of self-adaptive cutting threshold
CN112668044B (en) * 2020-12-21 2022-04-12 中国科学院信息工程研究所 Privacy protection method and device for federal learning
CN112910624B (en) * 2021-01-14 2022-05-10 东北大学 Ciphertext prediction method based on homomorphic encryption
CN112949865B (en) * 2021-03-18 2022-10-28 之江实验室 Joint learning contribution degree evaluation method based on SIGMA protocol
CN113222211B (en) * 2021-03-31 2023-12-12 中国科学技术大学先进技术研究院 Method and system for predicting pollutant emission factors of multi-region diesel vehicle
CN112799708B (en) * 2021-04-07 2021-07-13 支付宝(杭州)信息技术有限公司 Method and system for jointly updating business model
CN113434873A (en) * 2021-06-01 2021-09-24 内蒙古大学 Federal learning privacy protection method based on homomorphic encryption
CN113268772B (en) * 2021-06-08 2022-12-20 北京邮电大学 Joint learning security aggregation method and device based on differential privacy
CN113902122A (en) * 2021-08-26 2022-01-07 杭州城市大脑有限公司 Federal model collaborative training method and device, computer equipment and storage medium
WO2023082787A1 (en) * 2021-11-10 2023-05-19 新智我来网络科技有限公司 Method and apparatus for determining contribution degree of participant in federated learning, and federated learning training method and apparatus
CN114548373B (en) * 2022-02-17 2024-03-26 河北师范大学 Differential privacy deep learning method based on feature region segmentation
CN114912624A (en) * 2022-04-12 2022-08-16 支付宝(杭州)信息技术有限公司 Longitudinal federal learning method and device for business model
CN114463601B (en) * 2022-04-12 2022-08-05 北京云恒科技研究院有限公司 Big data-based target identification data processing system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704925A (en) * 2017-10-16 2018-02-16 清华大学 The visual analysis system and method for deep neural network training process
CN108446568A (en) * 2018-03-19 2018-08-24 西北大学 A kind of histogram data dissemination method going trend analysis difference secret protection
CN108712260A (en) * 2018-05-09 2018-10-26 曲阜师范大学 The multi-party deep learning of privacy is protected to calculate Proxy Method under cloud environment
CN109299436A (en) * 2018-09-17 2019-02-01 北京邮电大学 A kind of ordering of optimization preference method of data capture meeting local difference privacy
CN109495476A (en) * 2018-11-19 2019-03-19 中南大学 A kind of data flow difference method for secret protection and system based on edge calculations
CN109684855A (en) * 2018-12-17 2019-04-26 电子科技大学 A kind of combined depth learning training method based on secret protection technology

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
A generic framework for privacy preserving deep learning;Theo Ryffel et al.;《machine learning》;20181109;full text *
Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning;NhatHai Phan et al.;《2017 IEEE International Conference on Data Mining (ICDM)》;20171218;full text *
On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation;Sebastian Bach et al.;《PLOS ONE》;20150710;full text *
Privacy-Preserving Deep Learning;Reza Shokri et al.;《Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security》;20151031;full text *
The LRP Toolbox for Artificial Neural Networks;Sebastian Lapuschkin et al.;《Journal of Machine Learning Research》;20161231;full text *
Towards Efficient and Privacy-Preserving Federated Deep Learning;Meng Hao et al.;《ICC 2019 - 2019 IEEE International Conference on Communications (ICC)》;20190524;full text *
Deep differential privacy protection method based on DCGAN feedback;Mao Dianhui et al.;《Journal of Beijing University of Technology》;20180424;full text *
Research progress on machine learning security and privacy protection;Song Lei et al.;《Chinese Journal of Network and Information Security》;20180815;full text *

Also Published As

Publication number Publication date
CN110443063A (en) 2019-11-12

Similar Documents

Publication Publication Date Title
CN110443063B (en) Adaptive privacy-protecting federal deep learning method
Thapa et al. Splitfed: When federated learning meets split learning
Wang et al. Beyond inferring class representatives: User-level privacy leakage from federated learning
CN112364943B (en) Federal prediction method based on federal learning
Chandra Competition and collaboration in cooperative coevolution of Elman recurrent neural networks for time-series prediction
CN113420232B (en) Privacy protection-oriented federated recommendation method for neural network of graph
CN111242290B (en) Lightweight privacy protection generation countermeasure network system
CN112862001A (en) Decentralized data modeling method under privacy protection
CN113298268B (en) Vertical federal learning method and device based on anti-noise injection
CN107609630A (en) A kind of depth confidence network parameter optimization method and system based on artificial bee colony
CN106528586A (en) Human behavior video identification method
CN113435592A (en) Privacy-protecting neural network multi-party cooperative lossless training method and system
CN114362948B (en) Federated derived feature logistic regression modeling method
CN115952532A (en) Privacy protection method based on federation chain federal learning
Mao et al. A novel user membership leakage attack in collaborative deep learning
CN115510472B (en) Multi-difference privacy protection method and system for cloud edge aggregation system
CN112101555A (en) Method and device for multi-party combined training model
CN110991462B (en) Privacy protection CNN-based secret image identification method and system
Miyajima et al. A proposal of profit sharing method for secure multiparty computation
Reiffers-Masson et al. Opinion-based centrality in multiplex networks: A convex optimization approach
CN116865938A (en) Multi-server federation learning method based on secret sharing and homomorphic encryption
Miyajima et al. Fast and secure back-propagation learning using vertically partitioned data with IoT
WO2023029324A1 (en) Marketing arbitrage underground industry identification method based on dynamic attention graph network
Wu et al. Efficient privacy-preserving federated learning for resource-constrained edge devices
Zhang et al. Privacy-preserving federated learning on partitioned attributes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant