CN110443063A - Adaptive privacy-preserving federated deep learning method - Google Patents
Adaptive privacy-preserving federated deep learning method
- Publication number
- CN110443063A (application CN201910563455.7A; granted as CN110443063B)
- Authority
- CN
- China
- Prior art keywords
- data
- data attribute
- model
- participant
- contribution degree
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Bioethics (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Computer Security & Cryptography (AREA)
- Computer Hardware Design (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention proposes an adaptive privacy-preserving federated deep learning method that prevents a curious server from learning the users' raw data in federated deep learning, while also ensuring that the parameters of the learning model do not leak information about that raw data. Each participant negotiates a network architecture with the cloud server in advance; the cloud server then obtains an initialized model and broadcasts it to each participant. Each participant downloads the initialized model parameters, updates its own local model, and trains it on its local data set. Based on the differing contributions of the data attributes to the model output, differentiated privacy-protection operations are applied to the different data features, and each participant sends the locally trained gradients to the cloud server. Finally, after collecting the gradient information from each participant, the cloud server updates its own model for subsequent training. The invention greatly improves the accuracy of the learning model while still satisfying the privacy-protection requirement.
Description
Technical field
The present invention relates to artificial intelligence technology.
Background technique
Conventional centralized deep learning requires user data to be gathered in a data center, so users lose control over their own data, which may be abused by data consumers or used to infer further private user information. Federated deep learning, proposed by Google, can solve problems such as the privacy, location, and usage rights of user data.
Federated deep learning allows many participants to jointly learn a common model without disclosing their own data sets. Each participant trains a local model on its own local data set and shares the resulting training gradients with the other participants; a cloud server or some user aggregates the training gradients from all participants to obtain a "common" model, which also prevents a user's local model from overfitting locally.
The differential privacy mechanism is a statistical cryptographic technique commonly used to protect user data privacy by removing individual features while retaining the statistical characteristics of the data. ε-differential privacy is usually realized with the Laplace mechanism: noise obeying the Laplace distribution is injected into a data item so that it satisfies differential privacy with privacy budget ε. The larger the privacy budget, the lower the level of privacy protection. In practice, the serial and parallel composition properties of differential privacy are usually exploited together to make its application more flexible.
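The Laplace mechanism described above can be sketched in a few lines of Python. This is an illustrative example, not code from the patent; the inverse-CDF sampler and the function names are our own:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of the Laplace distribution Lap(0, scale):
    # draw u uniformly in (-0.5, 0.5), then map through the inverse CDF.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def laplace_mechanism(value: float, sensitivity: float, epsilon: float) -> float:
    # epsilon-differential privacy for a numeric query result:
    # add noise drawn from Lap(sensitivity / epsilon).
    return value + laplace_noise(sensitivity / epsilon)
```

A smaller epsilon yields a larger noise scale and therefore stronger protection, which matches the budget/noise trade-off described in the text.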
Privacy-protection mechanisms based on techniques such as secure multi-party computation, homomorphic encryption, and differential privacy already exist. Considering the growth trend of future data, however, and compared with secure multi-party computation, which sacrifices communication overhead, and homomorphic encryption, which demands heavy computation, the differential privacy mechanism offers good efficiency. Nevertheless, the differential privacy mechanism needs to balance data privacy against model accuracy.
Summary of the invention
The technical problem to be solved by the invention is to provide an adaptive federated deep learning method that can guarantee the accuracy of the trained model and high efficiency in large-scale user scenarios, while preventing the server from inferring the model parameters and the privacy of the users' training data.
The technical scheme adopted by the invention to solve the above technical problem is an adaptive privacy-preserving federated deep learning method, comprising the following steps:
1) System initialization: each participant and the cloud server first negotiate a deep learning model; the cloud server then selects public data of the same type as the user data as training data and trains the deep learning model, obtaining the model parameters w_global of the deep learning model. The training data comprises several data items and corresponding labels, and each data item is composed of several data attributes.
2) Participant initializes the local model: the server broadcasts the model parameters w_global of the deep learning model to each participant; each participant downloads the model parameters w_global as the initial model parameters of its local deep learning model.
3) Participant updates the deep learning model with local user data:
3-1) the contribution of each data item to the model output is calculated with the layer-wise relevance propagation algorithm;
3-2) the contributions of the data attributes belonging to the same data attribute class are aggregated and averaged, giving the contribution of each data attribute class;
3-3) differential privacy protection is applied to the contributions: Laplace noise is injected into the contribution of each data attribute class so that the contribution after Laplace-noise injection satisfies differential privacy with contribution privacy budget ε_c.
4) Participant runs the mini-batch gradient descent algorithm to optimize the local model:
4-1) the participant selects several data items and corresponding labels as training data;
4-2) differential privacy protection is applied to the data attributes: Laplace noise is adaptively injected into each data attribute according to the contribution of the data attribute class it belongs to; the magnitude of the injected Laplace noise is positively correlated with the contribution of that data attribute class, and the data attribute after Laplace-noise injection satisfies differential privacy with data attribute privacy budget ε_l;
4-3) differential privacy protection is applied to the labels of the training data: the loss function is expanded into polynomial form by Taylor expansion, and Laplace noise is then injected into the polynomial coefficients so that the loss function satisfies differential privacy with label privacy budget ε_f; the sum of the contribution privacy budget ε_c, the data attribute privacy budget ε_l, and the label privacy budget ε_f is a preset total privacy budget;
4-4) the model gradient is derived from the loss function and the local model is updated;
4-5) the participant uploads the model gradient to the cloud server.
5) Cloud server aggregation: the cloud server collects the model gradients sent by each participant and updates the global model on the cloud server.
By combining federated deep learning with the differential privacy technique, the contribution of each user data attribute to the model output can be calculated with the layer-wise relevance propagation algorithm. A participant can customize the privacy level of its data with the randomized privacy-protection adjustment technique and then, according to the contributions, adaptively inject Laplace noise into the user's data attributes. Data attributes with smaller contributions are perturbed with a larger level of noise, which increases the privacy of the system; into data attributes with larger contributions, smaller noise is injected, which effectively increases the accuracy of the model.
The advantage of the invention is that, under the premise of guaranteeing the system's privacy-protection level, the accuracy of the system is improved to the greatest extent.
Detailed description of the invention
Fig. 1 is a schematic diagram of the system;
Fig. 2 is a schematic diagram of the layer-wise relevance propagation algorithm.
Specific embodiment
1. The system model of the invention is shown in Fig. 1.
2. System initialization, comprising the following steps:
1) The participants U_g need to cooperate to obtain a learning model that is more accurate and does not overfit locally. The participants U_g and the cloud server negotiate a deep learning network in advance, such as a convolutional neural network (CNN) or a recurrent neural network (RNN).
2) The server trains with some public data matching the type of the user data and obtains the initialized deep learning model parameters w_global.
3. Participant initializes the local model, comprising the following steps:
1) The server broadcasts its deep learning model parameters w_global.
2) Each participant U_g downloads the initialized model parameters w_global and updates its own local learning model w_local.
4. The participant preprocesses the local data, comprising the following steps:
1) Data normalization: each participant U_g owns a local data set D_g, where g is the data-set index. The data set D_g contains n data items x_i with corresponding labels y_i, and each data item consists of u data attributes x_{i,j}: x_{i,1}, x_{i,2}, …, x_{i,u}. The labels satisfy y_i ∈ [1, v], one data attribute of a data item corresponds to one data attribute type, and i ∈ [1, n], j ∈ [1, u]. A data normalization operation limits the range of the data attribute values; this operation can speed up training.
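The patent's normalization formula itself is not reproduced in the text above. A common choice consistent with "limiting the range of the attribute values" is per-attribute min-max scaling into [0, 1], sketched here purely as an assumption:

```python
def min_max_normalize(dataset):
    # dataset: list of data items, each a list of u attribute values.
    # Rescales every attribute (column) j into [0, 1]; constant columns map to 0.
    u = len(dataset[0])
    lo = [min(row[j] for row in dataset) for j in range(u)]
    hi = [max(row[j] for row in dataset) for j in range(u)]
    return [[(row[j] - lo[j]) / (hi[j] - lo[j]) if hi[j] > lo[j] else 0.0
             for j in range(u)]
            for row in dataset]
```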
2) With the layer-wise relevance propagation algorithm, the participant can calculate the contribution of data attribute x_{i,j} in an individual data item to the model output. The number of neurons on the input layer l_input equals the number of data attributes x_{i,j}; layer-wise relevance propagation is illustrated in Fig. 2.
With the initialized local model parameters w_local and the data in the local data set D_g, a feed-forward pass of the network yields the model prediction y = f(x_i). The contribution R_b of data item x_i through a neuron a_b on the output layer l_o to the model output is the output value f(x_i) of the model; b and c are neuron-index variables.
The output value is then propagated back layer by layer. Through the linear relations between the network layers, the contribution R_{b←c} of data attribute x_i propagated from neuron a_c on the k-th layer l_k to neuron a_b on the (k−1)-th layer l_{k−1} can be calculated as follows:
R_{b←c} = (z_b · w_{b,c}) / (Σ_{b'} z_{b'} · w_{b',c} + μ) · R_c,
where w_{b,c} is the weight connecting neuron a_b and neuron a_c, z_b is the input value of neuron a_b, and μ is a number approaching 0, here taken as 10^{-6}.
By the above formula, the contribution R_b of data item x_i through neuron a_b on the k-th layer to the model output can be calculated: it is the sum of the contributions propagated from every neuron a_c ∈ l_{k+1} on the (k+1)-th layer connected to neuron a_b on the k-th layer, specifically:
R_b = Σ_{a_c ∈ l_{k+1}} R_{b←c}.
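The backward relevance rule above can be sketched for a single linear layer as follows. This is an illustrative Python sketch (the function name and the list-based layer representation are our own); μ is the small stabilizer the text sets to 10^{-6}:

```python
def lrp_backward(activations, weights, relevance_out, mu=1e-6):
    # Propagate output relevance back through one linear layer.
    #   activations   : input values z_b of the neurons on layer k-1
    #   weights       : weights[b][c] connecting neuron b (layer k-1) to c (layer k)
    #   relevance_out : relevance R_c of each neuron on layer k
    # Returns the relevance R_b of each neuron on layer k-1.
    n_in, n_out = len(activations), len(relevance_out)
    relevance_in = [0.0] * n_in
    for c in range(n_out):
        denom = sum(activations[b] * weights[b][c] for b in range(n_in)) + mu
        for b in range(n_in):
            # Share of R_c attributed to neuron b, per the propagation rule.
            relevance_in[b] += (activations[b] * weights[b][c] / denom) * relevance_out[c]
    return relevance_in
```

A useful sanity check is the (approximate) conservation property of relevance propagation: the total relevance of a layer is preserved up to the μ stabilizer.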
3) The participant calculates the contribution of each data attribute class to the model output: the number of neurons on the input layer l_input equals the number of data attributes x_{i,j}, and one neuron corresponds to one kind of data attribute (a data attribute class). Since the function of the input-layer neurons is simply to convert the received data (pictures, sound, etc.) into numerical values, the output of input-layer neuron a_b represents data attribute x_{i,j}; because the number of input-layer neurons equals the number of data attributes, b ∈ [1, u]. When the contributions of the data attributes are calculated with the layer-wise propagation algorithm, the contribution extracted at an input-layer neuron represents the contribution of the corresponding data attribute class. By combining the contributions of the data attributes of the same data attribute class across the n data items, the participant can calculate the contribution C_j of every data attribute class j, specifically as the average over the n data items:
C_j = (1/n) · Σ_{i=1}^{n} R_j(x_i),
where R_j(x_i) is the contribution of attribute j of data item x_i.
4) Privacy-protection operation for the contribution-calculation process: the contributions calculated above reveal which attribute classes are comparatively "important" to the output of the learning model. To keep the users' raw data from being leaked or inferred, the differential privacy mechanism is used to perturb the contributions, that is, Laplace noise is injected into the contribution of each data attribute class, specifically:
C̃_j = C_j + Lap(GS_c / ε_c).
Here Lap denotes the Laplace distribution with the probability density function given below; GS_c is the preset sensitivity of the contribution of a data attribute class to the model output, reflecting the maximum difference of the contribution between adjacent data sets, and is a fixed value under a determined neural network structure; ε_c is the privacy budget of the contributions, and a larger value means a smaller level of noise, which leads to higher system accuracy but provides weaker privacy protection. The probability density function of the Laplace distribution is:
Lap(x | a) = (1/(2a)) · e^{−|x|/a},
where a is the scale parameter; here a = GS_c / ε_c, and ||·||_1 denotes the 1-norm.
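The perturbation of the class contributions, together with the contribution ratios used later for budget allocation (claim 2 defines the ratio as the absolute value of a class's noisy contribution over the sum of the absolute values of all noisy contributions), might be sketched as follows. This is illustrative Python; the function names are our own:

```python
import math
import random

def sample_laplace(scale):
    # Inverse-CDF sampling of Lap(0, scale).
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def perturb_contributions(contributions, gs_c, eps_c):
    # Inject Lap(GS_c / eps_c) noise into each class contribution C_j,
    # then compute the contribution ratio of each class from the noisy values.
    noisy = [c + sample_laplace(gs_c / eps_c) for c in contributions]
    total = sum(abs(c) for c in noisy)
    ratios = [abs(c) / total for c in noisy]
    return noisy, ratios
```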
5. The participant trains the local model, comprising the following steps:
1) The participant selects t data tuples: in each round of training, participant U_g randomly selects a mini-batch data set of size t from the local data set D_g as the training data of this round.
2) Privacy protection for the data attributes x_{i,j}: using the randomized privacy-protection adjustment technique, we introduce two adjustment factors f and p, where f is a user-defined threshold that defines the privacy level of the user, and p is a probability value. The contribution ratio is:
β_j = |C̃_j| / Σ_{j'} |C̃_{j'}|.
If the contribution ratio of a user attribute class is greater than or equal to the threshold, the class is defined as contributing more; to raise the privacy level of the model, Laplace noise is injected into all attributes of that class. User attribute classes whose contribution ratio is less than the threshold f are defined as contributing less; to improve the accuracy of the model, noise is injected into these attributes probabilistically, specifically with probability p.
The mode of noise injection is adaptive. Let the privacy budget ε_j of each data attribute class be:
ε_j = β_j · ε_l,
i.e. the data attribute privacy budget ε_l is allocated to the classes in proportion to their contribution ratios. The adaptively injected noise is Lap(GS_l / ε_j), where GS_l is the data attribute sensitivity.
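The adaptive attribute perturbation with threshold f, probability p, and per-class budget ε_j = β_j · ε_l might look like the sketch below. This is illustrative Python, not the patent's implementation; in particular, the floor on very small ratios is our own guard against a zero budget:

```python
import math
import random

def sample_laplace(scale):
    # Inverse-CDF sampling of Lap(0, scale).
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def perturb_attributes(item, ratios, f, p, eps_l, gs_l):
    # item   : attribute values x_{i,1..u} of one data item
    # ratios : contribution ratio beta_j of each attribute class
    # f      : user-defined threshold on beta_j; p : injection probability
    noisy = []
    for x, beta in zip(item, ratios):
        eps_j = max(beta, 1e-6) * eps_l        # per-class budget eps_j = beta * eps_l
        if beta >= f or random.random() < p:   # always inject for "large" classes,
            x = x + sample_laplace(gs_l / eps_j)  # probabilistically otherwise
        noisy.append(x)
    return noisy
```

Note the trade-off this encodes: a smaller ratio β_j yields a smaller budget ε_j and hence larger noise when injection does occur, but injection happens only with probability p.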
3) Privacy protection for the labels y_i of the training data: the labels are protected by injecting Laplace noise into the loss function. In round r of training, when we select the sigmoid function σ(z) = 1/(1 + e^{−z}) as the activation function of the neurons, the Taylor expansion of the cross-entropy cost function is:
F ≈ Σ_{k=1}^{v} Σ_{q=0}^{2} ((F1,k^{(q)}(0) + F2,k^{(q)}(0)) / q!) · z_k^q.
Here the expression of the cross-entropy cost function has two parts: y_i denotes the label of the i-th data item, k is the variable indexing the label types, v is the total number of label classes, F1,k(z) = y_i · log(1 + e^{−z}) and F2,k(z) = (1 − y_i) · log(1 + e^{z}); the 0 indicates that the variable is 0, and the superscripts (0), (1), (2) denote the 0th, 1st, and 2nd derivatives, respectively. z is the output vector of the last hidden layer when the neural network processes data item x_i; owing to the structure of the neural network, the input vector of every layer other than the input layer is the output vector of the previous layer.
To protect the labels y_i of the training data, Laplace noise is injected separately into the polynomial coefficients (F1,k^{(q)}(0) + F2,k^{(q)}(0)). Let the label sensitivity be GS_f and the privacy budget be ε_f; the loss function then satisfies ε_f-differential privacy, and ε_c + ε_l + ε_f = the preset total privacy budget.
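A functional-mechanism-style sketch of this label protection is given below, using the second-order Taylor coefficients of the sigmoid cross-entropy around z = 0 (log(1+e^{−z}) ≈ log 2 − z/2 + z²/8 and log(1+e^{z}) ≈ log 2 + z/2 + z²/8). The batching over labels and the uniform noise scale GS_f/ε_f per coefficient are our assumptions, not details stated by the patent:

```python
import math
import random

def sample_laplace(scale):
    # Inverse-CDF sampling of Lap(0, scale).
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def noisy_loss_coefficients(labels, eps_f, gs_f):
    # Second-order Taylor coefficients [c0, c1, c2] of the sigmoid
    # cross-entropy loss in z, summed over a batch of 0/1 labels,
    # each perturbed with Lap(GS_f / eps_f) noise.
    c0 = c1 = c2 = 0.0
    for y in labels:
        c0 += math.log(2.0)                 # y*log2 + (1-y)*log2
        c1 += -y * 0.5 + (1 - y) * 0.5      # -y/2 from F1, +(1-y)/2 from F2
        c2 += 0.125                         # y/8 + (1-y)/8
    scale = gs_f / eps_f
    return [c0 + sample_laplace(scale),
            c1 + sample_laplace(scale),
            c2 + sample_laplace(scale)]
```

Because the labels enter the loss only through these coefficients, perturbing the coefficients once protects the labels for the whole optimization.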
4) Optimize the local model and improve the accuracy of the system: the model gradient g_local is obtained by differentiating the loss function. Let η be the learning rate of the local model; the local model is updated as:
w_local ← w_local − η · g_local.
5) The participant uploads the gradient information: the participant sends the gradient vector g_local to the cloud server.
6. Cloud server aggregation, comprising the following step: the cloud server receives the gradient information vectors sent by all parties and updates its model. Let η_global be the learning rate of the server's learning model; the model is updated with the aggregated gradients by gradient descent.
In summary, the invention proposes an adaptive privacy-preserving federated deep learning method. The scheme can protect the users' raw data in federated deep learning from being learned by a curious server, while also ensuring that the parameters of the learning model do not leak information about the users' raw data.
Claims (3)
1. An adaptive privacy-preserving federated deep learning method, characterized by comprising the following steps:
1) system initialization: each participant and the cloud server first negotiate a deep learning model; the cloud server then selects public data of the same type as the user data as training data and trains the deep learning model, obtaining the model parameters w_global of the deep learning model; the training data comprises several data items and corresponding labels, and each data item is composed of several data attributes;
2) participant initializes the local model: the server broadcasts the model parameters w_global of the deep learning model to each participant; each participant downloads the model parameters w_global as the initial model parameters of its local deep learning model;
3) participant updates the deep learning model with local user data:
3-1) the contribution of each data item to the model output is calculated with the layer-wise relevance propagation algorithm;
3-2) the contributions of the data attributes belonging to the same data attribute class are aggregated and averaged, giving the contribution of each data attribute class;
3-3) differential privacy protection is applied to the contributions: Laplace noise is injected into the contribution of each data attribute class so that the contribution after Laplace-noise injection satisfies differential privacy with contribution privacy budget ε_c;
4) participant runs the mini-batch gradient descent algorithm to optimize the local model:
4-1) the participant selects several data items and corresponding labels as training data;
4-2) differential privacy protection is applied to the data attributes: Laplace noise is adaptively injected into each data attribute according to the contribution of the data attribute class it belongs to; the magnitude of the injected Laplace noise is positively correlated with the contribution of that data attribute class, and the data attribute after Laplace-noise injection satisfies differential privacy with data attribute privacy budget ε_l;
4-3) differential privacy protection is applied to the labels of the training data: the loss function is expanded into polynomial form by Taylor expansion, and Laplace noise is then injected into the polynomial coefficients so that the loss function satisfies differential privacy with label privacy budget ε_f; the sum of the contribution privacy budget ε_c, the data attribute privacy budget ε_l, and the label privacy budget ε_f is a preset total privacy budget;
4-4) the model gradient is derived from the loss function and the local model is updated;
4-5) the participant uploads the model gradient to the cloud server;
5) cloud server aggregation: the cloud server collects the model gradients sent by each participant and updates the global model on the cloud server.
2. The method as described in claim 1, characterized in that the specific method of adaptively injecting Laplace noise into a data attribute according to the contribution of the data attribute class it belongs to is:
calculate the contribution ratio β of each data attribute class; when the contribution ratio of a data attribute class is greater than or equal to a preset threshold, inject Laplace noise into all data attributes in the class; when the contribution ratio of a data attribute class is less than the preset threshold, inject Laplace noise into the data attributes in the class with a predetermined probability; wherein the mode of noise injection allocates a privacy budget to the data attributes of the corresponding data attribute class, the privacy budget of a data attribute class being the product of the data attribute privacy budget ε_l and the contribution ratio of that class; the contribution ratio of a data attribute class is the ratio between the absolute value of the contribution of that class after Laplace-noise injection and the sum of the absolute values of the contributions of all classes after Laplace-noise injection.
3. The method as described in claim 1, characterized in that the data attributes are first normalized before step 3-1).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910563455.7A CN110443063B (en) | 2019-06-26 | 2019-06-26 | Adaptive privacy-protecting federal deep learning method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110443063A (application) | 2019-11-12
CN110443063B (grant) | 2023-03-28
Family
ID=68428977
Application Events

Date | Event |
---|---|
2019-06-26 | Application CN201910563455.7A filed in China (CN); granted as patent CN110443063B, status Active |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107704925A (en) * | 2017-10-16 | 2018-02-16 | 清华大学 | Visual analysis system and method for the deep neural network training process |
CN108446568A (en) * | 2018-03-19 | 2018-08-24 | 西北大学 | Histogram data publishing method with de-trend-analysis differential privacy protection |
CN108712260A (en) * | 2018-05-09 | 2018-10-26 | 曲阜师范大学 | Proxy method for privacy-preserving multi-party deep learning computation in cloud environments |
CN109299436A (en) * | 2018-09-17 | 2019-02-01 | 北京邮电大学 | Optimized ranking-preference data collection method satisfying local differential privacy |
CN109495476A (en) * | 2018-11-19 | 2019-03-19 | 中南大学 | Differential privacy protection method and system for data streams based on edge computing |
CN109684855A (en) * | 2018-12-17 | 2019-04-26 | 电子科技大学 | Joint deep learning training method based on privacy protection technology |
Non-Patent Citations (8)
Title |
---|
MENG HAO ET AL.: "Towards Efficient and Privacy-Preserving Federated Deep Learning", 《ICC 2019-2019 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS(ICC)》 * |
NHATHAI PHAN ET AL.: "Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning", 《2017 IEEE INTERNATIONAL CONFERENCE ON DATA MINING(ICDM)》 * |
REZA SHOKRI ET AL.: "Privacy-Preserving Deep Learning", 《PROCEEDINGS OF THE 22ND ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY》 * |
SEBASTIAN BACH ET AL.: "On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation", 《PLOS ONE》 * |
SEBASTIAN LAPUSCHKIN ET AL.: "The LRP Toolbox for Artificial Neural Networks", 《JOURNAL OF MACHINE LEARNING RESEARCH》 * |
THEO RYFFEL ET AL.: "A generic framework for privacy preserving deep learning", 《MACHINE LEARNING》 * |
SONG LEI ET AL.: "Research progress on machine learning security and privacy protection", 《Chinese Journal of Network and Information Security》 * |
MAO DIANHUI ET AL.: "Deep differential privacy protection method based on DCGAN feedback", 《Journal of Beijing University of Technology》 * |
Cited By (72)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111079977A (en) * | 2019-11-18 | 2020-04-28 | 中国矿业大学 | SVD-based heterogeneous federated learning method for tracking mine electromagnetic radiation trends |
CN111222646A (en) * | 2019-12-11 | 2020-06-02 | 深圳逻辑汇科技有限公司 | Design method, device and storage medium for a federated learning mechanism |
CN111046433A (en) * | 2019-12-13 | 2020-04-21 | 支付宝(杭州)信息技术有限公司 | Model training method based on federated learning |
CN111125779A (en) * | 2019-12-17 | 2020-05-08 | 山东浪潮人工智能研究院有限公司 | Blockchain-based federated learning method and device |
CN111143878A (en) * | 2019-12-20 | 2020-05-12 | 支付宝(杭州)信息技术有限公司 | Method and system for model training based on private data |
CN111079022B (en) * | 2019-12-20 | 2023-10-03 | 深圳前海微众银行股份有限公司 | Personalized recommendation method, device, equipment and medium based on federated learning |
CN111079022A (en) * | 2019-12-20 | 2020-04-28 | 深圳前海微众银行股份有限公司 | Personalized recommendation method, device, equipment and medium based on federated learning |
CN111091199A (en) * | 2019-12-20 | 2020-05-01 | 哈尔滨工业大学(深圳) | Federated learning method and device based on differential privacy, and storage medium |
CN111190487A (en) * | 2019-12-30 | 2020-05-22 | 中国科学院计算技术研究所 | Method for establishing data analysis model |
CN111209478A (en) * | 2020-01-03 | 2020-05-29 | 京东数字科技控股有限公司 | Task pushing method and device, storage medium and electronic equipment |
CN111241580A (en) * | 2020-01-09 | 2020-06-05 | 广州大学 | Trusted execution environment-based federated learning method |
CN111241580B (en) * | 2020-01-09 | 2022-08-09 | 广州大学 | Trusted execution environment-based federated learning method |
CN111241582A (en) * | 2020-01-10 | 2020-06-05 | 鹏城实验室 | Data privacy protection method and device and computer readable storage medium |
CN111241582B (en) * | 2020-01-10 | 2022-06-10 | 鹏城实验室 | Data privacy protection method and device and computer readable storage medium |
CN113191479A (en) * | 2020-01-14 | 2021-07-30 | 华为技术有限公司 | Method, system, node and storage medium for joint learning |
CN111245610A (en) * | 2020-01-19 | 2020-06-05 | 浙江工商大学 | Data privacy protection deep learning method based on NTRU homomorphic encryption |
CN111245610B (en) * | 2020-01-19 | 2022-04-19 | 浙江工商大学 | Data privacy protection deep learning method based on NTRU homomorphic encryption |
CN111310932A (en) * | 2020-02-10 | 2020-06-19 | 深圳前海微众银行股份有限公司 | Method, device and equipment for optimizing horizontal federated learning system and readable storage medium |
CN113312543A (en) * | 2020-02-27 | 2021-08-27 | 华为技术有限公司 | Personalized model training method based on joint learning, electronic equipment and medium |
CN111428881A (en) * | 2020-03-20 | 2020-07-17 | 深圳前海微众银行股份有限公司 | Recognition model training method, device, equipment and readable storage medium |
WO2021197388A1 (en) * | 2020-03-31 | 2021-10-07 | 深圳前海微众银行股份有限公司 | User indexing method in federated learning and federated learning device |
CN111581648A (en) * | 2020-04-06 | 2020-08-25 | 电子科技大学 | Privacy-preserving federated learning method for irregular users |
CN111581648B (en) * | 2020-04-06 | 2022-06-03 | 电子科技大学 | Privacy-preserving federated learning method for irregular users |
CN111177768A (en) * | 2020-04-10 | 2020-05-19 | 支付宝(杭州)信息技术有限公司 | Method and device for two-party joint training of a business prediction model with data privacy protection |
CN111177791A (en) * | 2020-04-10 | 2020-05-19 | 支付宝(杭州)信息技术有限公司 | Method and device for two-party joint training of a business prediction model with data privacy protection |
CN111177791B (en) * | 2020-04-10 | 2020-07-17 | 支付宝(杭州)信息技术有限公司 | Method and device for two-party joint training of a business prediction model with data privacy protection |
CN111581663B (en) * | 2020-04-30 | 2022-05-03 | 电子科技大学 | Privacy-preserving federated deep learning method for irregular users |
CN111581663A (en) * | 2020-04-30 | 2020-08-25 | 电子科技大学 | Privacy-preserving federated deep learning method for irregular users |
WO2021244035A1 (en) * | 2020-06-03 | 2021-12-09 | Huawei Technologies Co., Ltd. | Methods and apparatuses for defense against adversarial attacks on federated learning systems |
US11651292B2 (en) | 2020-06-03 | 2023-05-16 | Huawei Technologies Co., Ltd. | Methods and apparatuses for defense against adversarial attacks on federated learning systems |
US11755691B2 (en) | 2020-07-06 | 2023-09-12 | Beijing Bytedance Network Technology Co., Ltd. | Data protection method and apparatus, and server and medium |
CN111783142A (en) * | 2020-07-06 | 2020-10-16 | 北京字节跳动网络技术有限公司 | Data protection method, device, server and medium |
CN111985650A (en) * | 2020-07-10 | 2020-11-24 | 华中科技大学 | Activity recognition model and system considering both universality and individuation |
CN112101403A (en) * | 2020-07-24 | 2020-12-18 | 西安电子科技大学 | Classification method and system based on a federated few-shot network model, and electronic equipment |
CN112101403B (en) * | 2020-07-24 | 2023-12-15 | 西安电子科技大学 | Classification method and system based on a federated few-shot network model, and electronic equipment |
CN111935168A (en) * | 2020-08-19 | 2020-11-13 | 四川大学 | Intrusion detection model building method for industrial cyber-physical systems |
WO2022048143A1 (en) * | 2020-09-04 | 2022-03-10 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Differential privacy-based federated voiceprint recognition method |
CN114257386A (en) * | 2020-09-10 | 2022-03-29 | 华为技术有限公司 | Training method, system, equipment and storage medium for detection model |
CN112329940A (en) * | 2020-11-02 | 2021-02-05 | 北京邮电大学 | Personalized model training method and system combining federal learning and user portrait |
WO2022111639A1 (en) * | 2020-11-30 | 2022-06-02 | 华为技术有限公司 | Federated learning method and apparatus, device, system, and computer-readable storage medium |
CN112600697A (en) * | 2020-12-07 | 2021-04-02 | 中山大学 | QoS prediction method and system based on federal learning, client and server |
CN112507312A (en) * | 2020-12-08 | 2021-03-16 | 电子科技大学 | Digital fingerprint-based verification and tracking method in deep learning system |
CN112507312B (en) * | 2020-12-08 | 2022-10-14 | 电子科技大学 | Digital fingerprint-based verification and tracking method in deep learning system |
CN112487481B (en) * | 2020-12-09 | 2022-06-10 | 重庆邮电大学 | Verifiable multi-party k-means federated learning method with privacy protection |
CN112487481A (en) * | 2020-12-09 | 2021-03-12 | 重庆邮电大学 | Verifiable multi-party k-means federated learning method with privacy protection |
CN112487479B (en) * | 2020-12-10 | 2023-10-13 | 支付宝(杭州)信息技术有限公司 | Method for training privacy protection model, privacy protection method and device |
CN112611080A (en) * | 2020-12-10 | 2021-04-06 | 浙江大学 | Intelligent air conditioner control system and method based on federal learning |
CN112487479A (en) * | 2020-12-10 | 2021-03-12 | 支付宝(杭州)信息技术有限公司 | Method for training privacy protection model, privacy protection method and device |
CN112487482A (en) * | 2020-12-11 | 2021-03-12 | 广西师范大学 | Deep learning differential privacy protection method of self-adaptive cutting threshold |
CN112487482B (en) * | 2020-12-11 | 2022-04-08 | 广西师范大学 | Deep learning differential privacy protection method of self-adaptive cutting threshold |
CN112668044B (en) * | 2020-12-21 | 2022-04-12 | 中国科学院信息工程研究所 | Privacy protection method and device for federated learning |
CN112668044A (en) * | 2020-12-21 | 2021-04-16 | 中国科学院信息工程研究所 | Privacy protection method and device for federated learning |
CN112765559A (en) * | 2020-12-29 | 2021-05-07 | 平安科技(深圳)有限公司 | Method and device for processing model parameters in federal learning process and related equipment |
CN112910624A (en) * | 2021-01-14 | 2021-06-04 | 东北大学 | Ciphertext prediction method based on homomorphic encryption |
CN112910624B (en) * | 2021-01-14 | 2022-05-10 | 东北大学 | Ciphertext prediction method based on homomorphic encryption |
CN112949865A (en) * | 2021-03-18 | 2021-06-11 | 之江实验室 | Sigma-protocol-based federated learning contribution degree evaluation method |
CN112949865B (en) * | 2021-03-18 | 2022-10-28 | 之江实验室 | Sigma-protocol-based federated learning contribution degree evaluation method |
CN113222211B (en) * | 2021-03-31 | 2023-12-12 | 中国科学技术大学先进技术研究院 | Method and system for predicting pollutant emission factors of multi-region diesel vehicle |
CN113222211A (en) * | 2021-03-31 | 2021-08-06 | 中国科学技术大学先进技术研究院 | Multi-region diesel vehicle pollutant emission factor prediction method and system |
CN112799708B (en) * | 2021-04-07 | 2021-07-13 | 支付宝(杭州)信息技术有限公司 | Method and system for jointly updating business model |
CN112799708A (en) * | 2021-04-07 | 2021-05-14 | 支付宝(杭州)信息技术有限公司 | Method and system for jointly updating business model |
CN113434873A (en) * | 2021-06-01 | 2021-09-24 | 内蒙古大学 | Federal learning privacy protection method based on homomorphic encryption |
CN113268772B (en) * | 2021-06-08 | 2022-12-20 | 北京邮电大学 | Joint learning security aggregation method and device based on differential privacy |
CN113268772A (en) * | 2021-06-08 | 2021-08-17 | 北京邮电大学 | Joint learning security aggregation method and device based on differential privacy |
CN113902122A (en) * | 2021-08-26 | 2022-01-07 | 杭州城市大脑有限公司 | Federated model collaborative training method and device, computer equipment and storage medium |
CN113836322A (en) * | 2021-09-27 | 2021-12-24 | 平安科技(深圳)有限公司 | Article duplicate checking method and device, electronic equipment and storage medium |
WO2023082787A1 (en) * | 2021-11-10 | 2023-05-19 | 新智我来网络科技有限公司 | Method and apparatus for determining contribution degree of participant in federated learning, and federated learning training method and apparatus |
CN114548373A (en) * | 2022-02-17 | 2022-05-27 | 河北师范大学 | Differential privacy deep learning method based on feature region segmentation |
CN114548373B (en) * | 2022-02-17 | 2024-03-26 | 河北师范大学 | Differential privacy deep learning method based on feature region segmentation |
CN114912624A (en) * | 2022-04-12 | 2022-08-16 | 支付宝(杭州)信息技术有限公司 | Vertical federated learning method and device for a business model |
CN114463601A (en) * | 2022-04-12 | 2022-05-10 | 北京云恒科技研究院有限公司 | Big data-based target identification data processing system |
CN114463601B (en) * | 2022-04-12 | 2022-08-05 | 北京云恒科技研究院有限公司 | Big data-based target identification data processing system |
Also Published As
Publication number | Publication date |
---|---|
CN110443063B (en) | 2023-03-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110443063A (en) | Adaptive privacy-preserving federated deep learning method | |
Thapa et al. | Splitfed: When federated learning meets split learning | |
Singh et al. | Layer-specific adaptive learning rates for deep networks | |
Paquet et al. | One-class collaborative filtering with random graphs | |
CN114357067B (en) | Personalized federated meta-learning method for data heterogeneity | |
Tang et al. | Automatic sparse connectivity learning for neural networks | |
CN110334757A (en) | Privacy-preserving clustering method for big data analysis, and computer storage medium | |
Imtiaz et al. | Synthetic and private smart health care data generation using GANs | |
CN108763954A (en) | Multidimensional Gaussian differential privacy protection method for linear regression models, and information security system | |
Wang et al. | DNN-DP: Differential privacy enabled deep neural network learning framework for sensitive crowdsourcing data | |
CN106709566A (en) | Deep learning-based missing data imputation method | |
CN109933720A (en) | A dynamic recommendation method based on adaptive evolution of user interests | |
CN116340996A (en) | Privacy-preserving model fine-tuning method and risk control method | |
Su et al. | Nonlinear statistical learning with truncated gaussian graphical models | |
CN116629376A (en) | Federated learning aggregation method and system based on data-free distillation | |
Wang et al. | Federated semi-supervised learning with class distribution mismatch | |
Braun et al. | Convergence rates for shallow neural networks learned by gradient descent | |
Zhao et al. | A pruning method of refining recursive reduced least squares support vector regression | |
Zeng et al. | Fedpia: Parameter importance-based optimized federated learning to efficiently process non-iid data on consumer electronic devices | |
Lai et al. | Stochastic approximation: from statistical origin to big-data, multidisciplinary applications | |
Wan et al. | Online frank-wolfe with arbitrary delays | |
CN112101555A (en) | Method and device for multi-party joint model training | |
CN117574421A (en) | Federated data analysis system and method based on dynamic gradient clipping | |
Miyajima et al. | A proposal of profit sharing method for secure multiparty computation | |
Tao et al. | Communication efficient federated learning via channel-wise dynamic pruning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||