CN112668726B - Personalized federated learning method with efficient communication and privacy protection - Google Patents

Personalized federated learning method with efficient communication and privacy protection

Info

Publication number
CN112668726B
CN112668726B CN202011568563.2A
Authority
CN
China
Prior art keywords
federated learning
personalized
client
central server
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011568563.2A
Other languages
Chinese (zh)
Other versions
CN112668726A (en)
Inventor
梅媛
肖丹阳
吴维刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat-sen University
Original Assignee
Sun Yat-sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202011568563.2A priority Critical patent/CN112668726B/en
Publication of CN112668726A publication Critical patent/CN112668726A/en
Application granted granted Critical
Publication of CN112668726B publication Critical patent/CN112668726B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a personalized federated learning method with efficient communication and privacy protection, comprising the following steps. S1: pull the current global model $W_t$ from the central server and, on all clients, initialize the local model of each client as $w_t^i$; S2: perform E rounds of local training to obtain a new local model $w_{t+1}^i$; S3: send the model parameters of $w_{t+1}^i$ to the central server; S4: aggregate the received model parameters at the central server to obtain the aggregation result $W_{t+1}$; S5: update the local models of all clients to $w_{t+1}^i$ according to $W_{t+1}$; S6: judge whether the preset number of iterations has been completed; if so, the personalized federated learning is completed; if not, let t = t+1 and return to step S2 for the next round of personalized federated learning. The invention solves the problem that existing personalized federated learning methods do not achieve a balance between the personalized local model on each client and the global model.

Description

Personalized federated learning method with efficient communication and privacy protection
Technical Field
The invention relates to the technical field of federated learning, and in particular to a personalized federated learning method with efficient communication and privacy protection.
Background
Machine learning has achieved tremendous success in fields such as computer vision, speech recognition, and natural language processing. To complete large-scale machine learning training tasks over massive amounts of data, distributed machine learning has been proposed and has attracted wide attention. Federated learning is a novel distributed machine learning approach. In federated learning, the central server aggregates model updates from the various clients using the federated averaging (FedAvg) algorithm, so all parties to the federated training obtain a single unified global model after training completes. Most existing federated learning algorithms focus on improving the quality of this global model. However, in a federated environment the data held by the clients is often non-independent and non-identically distributed (non-IID), so the trained global model struggles to fit every client; that is, the global model may well perform worse on a given client than a model trained on that client alone. In that case, joining the federated training brings the client no benefit, which makes personalized federated learning necessary. However, existing personalized federated learning methods only emphasize the importance of the personalized client model and do not achieve a balance between the personalized local model on each client and the global model, so the loss in global-model quality is significant.
In the prior art, Chinese patent publication CN111600707A (published 2020-08-28) discloses a decentralized federated machine-learning method under privacy protection, which addresses the vulnerability of existing federated learning to DoS attacks and to single points of failure at the central parameter server; it combines the PVSS verifiable secret-sharing protocol to protect the participants' model parameters against model-inversion attacks and data-membership-inference attacks. Moreover, parameter aggregation is carried out by different participants in each training task, and when an aggregator is untrusted or attacked the system can automatically recover, improving the robustness of federated learning while preserving its performance and effectively improving the security of the federated training environment. However, it still does not achieve a balance between the personalized local model on each client and the global model.
Disclosure of Invention
The invention provides a personalized federated learning method with efficient communication and privacy protection, overcoming the technical defect that existing personalized federated learning methods do not achieve a balance between the personalized local model on each client and the global model.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a personalized federated learning method with efficient communication and privacy protection, comprising the following steps:
S1: pull the current global model $W_t$ from the central server and, on all clients, initialize the local model of each client as $w_t^i$;
wherein i is the client index and t is the current round number of the personalized federated learning;
S2: perform E rounds of local training on client i to obtain a new local model $w_{t+1}^i$;
S3: based on the variable-frequency layer-wise parameter combination update scheme, send the model parameters of $w_{t+1}^i$ to the central server;
S4: aggregate the received model parameters at the central server to obtain the aggregation result $W_{t+1}$;
S5: based on the variable-frequency layer-wise parameter combination update scheme, update the local models of all clients to $w_{t+1}^i$ according to $W_{t+1}$;
S6: judge whether the preset number of iterations has been completed;
if so, the personalized federated learning is completed;
if not, let t = t+1 and return to step S2 for the next round of personalized federated learning.
Preferably, in step S2, only k clients are selected to perform local training in each round of the personalized federated learning; wherein the total number of clients is K and the proportion of k among the K clients is C.
Preferably, in step S2, the data on client i is divided into $\lceil N_i / B \rceil$ batches according to a preset batch size B, and these batches form the set $\mathcal{B}_i$; for each $b \in \mathcal{B}_i$, local training is performed according to the following formula to obtain the local model $w_{t+1}^i$:

$w_{t+1}^i \leftarrow w_t^i - \eta \nabla l(w_t^i; b)$

wherein $N_i$ is the amount of data on client i, b is an element of the set $\mathcal{B}_i$, $w_t^i$ denotes the model parameters on client i before local training, $\eta$ is the learning rate, and l is the loss function on the client.
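For illustration, a minimal Python sketch of this local update follows; the helper loss_grad and the flat-vector parameter layout are assumptions made for the example, not names used by the patent:

```python
import numpy as np

def local_training(w, data, E, B, eta, loss_grad):
    """E rounds of mini-batch SGD on client i's own data.

    w         -- flat parameter vector initialized from the server (w_t^i)
    data      -- the N_i local samples of client i
    B         -- preset batch size; yields ceil(N_i / B) batches per round
    eta       -- learning rate
    loss_grad -- hypothetical helper returning the gradient of the loss l
                 on one batch, i.e. grad l(w; b)
    """
    w = w.copy()
    n = len(data)
    for _ in range(E):
        order = np.random.permutation(n)            # reshuffle each round
        for start in range(0, n, B):
            batch = [data[j] for j in order[start:start + B]]
            w = w - eta * loss_grad(w, batch)       # w <- w - eta * grad l(w; b)
    return w                                        # trained local model w_{t+1}^i
```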
Preferably, when the training object of the personalized federated learning is a deep neural network model, the deep neural network model is regarded as a combination of a global layer and a personalized layer;
the shallow network portion of the deep neural network model is defined as the global layer and is responsible for extracting the global features of the client data; the deep network portion of the deep neural network model is defined as the personalized layer and is responsible for capturing the personalized features of the client data.
Preferably, in step S3, the variable-frequency layer-wise parameter combination update scheme is specifically as follows:
if the training is currently in the early stage of the personalized federated learning, i.e. $0 < t \le T \cdot p$, and $t \,\%\, f_{earlier} \ne 0$, or currently in the late stage, i.e. $T \cdot p < t \le T$, and $t \,\%\, f_{later} \ne 0$, only the model parameters of the shallow (global) layers are sent to the central server;
if the training is currently in the early stage, i.e. $0 < t \le T \cdot p$, and $t \,\%\, f_{earlier} = 0$, or currently in the late stage, i.e. $T \cdot p < t \le T$, and $t \,\%\, f_{later} = 0$, the model parameters of all layers are sent to the central server;
wherein t is the current round number of the personalized federated learning, T is the total number of rounds, p is the proportion of rounds belonging to the early stage, $f_{earlier}$ is the period with which all-layer model parameters are sent to the central server in the early stage, and $f_{later}$ is the corresponding period in the late stage.
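A minimal sketch of this schedule as a predicate on the round number, assuming integer periods and that $T \cdot p$ falls on a round boundary:

```python
def send_all_layers(t, T, p, f_earlier, f_later):
    """True if round t is a full-aggregation round (all layers uploaded),
    False if only the shallow/global layer parameters are sent."""
    if 0 < t <= T * p:                 # early stage: 0 < t <= T*p
        return t % f_earlier == 0
    if T * p < t <= T:                 # late stage: T*p < t <= T
        return t % f_later == 0
    raise ValueError("round index t out of range (0, T]")
```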
Preferably, the method further comprises: adding Gaussian noise to the model parameters sent from the clients to the central server.
Preferably, on client i, only the model parameters of the shallow layers are sent to the central server according to the following formula, with Gaussian noise added:

$\hat{w}_{t+1}^i = M \odot (w_{t+1}^i + dp \cdot RN)$

wherein $\hat{w}_{t+1}^i$ denotes the model parameters sent from client i to the central server; M is a masking matrix that keeps the deep-layer parameters from participating in the aggregation; $dp \in (0, 1]$ controls the degree of influence of the noise; and $RN \sim N(0, \sigma^2)$, i.e. RN follows a normal distribution with mean 0 and variance $\sigma^2$.
Preferably, on client i, the model parameters of all layers are sent to the central server according to the following formula, with Gaussian noise added:

$\hat{w}_{t+1}^i = w_{t+1}^i + dp \cdot RN$

wherein $\hat{w}_{t+1}^i$ denotes the model parameters sent from client i to the central server; $dp \in (0, 1]$ controls the degree of influence of the noise; and $RN \sim N(0, \sigma^2)$, i.e. RN follows a normal distribution with mean 0 and variance $\sigma^2$.
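Both upload rules can be sketched together as below; representing M as a binary 0/1 vector over a flattened parameter layout, and applying the mask outside the noise, are assumptions of this example:

```python
import numpy as np

def noisy_upload(w, dp, sigma, shallow_mask=None):
    """Parameters a client sends to the central server, with Gaussian noise.

    w            -- flat local parameter vector w_{t+1}^i
    dp           -- noise influence factor, dp in (0, 1]
    sigma        -- standard deviation of RN ~ N(0, sigma^2)
    shallow_mask -- binary vector M (1 = shallow entry, 0 = deep entry);
                    None means an all-layer upload
    """
    noised = w + dp * np.random.normal(0.0, sigma, size=w.shape)
    if shallow_mask is None:
        return noised                      # all layers: w + dp * RN
    return shallow_mask * noised           # shallow only: M (.) (w + dp * RN)
```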
Preferably, in the personalized federated learning the client sends the model parameters of all layers to the server more frequently in the early stage than in the late stage, i.e. $f_{earlier} < f_{later}$.
Preferably, in step S4, in each round of the personalized federated learning, the model parameters received by the central server are aggregated according to the following formula to obtain the aggregation result $W_{t+1}$:

$W_{t+1} = \sum_{i=1}^{K \cdot C} \frac{N_i}{N} \hat{w}_{t+1}^i$

wherein K is the total number of clients, C is the proportion of clients participating in each round of the personalized federated learning, $N_i$ is the amount of data on client i, N is the total amount of data on all clients participating in the current round, and $\hat{w}_{t+1}^i$ denotes the model parameters sent to the central server.
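A sketch of this weighted aggregation, with the uploads of the sampled clients passed as (parameters, N_i) pairs (an assumed calling convention):

```python
def aggregate(updates):
    """W_{t+1} = sum_i (N_i / N) * w_hat_i over the clients of this round.

    updates -- list of (w_hat_i, N_i) pairs, one per participating client
    """
    N = sum(n_i for _, n_i in updates)                  # total data this round
    return sum((n_i / N) * w for w, n_i in updates)     # weighted average
```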
Compared with the prior art, the technical solution of the invention has the following beneficial effects:
the invention provides a personalized federated learning method with efficient communication and privacy protection that performs personalized federated learning based on a variable-frequency layer-wise parameter combination update scheme, which effectively balances the global model and the personalized models; at the same time, it reduces the parameter traffic of personalized federated learning, achieving lightweight and efficient communication.
Drawings
FIG. 1 is a schematic diagram of the implementation steps of the technical scheme of the present invention;
FIG. 2 is a schematic diagram of an image classification task using a deep neural network model in the present invention;
FIG. 3 is a schematic diagram of the variable-frequency scheme of the personalized federated learning in the present invention;
FIG. 4 is a schematic diagram of the layer-wise parameters of the personalized federated learning in the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
for the purpose of better illustrating the embodiments, certain elements of the drawings may be omitted, enlarged or reduced and do not represent the actual product dimensions;
it will be appreciated by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical scheme of the invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in FIG. 1, a personalized federated learning method with efficient communication and privacy protection comprises the following steps:
S1: pull the current global model $W_t$ from the central server and, on all clients, initialize the local model of each client as $w_t^i$;
wherein i is the client index and t is the current round number of the personalized federated learning;
S2: perform E rounds of local training on client i to obtain a new local model $w_{t+1}^i$;
S3: based on the variable-frequency layer-wise parameter combination update scheme, send the model parameters of $w_{t+1}^i$ to the central server;
S4: aggregate the received model parameters at the central server to obtain the aggregation result $W_{t+1}$;
S5: based on the variable-frequency layer-wise parameter combination update scheme, update the local models of all clients to $w_{t+1}^i$ according to $W_{t+1}$;
S6: judge whether the preset number of iterations has been completed;
if so, the personalized federated learning is completed;
if not, let t = t+1 and return to step S2 for the next round of personalized federated learning.
More specifically, in step S2, only k clients are selected to perform local training in each round of the personalized federated learning; wherein the total number of clients is K and the proportion of k among the K clients is C.
In a specific implementation, considering the bandwidth and latency limits of the communication between the clients and the central server, a fraction C of the K clients is selected each time to form the set $S_t$ that participates in the current round t of the personalized federated learning, as sketched below. Each selected client must complete the preset number E of rounds of local training.
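A sketch of the client sampling, together with one full round assembled from the illustrative helpers defined in this description (sample_clients, local_training, send_all_layers, noisy_upload, and aggregate are names of this example, not of the patent):

```python
import numpy as np

def sample_clients(K, C, rng=None):
    """Form the set S_t: k = max(1, int(C * K)) of the K clients,
    drawn without replacement."""
    rng = rng or np.random.default_rng()
    k = max(1, int(C * K))
    return rng.choice(K, size=k, replace=False)

# One round t, tying the helpers together (all names are illustrative):
# S_t = sample_clients(K, C)
# updates = []
# for i in S_t:
#     w_i = local_training(w_local[i], data[i], E, B, eta, loss_grad)
#     mask = None if send_all_layers(t, T, p, f_e, f_l) else shallow_mask
#     updates.append((noisy_upload(w_i, dp, sigma, mask), len(data[i])))
# W_next = aggregate(updates)
```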
More specifically, in step S2, the data on client i is divided into $\lceil N_i / B \rceil$ batches according to a preset batch size B, and these batches form the set $\mathcal{B}_i$; for each $b \in \mathcal{B}_i$, local training is performed according to the following formula to obtain the local model $w_{t+1}^i$:

$w_{t+1}^i \leftarrow w_t^i - \eta \nabla l(w_t^i; b)$

wherein $N_i$ is the amount of data on client i, b is an element of the set $\mathcal{B}_i$, $w_t^i$ denotes the model parameters on client i before local training, $\eta$ is the learning rate, and l is the loss function on the client.
More specifically, when the training object of the personalized federated learning is a deep neural network model, consider an image classification task as an example: the common, generic features contained in an image are typically captured by the shallow network portion of a deep neural network, while the more advanced, specific features are identified by the deep network portion. As shown in FIG. 2, the shallow layers near the input picture tend to extract low-order features, while the deep layers near the output extract high-order features. In personalized federated learning, by the definition of the global model and the client local model, the global model mainly focuses on the general low-order features of the client data, while the local model mainly focuses on the specific high-order features. The deep neural network model used for personalized federated learning is therefore regarded as a combination of a global part and a personalized part, where the shallow network portion of the deep neural network model is defined as the global layer and the deep network portion is defined as the personalized layer.
More specifically, in step S3, the variable-frequency layer-wise parameter combination update scheme is specifically as follows:
if the training is currently in the early stage of the personalized federated learning, i.e. $0 < t \le T \cdot p$, and $t \,\%\, f_{earlier} \ne 0$, or currently in the late stage, i.e. $T \cdot p < t \le T$, and $t \,\%\, f_{later} \ne 0$, only the model parameters of the shallow layers are sent to the central server;
if the training is currently in the early stage, i.e. $0 < t \le T \cdot p$, and $t \,\%\, f_{earlier} = 0$, or currently in the late stage, i.e. $T \cdot p < t \le T$, and $t \,\%\, f_{later} = 0$, the model parameters of all layers are sent to the central server;
wherein t is the current round number of the personalized federated learning, T is the total number of rounds, p is the proportion of rounds belonging to the early stage, $f_{earlier}$ is the period with which all-layer model parameters are sent to the central server in the early stage, and $f_{later}$ is the corresponding period in the late stage.
In a specific implementation, as shown in FIGS. 3-4, the personalized federated learning uses the variable-frequency layer-wise parameter scheme: only when $t \,\%\, f_{earlier} = 0$ in the early stage or $t \,\%\, f_{later} = 0$ in the late stage does the client send all layer parameters of the model to the central server. In all other rounds the client masks the deep-layer parameters and sends only the shallow-layer parameters, which significantly reduces the traffic and effectively lowers the communication cost. In FIG. 3, the all-layer update period is set to 4 in the early stage and to 8 in the late stage; that is, in the early stage an aggregate average of all layer parameters is performed every 4 rounds, and in the late stage every 8 rounds.
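With the illustrative send_all_layers predicate sketched earlier and, say, T = 40 total rounds with p = 0.5 (both hypothetical settings), the FIG. 3 periods give the following full-aggregation rounds:

```python
# Hypothetical run: 40 rounds, first half 'early', periods 4 and 8 as in FIG. 3.
T, p, f_earlier, f_later = 40, 0.5, 4, 8
full_rounds = [t for t in range(1, T + 1)
               if send_all_layers(t, T, p, f_earlier, f_later)]
print(full_rounds)   # -> [4, 8, 12, 16, 20, 24, 32, 40]
```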
More specifically, the method further comprises: adding Gaussian noise to the model parameters sent from the clients to the central server.
In a specific implementation, based on differential privacy, Gaussian noise is added layer-wise to the model parameters the client sends to the central server, masking the true parameters and further protecting the privacy of the client.
More specifically, on client i, only the model parameters of the shallow layers are sent to the central server according to the following formula, with Gaussian noise added:

$\hat{w}_{t+1}^i = M \odot (w_{t+1}^i + dp \cdot RN)$

wherein $\hat{w}_{t+1}^i$ denotes the model parameters sent from client i to the central server; M is a masking matrix that keeps the deep-layer parameters from participating in the aggregation; $dp \in (0, 1]$ controls the degree of influence of the noise; and $RN \sim N(0, \sigma^2)$, i.e. RN follows a normal distribution with mean 0 and variance $\sigma^2$.
More specifically, on client i, the model parameters of all layers are sent to the central server according to the following formula, with Gaussian noise added:

$\hat{w}_{t+1}^i = w_{t+1}^i + dp \cdot RN$

wherein $\hat{w}_{t+1}^i$ denotes the model parameters sent from client i to the central server; $dp \in (0, 1]$ controls the degree of influence of the noise; and $RN \sim N(0, \sigma^2)$, i.e. RN follows a normal distribution with mean 0 and variance $\sigma^2$.
More specifically, in the personalized federated learning the client sends the model parameters of all layers to the server more frequently in the early stage than in the late stage, i.e. $f_{earlier} < f_{later}$.
In a specific implementation, following a staged (curriculum-like) learning strategy, the focus in the early stage of the personalized federated learning is on extracting the global features of the client data, while in the late stage the focus is on the personalized local model on each client. Therefore, in this embodiment the aggregation frequency of all-layer parameters in the late stage is lower than in the early stage, giving the local model on each client room to personalize and thereby balancing the effect of the global model and the personalized models in the personalized federated learning.
More specifically, in step S4, in each round of the personalized federated learning, the model parameters received by the central server are aggregated according to the following formula to obtain the aggregation result $W_{t+1}$:

$W_{t+1} = \sum_{i=1}^{K \cdot C} \frac{N_i}{N} \hat{w}_{t+1}^i$

wherein K is the total number of clients, C is the proportion of clients participating in each round of the personalized federated learning, $N_i$ is the amount of data on client i, N is the total amount of data on all clients participating in the current round, and $\hat{w}_{t+1}^i$ denotes the model parameters sent to the central server.
In a specific implementation, when $0 < t \le T \cdot p$ and $t \,\%\, f_{earlier} \ne 0$, or $T \cdot p < t \le T$ and $t \,\%\, f_{later} \ne 0$, each client sends only the shallow-layer parameters of the deep neural network model for aggregation; hence, after the central server performs the aggregation, each client only needs to update the shallow-layer parameters of its local model in step S5, while the deep-layer parameters remain unchanged, i.e. the parameters of the personalized layer on a client depend only on that client's own data. When $0 < t \le T \cdot p$ and $t \,\%\, f_{earlier} = 0$, or $T \cdot p < t \le T$ and $t \,\%\, f_{later} = 0$, the client sends all parameters of the network model to the central server, which performs a periodic aggregate average on them according to the preset periods ($f_{earlier}$ and $f_{later}$), and each client then updates the parameters of all layers in step S5, as sketched below.
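A sketch of this step-S5 client-side refresh, reusing the binary shallow_mask convention assumed in the upload sketch:

```python
def update_local(w_local, W_next, shallow_mask, full_round):
    """Step S5: refresh a client's local model from the aggregate W_{t+1}.

    On a full-aggregation round the client adopts all layers of W_{t+1};
    otherwise it takes only the shallow/global entries and keeps its own
    deep/personalized entries unchanged.
    """
    if full_round:
        return W_next.copy()
    return shallow_mask * W_next + (1 - shallow_mask) * w_local
```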
It is to be understood that the above examples of the present invention are provided by way of illustration only and do not limit the embodiments of the present invention. Other variations or modifications may be made by those of ordinary skill in the art in light of the above description. It is neither necessary nor possible to enumerate all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall fall within the protection scope of the claims of the invention.

Claims (7)

1. A personalized federated learning method with efficient communication and privacy protection, comprising the following steps:
S1: pull the current global model $W_t$ from the central server and, on all clients, initialize the local model of each client as $w_t^i$;
wherein i is the client index and t is the current round number of the personalized federated learning;
S2: perform E rounds of local training on client i to obtain a new local model $w_{t+1}^i$;
S3: based on the variable-frequency layer-wise parameter combination update scheme, send the model parameters of $w_{t+1}^i$ to the central server; the variable-frequency layer-wise parameter combination update scheme is specifically as follows:
if the training is currently in the early stage of the personalized federated learning, i.e. $0 < t \le T \cdot p$, and $t \,\%\, f_{earlier} \ne 0$, or currently in the late stage, i.e. $T \cdot p < t \le T$, and $t \,\%\, f_{later} \ne 0$, only the model parameters of the shallow layers are sent to the central server;
if the training is currently in the early stage, i.e. $0 < t \le T \cdot p$, and $t \,\%\, f_{earlier} = 0$, or currently in the late stage, i.e. $T \cdot p < t \le T$, and $t \,\%\, f_{later} = 0$, the model parameters of all layers are sent to the central server;
wherein t is the current round number of the personalized federated learning, T is the total number of rounds, p is the proportion of rounds belonging to the early stage, $f_{earlier}$ is the period with which all-layer model parameters are sent to the central server in the early stage, and $f_{later}$ is the corresponding period in the late stage;
Gaussian noise is added to the model parameters sent from the client to the central server; only the model parameters of the shallow layers are sent to the central server by client i according to the following formula, with Gaussian noise added:

$\hat{w}_{t+1}^i = M \odot (w_{t+1}^i + dp \cdot RN)$

wherein $\hat{w}_{t+1}^i$ denotes the model parameters sent from client i to the central server; M is a masking matrix that keeps the deep-layer parameters from participating in the aggregation; $dp \in (0, 1]$ controls the degree of influence of the noise; and $RN \sim N(0, \sigma^2)$, i.e. RN follows a normal distribution with mean 0 and variance $\sigma^2$;
S4: aggregate the received model parameters at the central server to obtain the aggregation result $W_{t+1}$;
S5: based on the variable-frequency layer-wise parameter combination update scheme, update the local models of all clients to $w_{t+1}^i$ according to $W_{t+1}$;
S6: judge whether the preset number of iterations has been completed;
if so, the personalized federated learning is completed;
if not, let t = t+1 and return to step S2 for the next round of personalized federated learning.
2. The personalized federated learning method with efficient communication and privacy protection according to claim 1, wherein in step S2 only k clients are selected to perform local training in each round of the personalized federated learning; wherein the total number of clients is K and the proportion of k among the K clients is C.
3. The personalized federated learning method with efficient communication and privacy protection according to claim 1, wherein in step S2 the data on client i is divided into $\lceil N_i / B \rceil$ batches according to a preset batch size B, and these batches form the set $\mathcal{B}_i$; for each $b \in \mathcal{B}_i$, local training is performed according to the following formula to obtain the local model $w_{t+1}^i$:

$w_{t+1}^i \leftarrow w_t^i - \eta \nabla l(w_t^i; b)$

wherein $N_i$ is the amount of data on client i, b is an element of the set $\mathcal{B}_i$, $w_t^i$ denotes the model parameters on client i before local training, $\eta$ is the learning rate, and l is the loss function on the client.
4. The personalized federated learning method with efficient communication and privacy protection according to claim 1, wherein when the training object of the personalized federated learning is a deep neural network model, the deep neural network model is regarded as a combination of a global layer and a personalized layer;
the shallow network portion of the deep neural network model is defined as the global layer and is responsible for extracting the global features of the client data; the deep network portion of the deep neural network model is defined as the personalized layer and is responsible for capturing the personalized features of the client data.
5. The personalized federated learning method with efficient communication and privacy protection according to claim 1, wherein on client i the model parameters of all layers are sent to the central server according to the following formula, with Gaussian noise added:

$\hat{w}_{t+1}^i = w_{t+1}^i + dp \cdot RN$

wherein $\hat{w}_{t+1}^i$ denotes the model parameters sent from client i to the central server; $dp \in (0, 1]$ controls the degree of influence of the noise; and $RN \sim N(0, \sigma^2)$, i.e. RN follows a normal distribution with mean 0 and variance $\sigma^2$.
6. The personalized federated learning method with efficient communication and privacy protection according to claim 1, wherein the client sends the model parameters of all layers to the server more frequently in the early stage than in the late stage, i.e. $f_{earlier} < f_{later}$.
7. The personalized federated learning method with efficient communication and privacy protection according to claim 1, wherein, in step S4, in each round of the personalized federated learning, the model parameters received by the central server are aggregated according to the following formula to obtain the aggregation result $W_{t+1}$:

$W_{t+1} = \sum_{i=1}^{K \cdot C} \frac{N_i}{N} \hat{w}_{t+1}^i$

wherein K is the total number of clients, C is the proportion of clients participating in each round of the personalized federated learning, $N_i$ is the amount of data on client i, N is the total amount of data on all clients participating in the current round, and $\hat{w}_{t+1}^i$ denotes the model parameters sent to the central server.
CN202011568563.2A 2020-12-25 2020-12-25 Personalized federated learning method with efficient communication and privacy protection Active CN112668726B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011568563.2A CN112668726B (en) 2020-12-25 2020-12-25 Personalized federated learning method with efficient communication and privacy protection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011568563.2A CN112668726B (en) 2020-12-25 2020-12-25 Personalized federated learning method with efficient communication and privacy protection

Publications (2)

Publication Number Publication Date
CN112668726A CN112668726A (en) 2021-04-16
CN112668726B true CN112668726B (en) 2023-07-11

Family

ID=75409693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011568563.2A Active CN112668726B (en) Personalized federated learning method with efficient communication and privacy protection

Country Status (1)

Country Link
CN (1) CN112668726B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967812A (en) * 2021-04-20 2021-06-15 钟爱健康科技(广东)有限公司 Anti-theft attack medical diagnosis model protection method based on federal learning
CN113032835B (en) * 2021-04-21 2024-02-23 支付宝(杭州)信息技术有限公司 Model training method, system and device for privacy protection
CN113095513A (en) * 2021-04-25 2021-07-09 中山大学 Double-layer fair federal learning method, device and storage medium
CN113344221A (en) * 2021-05-10 2021-09-03 上海大学 Federal learning method and system based on neural network architecture search
CN113268920B (en) * 2021-05-11 2022-12-09 西安交通大学 Safe sharing method for sensing data of unmanned aerial vehicle cluster based on federal learning
CN113435604A (en) * 2021-06-16 2021-09-24 清华大学 Method and device for optimizing federated learning
CN113361618A (en) * 2021-06-17 2021-09-07 武汉卓尔信息科技有限公司 Industrial data joint modeling method and system based on federal learning
CN113516249B (en) * 2021-06-18 2023-04-07 重庆大学 Federal learning method, system, server and medium based on semi-asynchronization
CN113361694B (en) * 2021-06-30 2022-03-15 哈尔滨工业大学 Layered federated learning method and system applying differential privacy protection
CN113378243B (en) * 2021-07-14 2023-09-29 南京信息工程大学 Personalized federal learning method based on multi-head attention mechanism
CN113645197B (en) * 2021-07-20 2022-04-29 华中科技大学 Decentralized federal learning method, device and system
CN113656833A (en) * 2021-08-09 2021-11-16 浙江工业大学 Privacy stealing defense method based on evolutionary computation under vertical federal architecture
CN113642738B (en) * 2021-08-12 2023-09-01 上海大学 Multi-party safety cooperation machine learning method and system based on hierarchical network structure
CN116340959A (en) * 2021-12-17 2023-06-27 新智我来网络科技有限公司 Breakpoint privacy protection-oriented method, device, equipment and medium
CN114595831B (en) * 2022-03-01 2022-11-11 北京交通大学 Federal learning method integrating adaptive weight distribution and personalized differential privacy
CN114357526A (en) * 2022-03-15 2022-04-15 中电云数智科技有限公司 Differential privacy joint training method for medical diagnosis model for resisting inference attack
CN114492847B (en) * 2022-04-18 2022-06-24 奥罗科技(天津)有限公司 Efficient personalized federal learning system and method
CN114863499B (en) * 2022-06-30 2022-12-13 广州脉泽科技有限公司 Finger vein and palm vein identification method based on federal learning
CN116227621B (en) * 2022-12-29 2023-10-24 国网四川省电力公司电力科学研究院 Federal learning model training method based on power data

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1664819A (en) * 2004-03-02 2005-09-07 微软公司 Principles and methods for personalizing newsfeeds via an analysis of information dynamics
CN101256591A (en) * 2004-03-02 2008-09-03 微软公司 Principles and methods for personalizing newsfeeds via an analysis of information novelty and dynamics
CN101639852A (en) * 2009-09-08 2010-02-03 中国科学院地理科学与资源研究所 Method and system for sharing distributed geoscience data
CN110572253A (en) * 2019-09-16 2019-12-13 济南大学 Method and system for enhancing privacy of federated learning training data
CN111079977A (en) * 2019-11-18 2020-04-28 中国矿业大学 Heterogeneous federated learning mine electromagnetic radiation trend tracking method based on SVD algorithm
CN111611610A (en) * 2020-04-12 2020-09-01 西安电子科技大学 Federal learning information processing method, system, storage medium, program, and terminal
CN111553484A (en) * 2020-04-30 2020-08-18 同盾控股有限公司 Method, device and system for federal learning

Also Published As

Publication number Publication date
CN112668726A (en) 2021-04-16

Similar Documents

Publication Publication Date Title
CN112668726B (en) Personalized federated learning method with efficient communication and privacy protection
CN108921764B (en) Image steganography method and system based on generation countermeasure network
Zolotukhin et al. Increasing web service availability by detecting application-layer DDoS attacks in encrypted traffic
CN104113789B (en) On-line video abstraction generation method based on depth learning
CN113362160B (en) Federal learning method and device for credit card anti-fraud
CN113468521B (en) Data protection method for federal learning intrusion detection based on GAN
WO2021189364A1 (en) Method and device for generating adversarial image, equipment, and readable storage medium
CN109639479B (en) Network traffic data enhancement method and device based on generation countermeasure network
CN113179244B (en) Federal deep network behavior feature modeling method for industrial internet boundary safety
CN111625820A (en) Federal defense method based on AIoT-oriented security
CN110598982B (en) Active wind control method and system based on intelligent interaction
CN106295501A (en) The degree of depth based on lip movement study personal identification method
CN111681154A (en) Color image steganography distortion function design method based on generation countermeasure network
US20220318412A1 (en) Privacy-aware pruning in machine learning
CN111192206A (en) Method for improving image definition
CN111832729A (en) Distributed deep learning reasoning deployment method for protecting data privacy
CN112560059B (en) Vertical federal model stealing defense method based on neural pathway feature extraction
CN113076549B (en) Novel U-Net structure generator-based countermeasures network image steganography method
Seo et al. Communication-efficient and personalized federated lottery ticket learning
CN112435034A (en) Marketing arbitrage black product identification method based on multi-network graph aggregation
CN115907029B (en) Method and system for defending against federal learning poisoning attack
CN113194092B (en) Accurate malicious flow variety detection method
CN115310625A (en) Longitudinal federated learning reasoning attack defense method
CN112270233A (en) Mask classification method based on transfer learning and Mobilenet network
CN113837398A (en) Graph classification task poisoning attack method based on federal learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant