CN115115066A - Contrastive learning-based federated learning personalization method

Contrastive learning-based federated learning personalization method

Info

Publication number
CN115115066A
Authority
CN
China
Prior art keywords
model
client
local
training
learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210956833.XA
Other languages
Chinese (zh)
Inventor
陈晋音
刘涛
李荣昌
李明俊
宣琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT
Priority to CN202210956833.XA
Publication of CN115115066A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a contrastive learning-based federated learning personalization method that achieves personalization by balancing a global model trained on the whole dataset against a local model trained on a local subset. Local updates are corrected through a consistency loss between the representations learned by the current local model and those learned by the global model, thereby personalizing federated learning. The method simultaneously computes the representation of the global model, the representation of a model trained only on local data, and the representation of the local model currently being trained, and then normalizes the distances among the three representations through a personalization parameter; that is, contrastive learning is performed at the model level, so that the global model is corrected to adapt to the tasks on different clients. The method can adapt to different client tasks and meets the need for personalized customization in federated learning.

Description

Contrastive learning-based federated learning personalization method
Technical Field
The invention belongs to the field of federated learning model personalization, and particularly relates to a contrastive learning-based federated learning personalization method.
Background
Federated learning arose from the problem of data islands: its goal is to jointly train on data that are generated by many remote devices or local clients and cannot be shared because of privacy concerns. Federated learning allows multiple parties to jointly learn a machine learning model without exchanging their own local data. In each round of federated training, the parties' updated local models are transmitted to the server, which aggregates them to update the global model. Raw data are never exchanged during the learning process. Federated learning has become an important field of machine learning and has attracted much research interest; it is also applied in many domains such as medical imaging and smart home devices.
Since federated learning focuses on learning from all client data to obtain a high-quality global model, it cannot capture the individual information of every client. For a client with higher data quality and a larger contribution, the global model can adapt well to that client's task, but for a client with a smaller contribution, the global model does not necessarily adapt well. When each client updates its local model, its local objective may differ considerably from the global objective. Federated learning therefore faces heterogeneous data distributions in real-world deployments, and the trained global model is not necessarily suitable for all clients participating in training.
A key challenge of federated learning is handling the heterogeneity of the local data distributions across parties. The global model obtained by federated training often ignores the individual information of low-contribution clients, which lowers its generalization ability. A simple and effective way to address this challenge is to personalize on the device or the model: for the broadcast global model, the client performs corresponding personalized training so that the final model can adapt to its task.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a contrastive learning-based federated learning personalization method that ensures the global model can be corrected according to each client's data distribution.
The purpose of the invention is realized by the following technical scheme: a contrastive learning-based federated learning personalization method, comprising the following steps:
(1) training an independent model locally:
(1.1) the local client trains an independent model on the data it holds, wherein the independent model contains the data information of a single client only and adapts to that client's tasks;
(1.2) building a contrastive learning network, and using it to extract a representation from the independent model;
(2) the client participates in federated learning training:
(2.1) the client starts normal federated learning training and computes a local update from the server-issued global model and its local data;
(2.2) after training finishes, each client uploads its locally updated local model to the server;
(2.3) the server aggregates to obtain a new global model and broadcasts it globally; after receiving the new global model, the client extracts a global representation from it;
(3) the client performs personalized correction training:
(3.1) after receiving the latest global model, the client starts personalized correction training and obtains three representations in the training stage: the independent model representation, the global model representation, and the representation of the local model being updated;
(3.2) computing a model consistency loss from the three representations obtained in step (3.1), and updating the local model together with the training loss;
(3.3) repeating steps (3.1)-(3.2) for iterative correction training, finally obtaining the personalized local model.
Further, step (1.1) comprises:
There are N clients, denoted $P_1, \dots, P_N$. Client $P_i$ holds a local dataset $D_i$ for local individual training. The training objective of the independent model is:

$$\arg\min_{w_i^{ind}} L_i\left(w_i^{ind}\right) = \mathbb{E}_{(x,y)\sim D_i}\left[\ell_i\left(w_i^{ind}; x, y\right)\right]$$

where $L_i\left(w_i^{ind}\right)$ is $P_i$'s empirical loss; $w_i^{ind}$ denotes client $P_i$'s independent model weights; $x$ denotes a sample of $D_i$, and $y$ denotes the label corresponding to $x$; $\ell_i$ denotes client $P_i$'s local loss function; $\mathbb{E}$ denotes expectation.
Further, step (1.2) comprises:
The independent model weights are $w_i^{ind}$; let $R_{w_i^{ind}}(\cdot)$ denote the network before the independent model's output layer. Feeding the trained independent model weights into the contrastive learning network yields the representation of the independent model, $z_{ind} = R_{w_i^{ind}}(x)$.
Further, step (2.1) comprises:
For the local client, the training objective of the federated learning local model is:

$$\arg\min_{w_i} L_{i,t}(w_i) = \mathbb{E}_{(x,y)\sim D_i}\left[\ell_{i,t}(w_i; x, y)\right]$$

where $L_{i,t}(w_i)$ is $P_i$'s local empirical loss in round $t$; $w_i$ denotes client $P_i$'s local model weights; $x$ denotes a sample of $D_i$, and $y$ denotes the label corresponding to $x$; $\ell_{i,t}$ denotes client $P_i$'s local loss function in round $t$; $\mathbb{E}$ denotes expectation.
Further, step (2.3) comprises:
The server receives the clients' model updates and forms a new global model with the FedAvg aggregation algorithm; FedAvg takes the average of the clients' local model updates as the global model update, with each client weighted by its number of training examples. Using $D = \cup_{i\in[N]} D_i$ to denote the full federated training dataset, the training objective of the round-$t$ global model is:

$$L_{g,t}(w_g) = \sum_{i=1}^{N} \frac{|D_i|}{|D|}\, L_{i,t}(w_g)$$

where $L_{i,t}$ denotes $P_i$'s empirical loss in round $t$; $L_{g,t}$ denotes the loss of the round-$t$ global model; $w_g$ is the global model weight; $|\cdot|$ denotes set cardinality.

After aggregation, the new global model $w_g^t$ is obtained; the server then broadcasts $w_g^t$ to each client. From the latest global model, the client computes the representation of the global model, $z_{glob} = R_{w_g^t}(x)$.
Further, step (3.1) comprises:
The loss of the personalized correction training consists of two parts. The first part is the supervised loss function $\ell_{sup}(w_i; x, y)$; the second part is the computation of a model-contrastive loss function, denoted $\ell_{con}$. For each input $x$ and the local model $w_i$, the representation extracted from the independent model is $z_{ind} = R_{w_i^{ind}}(x)$, the representation extracted from the global model is $z_{glob} = R_{w_g^t}(x)$, and the representation extracted from the local model being trained is $z = R_{w_i}(x)$. The model-contrastive loss function is defined as:

$$\ell_{con} = -\log \frac{\exp\left(\mathrm{sim}(z, z_{glob})/\tau\right)}{\exp\left(\mathrm{sim}(z, z_{glob})/\tau\right) + \exp\left(\mathrm{sim}(z, z_{ind})/\tau\right)}$$

where $\mathrm{sim}(\cdot,\cdot)$ denotes cosine similarity and $\tau$ is a temperature parameter.
Further, in step (3.1):
The server decides whether a client performs personalized correction training according to whether it has received that client's personalization application. After receiving a client's personalization application, the server determines that the client needs personalized correction training, and the training then begins. If no client personalization application is received, the method jumps to step (2.1) for the next round of federated learning training.
Further, step (3.2) comprises:
The objective of the updated local model is:

$$\arg\min_{w_i} \mathbb{E}_{(x,y)\sim D_i}\left[\ell_{sup}(w_i; x, y) + \mu\, \ell_{con}\left(w_i; w_i^{ind}, w_g^t, x\right)\right]$$

where $\mu$ is a personalization hyper-parameter, set by the client, that controls the weight of the model-contrastive loss; $w_g$ denotes the global model weights, $w_i$ the local model weights, and $w_i^{ind}$ the independent model weights; $x$ denotes a sample of $D_i$, and $y$ denotes the label corresponding to $x$.
The invention has the following beneficial effects:
(1) the correction training is independent of normal federated training, and the personalization process does not affect the global model, i.e., it does not affect the convergence of the global model;
(2) by computing representations through model-level contrastive learning, the method maps features efficiently;
(3) the invention can control the degree of model personalization through the consistency loss and supports fine-grained customization.
Drawings
Fig. 1 is a schematic diagram of the contrastive learning-based federated learning personalization method of the present invention.
FIG. 2 is a detailed flowchart of the contrastive learning-based federated learning personalization method of the present invention.
Detailed Description
The following detailed description of embodiments of the invention is provided in connection with the accompanying drawings.
The invention achieves personalization by balancing a global model trained on the entire dataset against a local model trained on a local subset. Based on this viewpoint, the contrastive learning-based federated learning personalization method corrects the local update through a consistency loss between the representations learned by the current local model and those learned by the global model, thereby personalizing federated learning. Specifically, the method simultaneously computes the representation of the global model, the representation of the model obtained by training on local data alone, and the representation of the local model currently being trained, and then normalizes the distances among the three representations through a personalization parameter; that is, it performs contrastive learning at the model level so as to correct the global model to adapt to the tasks on different clients.
As shown in fig. 1, the present invention specifically includes the following steps:
(1) An independent model is trained locally.
Each client in federated learning holds different training data, which is stored locally and cannot be transmitted. Before federated training begins, the invention requires each client to train an independent model on its local data, to serve as one of the subsequent personalization references. Due to data heterogeneity among clients, a client may lack data and be unable to train a good independent model. This does not affect the subsequent personalized customization, however, because the goal of personalization is to adapt the global model to a specific task, and the independent model only plays a guiding role.
Thus, step (1) comprises the following sub-steps:
(1.1) The local client trains an independent model on the data it holds; the independent model weights are denoted $w_i^{ind}$. The independent model contains the data information of a single client only and adapts to that client's tasks.
Assume there are N clients in federated learning, denoted $P_1, \dots, P_N$. Client $P_i$ holds a local dataset $D_i$ for local individual training. The training objective of the independent model is:

$$\arg\min_{w_i^{ind}} L_i\left(w_i^{ind}\right) = \mathbb{E}_{(x,y)\sim D_i}\left[\ell_i\left(w_i^{ind}; x, y\right)\right]$$

where $L_i\left(w_i^{ind}\right)$ is $P_i$'s empirical loss, $w_i^{ind}$ denotes client $P_i$'s independent model weights, $x$ denotes a sample of $D_i$ and $y$ the corresponding label, $\ell_i$ denotes client $P_i$'s conventional local loss function, and $\mathbb{E}$ denotes expectation.
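For illustration, the following is a minimal sketch of this independent-training step, assuming a PyTorch classifier; the function name, the SGD optimizer, and the hyper-parameter defaults are illustrative assumptions rather than details fixed by the invention.

```python
import torch
import torch.nn as nn

def train_independent_model(model: nn.Module, loader, epochs: int = 10, lr: float = 0.01):
    """Minimize the empirical loss E_{(x,y)~D_i}[ l_i(w_i^ind; x, y) ] on one client's local data."""
    criterion = nn.CrossEntropyLoss()                       # l_i: the conventional local loss
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:                                 # (x, y) drawn from the local dataset D_i
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()
    return model                                            # trained weights w_i^ind
```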
(1.2) A contrastive learning network is built and used to extract the representation from the independent model.
Similar to the typical contrastive learning framework SimCLR, the contrastive learning network has three components: a base encoder, a neural-network projection head, and an output layer. The base encoder extracts a representation vector from the input, and the projection head maps the representation vector into a space of fixed dimension; finally, the output layer produces a prediction value for each class. As above, assume the independent model weights are $w_i^{ind}$; let $F_{w_i^{ind}}(\cdot)$ denote the entire independent-model network and $R_{w_i^{ind}}(\cdot)$ the network before the output layer. Feeding the trained independent model weights into the contrastive learning network yields the representation of the independent model, $z_{ind} = R_{w_i^{ind}}(x)$, where the input $x$ is a training sample.
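A plausible layout of such a network is sketched below; the class name, the two-layer MLP projection head, and the dimension arguments are assumptions in the style of SimCLR, not structures fixed by the invention.

```python
import torch.nn as nn

class ContrastiveNet(nn.Module):
    """Base encoder + projection head + output layer."""
    def __init__(self, encoder: nn.Module, feat_dim: int, proj_dim: int, n_classes: int):
        super().__init__()
        self.encoder = encoder                                # extracts a representation vector
        self.projector = nn.Sequential(                       # maps it into a fixed-dimension space
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, proj_dim))
        self.output = nn.Linear(proj_dim, n_classes)          # per-class prediction values

    def represent(self, x):
        """R_w(x): the network before the output layer; z = represent(x)."""
        return self.projector(self.encoder(x))

    def forward(self, x):
        """F_w(x): the entire network."""
        return self.output(self.represent(x))
```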
(2) The client participates in federated learning training.
After the independent model has been trained separately and its representation obtained locally, the formal federated training procedure starts. To guarantee that the global model can converge, the training of the global model is kept separate from the correction training of the personalized model, and the two tasks are maintained side by side. That is, before performing personalized training, a client must send a report to the server; after receiving the personalization application, the server marks that client to ensure it does not participate in global model aggregation. Likewise, the client performs correction training only after the current round of federated training has finished, so client personalization does not affect the operation of the global model. Step (2) therefore involves only the normal federated learning training procedure.
Thus, step (2) comprises the following sub-steps:
and (2.1) the client starts normal federal learning training and obtains local update according to the global model and the local data issued by the server.
The server initializes the global model and broadcasts the global model to the various clients. The client receives the global model, updates the local model with local data and performs one or more iterations using a random gradient descent.
For the local client, the training objective of the federated learning local model is:

$$\arg\min_{w_i} L_{i,t}(w_i) = \mathbb{E}_{(x,y)\sim D_i}\left[\ell_{i,t}(w_i; x, y)\right]$$

where $L_{i,t}(w_i)$ is $P_i$'s local empirical loss in round $t$, $w_i$ denotes client $P_i$'s local model weights, $x$ denotes a sample of $D_i$ and $y$ the corresponding label, $\ell_{i,t}$ denotes client $P_i$'s conventional local loss function in round $t$, and $\mathbb{E}$ denotes expectation. After local training finishes, the client uploads its local model update to the server.
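As a hedged sketch of this round-$t$ client step (illustrative names, PyTorch assumed): the client copies the broadcast global weights and runs local stochastic gradient descent on its own data.

```python
import copy
import torch

def local_update(global_model, loader, epochs: int = 1, lr: float = 0.01):
    """Start w_i from the issued global weights, then minimize l_{i,t} on D_i."""
    model = copy.deepcopy(global_model)               # w_i initialized from w_g
    criterion = torch.nn.CrossEntropyLoss()           # l_{i,t}
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:                           # local data D_i
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()
    return model.state_dict()                         # the local update uploaded to the server
```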
(2.2) After training finishes, each client uploads its local model update to the server, ensuring that the server receives local updates that carry no personalization.
(2.3) The server aggregates to obtain a new global model and broadcasts it globally; after receiving the new global model, the client extracts the global representation from it.
The server receives the clients' model updates and forms a new global model with the FedAvg aggregation algorithm. FedAvg takes the average of the clients' local model updates as the global model update, with each client weighted by its number of training examples. Using $D = \cup_{i\in[N]} D_i$ to denote the full federated training dataset, the training objective of the round-$t$ global model can be expressed as:

$$L_{g,t}(w_g) = \sum_{i=1}^{N} \frac{|D_i|}{|D|}\, L_{i,t}(w_g)$$

where $L_{i,t}$ denotes $P_i$'s empirical loss in round $t$, $L_{g,t}$ the loss of the round-$t$ global model, $w_g$ the global model weights, and $|\cdot|$ set cardinality.

After aggregation, the new global model $w_g^t$ is obtained, and the server broadcasts $w_g^t$ to each client. From the latest global model, the client can compute the representation of the global model, $z_{glob} = R_{w_g^t}(x)$, where $x$ is a training sample.
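The aggregation itself might look like the sketch below, assuming the client updates arrive as PyTorch state_dicts; the $|D_i|/|D|$ weighting follows the FedAvg objective above, and the function name is illustrative.

```python
def fedavg(updates, sizes):
    """Form the new global model as the example-count-weighted average of client updates."""
    total = float(sum(sizes))                              # |D| = sum of |D_i|
    new_global = {}
    for key in updates[0]:
        new_global[key] = sum(
            (sizes[i] / total) * updates[i][key].float()   # weight client i by |D_i| / |D|
            for i in range(len(updates)))
    return new_global
```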
(3) The client performs personalized correction training.
After obtaining the latest global model, the client performs personalized correction training on its local model. The goal of the personalized correction training is to correct the distances between the local model representation and the global and independent model representations. For example, if the client wants the local model to adapt to the local task, the correction training aims to reduce the distance between the representation learned by the local model and that learned by the global model, while increasing the distance between the representation learned by the local model and that learned by the independent model.
Thus, as shown in fig. 2, step (3) comprises the following sub-steps:
and (3.1) the client receives the latest global model and sends an application for personalized training to the server. The server determines whether the corresponding client performs the personalized correction training according to whether the personalized application of the client is received or not, judges that the client needs to perform the personalized correction training after the server receives the personalized application of the client, and then starts the personalized correction training. And if the client-side personalized application is not received, skipping to the step (2.1) to perform the next round of federal learning training. Wherein, three characterizations are respectively obtained in the personalized modification training stage: an independent model representation, a global model representation, and a local model representation being updated.
The loss of the correction training consists of two parts. The first part is the conventional supervised loss function $\ell_{sup}(w_i; x, y)$; the second part is the computation of a model-contrastive loss function, denoted $\ell_{con}$. For each input $x$ and the local model $w_i$, the representation extracted from the independent model is $z_{ind} = R_{w_i^{ind}}(x)$, the representation extracted from the global model is $z_{glob} = R_{w_g^t}(x)$, and the representation extracted from the local model being trained is $z = R_{w_i}(x)$. The model-contrastive loss function can be defined as:

$$\ell_{con} = -\log \frac{\exp\left(\mathrm{sim}(z, z_{glob})/\tau\right)}{\exp\left(\mathrm{sim}(z, z_{glob})/\tau\right) + \exp\left(\mathrm{sim}(z, z_{ind})/\tau\right)}$$

where $\mathrm{sim}(\cdot,\cdot)$ denotes cosine similarity and $\tau$ is a temperature parameter.
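A sketch of this model-contrastive loss is given below. It uses the standard InfoNCE-style layout with cosine similarity and a temperature τ, treating z_glob as the positive pair and z_ind as the negative pair, which matches the correction goal described above; the exact functional form and default constants are assumptions, not a verbatim reproduction of the patent's formula.

```python
import torch
import torch.nn.functional as F

def model_contrastive_loss(z, z_glob, z_ind, tau: float = 0.5):
    """Pull the local representation z toward z_glob and push it away from z_ind."""
    pos = torch.exp(F.cosine_similarity(z, z_glob, dim=-1) / tau)   # positive pair: (z, z_glob)
    neg = torch.exp(F.cosine_similarity(z, z_ind, dim=-1) / tau)    # negative pair: (z, z_ind)
    return -torch.log(pos / (pos + neg)).mean()
```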
(3.2) The model consistency loss is computed from the three representations and, together with the conventional training loss, used to update the local model. The goal is to minimize:

$$\arg\min_{w_i} \mathbb{E}_{(x,y)\sim D_i}\left[\ell_{sup}(w_i; x, y) + \mu\, \ell_{con}\left(w_i; w_i^{ind}, w_g^t, x\right)\right]$$

where $\mu$ is a personalization hyper-parameter, set by the client, that controls the weight of the model-contrastive loss and hence the degree of personalization of the local model; $w_g$ denotes the global model weights, $w_i$ the local model weights, and $w_i^{ind}$ the independent model weights; $x$ denotes a sample of $D_i$, and $y$ denotes the label corresponding to $x$.
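Putting the two loss terms together, one epoch of personalized correction training can be sketched as follows; it reuses the represent() method and model_contrastive_loss() from the sketches above, and all names and defaults remain illustrative assumptions.

```python
import torch

def correction_epoch(local_model, global_model, ind_model, loader,
                     mu: float = 1.0, lr: float = 0.01, tau: float = 0.5):
    """Minimize l_sup + mu * l_con over the client's local data."""
    criterion = torch.nn.CrossEntropyLoss()                    # l_sup
    optimizer = torch.optim.SGD(local_model.parameters(), lr=lr)
    global_model.eval()
    ind_model.eval()
    local_model.train()
    for x, y in loader:
        with torch.no_grad():                                  # frozen reference representations
            z_glob = global_model.represent(x)                 # from the broadcast global model
            z_ind = ind_model.represent(x)                     # from the independent model
        z = local_model.represent(x)                           # from the local model being updated
        loss = (criterion(local_model.output(z), y)            # supervised part
                + mu * model_contrastive_loss(z, z_glob, z_ind, tau))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return local_model
```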
(3.3) Steps (3.1)-(3.2) are repeated for multiple rounds of iterative correction training until the update to the total correction-training loss is less than 0.0001 or a preset number of iterations is reached, finally yielding the personalized local model for federated learning.
It should be noted that the correction training in the invention is separate from the federated learning training: the global model is updated exactly as in conventional federated learning, so the correction training does not interfere with normal federated training. For the local model, whether correction training is performed does not affect the update of the global model; it only affects the degree of personalization of the local model.
Compared with the federated averaging (FedAvg) algorithm, the test accuracy of the model corrected by the method improves by 3.1% (±0.4%) on average in experiments on the ten-class CIFAR-10 dataset, with μ set to 5%.
The objective of federated learning is to learn the features of all data participating in training, but because of data heterogeneity, some of the more hidden data features on a client are often ignored, so the global model performs differently on each client. To solve this problem, the invention discloses a contrastive learning-based federated learning personalization method that, using contrastive learning, adds a correction-training step after normal local training. The representations of the independent model, the global model, and the updated local model are computed to correct the local model, so that the local model can adapt to different client tasks and the need for personalized customization in federated learning is met.
The embodiments described in this specification merely illustrate implementations of the inventive concept; the scope of the invention should not be considered limited to the specific forms set forth in the embodiments, but also covers equivalents that may occur to those skilled in the art upon consideration of the inventive concept.

Claims (8)

1. A contrastive learning-based federated learning personalization method, characterized by comprising the following steps:
(1) training an independent model locally:
(1.1) the local client trains an independent model on the data it holds, wherein the independent model contains the data information of a single client only and adapts to that client's tasks;
(1.2) building a contrastive learning network and using it to extract a representation from the independent model;
(2) the client participates in federated learning training:
(2.1) the client starts normal federated learning training and computes a local update from the server-issued global model and its local data;
(2.2) after training finishes, each client uploads its locally updated local model to the server;
(2.3) the server aggregates to obtain a new global model and broadcasts it globally; after receiving the new global model, the client extracts the global representation from it;
(3) the client performs personalized correction training:
(3.1) after receiving the latest global model, the client starts personalized correction training and obtains three representations in the training stage: the independent model representation, the global model representation, and the representation of the local model being updated;
(3.2) computing a model consistency loss from the three representations obtained in step (3.1) and updating the local model together with the training loss;
(3.3) repeating steps (3.1)-(3.2) for iterative correction training, finally obtaining the personalized local model.
2. The contrastive learning-based federated learning personalization method of claim 1, wherein step (1.1) comprises:
there are N clients, denoted $P_1, \dots, P_N$; client $P_i$ holds a local dataset $D_i$ for local individual training, and the training objective of the independent model is:

$$\arg\min_{w_i^{ind}} L_i\left(w_i^{ind}\right) = \mathbb{E}_{(x,y)\sim D_i}\left[\ell_i\left(w_i^{ind}; x, y\right)\right]$$

wherein $L_i\left(w_i^{ind}\right)$ is $P_i$'s empirical loss; $w_i^{ind}$ denotes client $P_i$'s independent model weights; $x$ denotes a sample of $D_i$, and $y$ denotes the label corresponding to $x$; $\ell_i$ denotes client $P_i$'s local loss function; $\mathbb{E}$ denotes expectation.
3. The contrastive learning-based federated learning personalization method of claim 1, wherein step (1.2) comprises:
the independent model weights are $w_i^{ind}$; $R_{w_i^{ind}}(\cdot)$ denotes the network before the independent model's output layer; feeding the trained independent model weights into the contrastive learning network yields the representation of the independent model, $z_{ind} = R_{w_i^{ind}}(x)$.
4. The contrastive learning-based federated learning personalization method of claim 1, wherein step (2.1) comprises:
for the local client, the training objective of the federated learning local model is:

$$\arg\min_{w_i} L_{i,t}(w_i) = \mathbb{E}_{(x,y)\sim D_i}\left[\ell_{i,t}(w_i; x, y)\right]$$

wherein $L_{i,t}(w_i)$ is $P_i$'s local empirical loss in round $t$; $w_i$ denotes client $P_i$'s local model weights; $x$ denotes a sample of $D_i$, and $y$ denotes the label corresponding to $x$; $\ell_{i,t}$ denotes client $P_i$'s local loss function in round $t$; $\mathbb{E}$ denotes expectation.
5. The contrastive learning-based federated learning personalization method of claim 1, wherein step (2.3) comprises:
the server receives the clients' model updates and forms a new global model with the FedAvg aggregation algorithm; FedAvg takes the average of the clients' local model updates as the global model update, with each client weighted by its number of training examples; using $D = \cup_{i\in[N]} D_i$ to denote the full federated training dataset, the training objective of the round-$t$ global model is expressed as:

$$L_{g,t}(w_g) = \sum_{i=1}^{N} \frac{|D_i|}{|D|}\, L_{i,t}(w_g)$$

wherein $L_{i,t}$ denotes $P_i$'s empirical loss in round $t$; $L_{g,t}$ denotes the loss of the round-$t$ global model; $w_g$ is the global model weight; $|\cdot|$ denotes set cardinality;
after aggregation, the new global model $w_g^t$ is obtained; the server then broadcasts $w_g^t$ to each client; the client computes the representation of the global model, $z_{glob} = R_{w_g^t}(x)$, from the latest global model.
6. The contrastive learning-based federated learning personalization method of claim 1, wherein step (3.1) comprises:
the loss of the personalized correction training consists of two parts, the first part being the supervised loss function $\ell_{sup}(w_i; x, y)$ and the second part being the computation of a model-contrastive loss function denoted $\ell_{con}$; for each input $x$ and the local model $w_i$, the representation extracted from the independent model is $z_{ind} = R_{w_i^{ind}}(x)$, the representation extracted from the global model is $z_{glob} = R_{w_g^t}(x)$, and the representation extracted from the local model being trained is $z = R_{w_i}(x)$; the model-contrastive loss function is then defined as:

$$\ell_{con} = -\log \frac{\exp\left(\mathrm{sim}(z, z_{glob})/\tau\right)}{\exp\left(\mathrm{sim}(z, z_{glob})/\tau\right) + \exp\left(\mathrm{sim}(z, z_{ind})/\tau\right)}$$

wherein $\mathrm{sim}(\cdot,\cdot)$ denotes cosine similarity and $\tau$ is a temperature parameter.
7. The contrastive learning-based federated learning personalization method of claim 1, wherein in step (3.1):
the server decides whether a client performs personalized correction training according to whether it has received that client's personalization application; after receiving a client's personalization application, the server determines that the client needs personalized correction training, and the training then begins; if no client personalization application is received, the method jumps to step (2.1) for the next round of federated learning training.
8. The contrastive learning-based federated learning personalization method of claim 1, wherein step (3.2) comprises:
the objective of the updated local model is:

$$\arg\min_{w_i} \mathbb{E}_{(x,y)\sim D_i}\left[\ell_{sup}(w_i; x, y) + \mu\, \ell_{con}\left(w_i; w_i^{ind}, w_g^t, x\right)\right]$$

wherein $\mu$ is a personalization hyper-parameter, set by the client, that controls the weight of the model-contrastive loss; $w_g$ denotes the global model weights, $w_i$ the local model weights, and $w_i^{ind}$ the independent model weights; $x$ denotes a sample of $D_i$, and $y$ denotes the label corresponding to $x$.
CN202210956833.XA 2022-08-10 2022-08-10 Contrastive learning-based federated learning personalization method Pending CN115115066A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210956833.XA CN115115066A (en) Contrastive learning-based federated learning personalization method


Publications (1)

Publication Number Publication Date
CN115115066A true CN115115066A (en) 2022-09-27

Family

ID=83335386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210956833.XA Pending CN115115066A (en) Contrastive learning-based federated learning personalization method

Country Status (1)

Country Link
CN (1) CN115115066A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344131A (en) * 2021-06-30 2021-09-03 商汤国际私人有限公司 Network training method and device, electronic equipment and storage medium
CN113762524A (en) * 2020-06-02 2021-12-07 三星电子株式会社 Federal learning system and method and client device
US20220114500A1 (en) * 2021-12-22 2022-04-14 Intel Corporation Mechanism for poison detection in a federated learning system
CN114529012A (en) * 2022-02-18 2022-05-24 厦门大学 Double-stage-based personalized federal learning method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination