CN113554181A - Federated learning training model based on a batch-incremental mode - Google Patents

Federated learning training model based on a batch-incremental mode

Info

Publication number
CN113554181A
CN113554181A CN202110768514.1A CN202110768514A
Authority
CN
China
Prior art keywords
model
training
learning
local
batch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110768514.1A
Other languages
Chinese (zh)
Other versions
CN113554181B (en)
Inventor
胡凯
陆美霞
吴佳胜
李姚根
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN202110768514.1A
Publication of CN113554181A
Application granted
Publication of CN113554181B
Active legal status: Current
Anticipated expiration legal status

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a federated learning training model based on a batch-incremental mode. It belongs to the field of federated learning training and comprises the following operation steps: proposing and building a federated learning framework with incremental learning; noting that this framework retains the historical forgetting problem inherent in incremental learning; aiming at the historical forgetting problem, selecting a targeted loss function to optimize the model by constructing a local loss update; and then building a federated learning local incremental self-attention mechanism model, so that the memory of the federated learning framework with incremental learning during data training is strengthened and the accuracy of classification tasks in the federated learning training model is improved. The method not only adds the average of the losses of all current local models to the loss of the local model, which helps reduce the influence of the fast forgetting of incremental learning, but also applies a self-attention mechanism in the classical convolutional neural network; key information is retained through the attention mechanism, so that features are better extracted and selected and the incremental learning memory is strengthened.

Description

Federated learning training model based on a batch-incremental mode
Technical Field
The invention belongs to the field of federated learning training, and particularly relates to a federated learning training model based on a batch-incremental mode.
Background
Technology in the field of artificial intelligence is developing continuously, and robots are increasingly applied to solving complex problems in modern society. Accurate classification is considered a prerequisite for robots to carry out autonomous tasks, so various machine learning algorithms have been proposed. However, most of these algorithms work in a batch learning mode: once a batch of samples has been trained, the learning process terminates and the model cannot continue to learn. Moreover, if all data had to be learned again after a training interruption, a large amount of time and space would be consumed, so batch learning algorithms cannot meet the requirement. To address these issues, incremental learning algorithms for machine learning have emerged. However, an incremental learning algorithm interferes with old knowledge when acquiring new knowledge; to alleviate this problem, a self-attention mechanism is added to the network model to help strengthen the local model's memory of old knowledge.
However, in the traditional training method of machine learning, data is collected and transmitted to a central point, and old incremental-learning data is directly used at a central point equipped with a high-performance cluster to help process new data. This exposes information related to the data and does not consider the privacy of user data, so there are certain potential security risks and the data needs to be encrypted. If a framework existed that could solve the data leakage problem and at the same time give the average of the losses of all current local models to the local model, the influence of the fast forgetting of incremental learning would be reduced. An incremental self-attention mechanism is then added to the local model to help capture the correlation of local features, extract and select features effectively, improve the performance of the local model, strengthen the incremental learning memory, and improve the accuracy of classification tasks.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to provide a federated learning training model based on a batch-incremental mode; applying incremental learning to the framework allows a user's knowledge to be updated gradually, so that previous knowledge can be corrected and strengthened, the updated knowledge can adapt to newly arrived data, and the privacy and security of the user data are guaranteed while the whole data set does not have to be learned again.
The technical scheme is as follows: the invention relates to a federated learning training model based on a batch-incremental mode, which comprises the following specific operation steps:
(1) a federated learning framework with incremental learning is proposed and built, aiming to solve the time and space losses caused by retraining after a training interruption when a batch learning algorithm is used in a traditional federated learning framework;
the built federated learning framework with incremental learning, however, still retains the historical forgetting problem inherent in incremental learning;
(2) aiming at the historical forgetting problem in the federated learning framework, a targeted loss function is selected to optimize the model by constructing a local loss update;
(3) then, by building a federated learning local incremental self-attention mechanism model, the memory of the federated learning framework with incremental learning during data training is strengthened, and finally the accuracy of classification tasks in the federated learning training model is improved.
Further, in step (1), the specific operation steps for proposing and building the federated learning framework with incremental learning are as follows:
(1.1) before a client trains, check whether model parameters saved from the last training exist, and if they do, load those model parameters into the global model;
(1.2) the global model sends the aggregated federated model to each client;
(1.3) during local training, each client performs batch-incremental training on its client model, saves the model parameters every fixed number of training batches, and after training outputs the network model parameters obtained from the last fixed batch of training;
(1.4) after a client finishes training, the model parameters at that moment are sent to the aggregation server;
(1.5) the aggregation server aggregates the received model parameters by taking a weighted average, as shown in formula (1):

$$\omega_{t+1} = \sum_{k=1}^{K} \frac{n_k}{n}\,\omega_{t}^{k} \tag{1}$$

in formula (1), t denotes the round of the aggregation-averaging process, K denotes the number of participants, $n_k$ denotes the local data volume of the k-th participant, n denotes the total data volume of all participants, and $\omega_t^{k}$ denotes the parameters of the local model of the k-th participant at time t; the parameters comprise a weight parameter and a bias parameter;
the server then sends the aggregated model parameters $\omega_{t+1}$ to all participants.
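The weighted aggregation of formula (1) corresponds to the standard federated-averaging step. The following is a minimal sketch of that aggregation, assuming each participant reports its model parameters as a dictionary of tensors together with its local sample count n_k; the function name, data structures and use of PyTorch are illustrative assumptions rather than the patent's implementation.

```python
from typing import Dict, List
import torch

def aggregate_weighted_average(
    client_params: List[Dict[str, torch.Tensor]],  # omega_t^k for each participant k
    client_sizes: List[int],                        # n_k for each participant k
) -> Dict[str, torch.Tensor]:
    """Formula (1): omega_{t+1} = sum_k (n_k / n) * omega_t^k, applied to every
    parameter tensor (weights and biases alike)."""
    n = float(sum(client_sizes))
    aggregated: Dict[str, torch.Tensor] = {}
    for name in client_params[0]:
        aggregated[name] = sum(
            (n_k / n) * params[name]
            for params, n_k in zip(client_params, client_sizes)
        )
    return aggregated
```

The server would then broadcast the returned dictionary to all participants, as described in step (1.5).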
Further, in step (2), the specific operations for constructing the local loss update are as follows:
(2.1) during local training, first judge whether model parameters saved from the last training exist; if they do, the local loss is updated as shown in formula (2):

$$L_{\mathrm{local}}(x, y) = -\sum_{j} y_{j}\,\log p_{j}(x) \tag{2}$$

where x denotes a sample, y denotes its label, j denotes a variable traversing all labels, and $p_{j}(x)$ denotes the probability predicted by the local model for label j;
(2.2) during local training, if model parameters saved from the last training do not exist, the local loss is updated as shown in formula (3):

$$L_{\mathrm{new}} = b\,L_{\mathrm{local}} + (1-b)\,\frac{1}{Q}\sum_{q=1}^{Q} L_{q} \tag{3}$$

where $L_{\mathrm{local}}$ denotes the loss value obtained from the current local training, $\frac{1}{Q}\sum_{q=1}^{Q} L_{q}$ denotes the average of all current local losses, q denotes a variable traversing all current clients, and b is a weighting coefficient.
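A minimal sketch of the local loss update of step (2) is given below. It assumes that formula (2) is the usual cross-entropy over all labels and that formula (3) blends the current local loss with the average of all current local losses using the coefficient b (set to 0.5 in claim 4); the exact form of the blend, the function name and the use of PyTorch are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def local_loss_update(
    logits: torch.Tensor,      # model outputs for a batch
    labels: torch.Tensor,      # ground-truth labels
    peer_losses: list,         # current loss values reported by all clients (L_q)
    has_saved_params: bool,    # whether parameters from the last training exist
    b: float = 0.5,
) -> torch.Tensor:
    """Mirrors steps (2.1)/(2.2): with previously saved parameters, the plain
    cross-entropy of formula (2) is used; otherwise the current loss is blended
    with the average of all current local losses as in formula (3)."""
    ce = F.cross_entropy(logits, labels)             # formula (2)
    if has_saved_params:
        return ce
    avg_peer = sum(peer_losses) / len(peer_losses)   # (1/Q) * sum_q L_q
    return b * ce + (1.0 - b) * avg_peer             # formula (3)
```

How the per-client losses L_q reach each client is not specified in the patent; in this sketch they are simply passed in as a list.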
Further, in step (3), the specific operation steps for building the federated learning local incremental self-attention mechanism model are as follows:
(3.1) obtain the latest model update $\omega_t$ and the data set $D_k$ from the server; randomly divide the data set $D_k$ into batches of size B and put them into the network model for training;
(3.2) build the convolutional layers using the classical convolutional neural network (CNN), and add a self-attention mechanism locally to help strengthen the incremental learning memory. As data arrive incrementally, a weight is calculated according to the importance of a specific part of the input and its correlation with the output, a correlation score is assigned to the input elements, and noisy parts are ignored. Through the self-attention mechanism, key information is retained, features are better extracted and selected, the correlation of local features is continuously captured, and the performance of the local model is improved; specifically, as shown in formula (4):

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d_{k}}}\right)V \tag{4}$$

where Q denotes the query matrix, K and V denote the key and value matrices, and $\sqrt{d_{k}}$ denotes a scaling factor used to prevent the inner product of Q and K from becoming too large;
(3.3) update the model parameters in a local batch-incremental manner, saving the model parameters every fixed number of batches; then send the current local model parameters to the global model.
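Formula (4) is the standard scaled dot-product self-attention. The sketch below shows one way such an attention layer could be attached to a convolutional feature map, as suggested in step (3.2); the 1x1-convolution projections, the residual connection and the module name are illustrative design assumptions, not the patent's exact network structure.

```python
import torch
import torch.nn as nn

class ConvSelfAttention(nn.Module):
    """Scaled dot-product self-attention (formula (4)) over the spatial
    positions of a convolutional feature map."""

    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, kernel_size=1)  # query projection
        self.k = nn.Conv2d(channels, channels, kernel_size=1)  # key projection
        self.v = nn.Conv2d(channels, channels, kernel_size=1)  # value projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)                # (B, HW, C)
        k = self.k(x).flatten(2)                                # (B, C, HW)
        v = self.v(x).flatten(2).transpose(1, 2)                # (B, HW, C)
        scores = torch.softmax(q @ k / (c ** 0.5), dim=-1)      # softmax(QK^T / sqrt(d_k))
        out = (scores @ v).transpose(1, 2).reshape(b, c, h, w)  # re-assemble feature map
        return x + out  # residual connection (an assumption) keeps the original features
```

Used after a convolutional layer, the module assigns each spatial position a correlation score over all other positions, so that informative regions are emphasised and noisy parts are down-weighted.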
Beneficial effects: compared with the prior art, in which the traditional machine-learning training method collects data, transmits it to a central point and directly uses the old data of incremental learning at a central point equipped with a high-performance cluster to help process new data without considering the privacy of user data, the proposed federated learning training model based on a batch-incremental mode adds the average of the losses of all current local models to the loss of the local model, which reduces the influence of the fast forgetting of incremental learning; in addition, the method applies a self-attention mechanism to the classical convolutional neural network, retaining key information through the attention mechanism so that features are better extracted and selected and the incremental learning memory is strengthened.
Drawings
FIG. 1 is an operational schematic diagram of the federated learning framework proposed to build incremental learning in the present invention;
FIG. 2 is a flow chart of the operation of the present invention by building a local loss update;
FIG. 3 is a flow chart of the operation of the present invention by building a model of the federal learning local incremental adaptive attention mechanism;
fig. 4 is a general operational flow diagram of the present invention.
Detailed Description
The invention is further described below with reference to the drawings and specific embodiments. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application; however, the present application may be practiced in many ways different from those described herein, and those skilled in the art may make similar modifications without departing from the spirit of the present application, so the present application is not limited to the specific implementations disclosed below.
The invention relates to a federated learning training model based on a batch-incremental mode, which comprises the following specific operation steps:
(1) a federated learning framework with incremental learning is proposed and built, aiming to solve the time and space losses caused by retraining after a training interruption when a batch learning algorithm is used in a traditional federated learning framework;
the built federated learning framework with incremental learning, however, still retains the historical forgetting problem inherent in incremental learning;
(2) aiming at the historical forgetting problem in the federated learning framework, a targeted loss function is selected to optimize the model by constructing a local loss update;
(3) then, by building a federated learning local incremental self-attention mechanism model, the memory of the federated learning framework with incremental learning during data training is strengthened, and finally the accuracy of classification tasks in the federated learning training model is improved.
Further, as shown in fig. 1, in step (1), the specific operation steps for proposing and building the federated learning framework with incremental learning are as follows:
(1.1) before a client trains, check whether model parameters saved from the last training exist, and if they do, load those model parameters into the global model;
(1.2) the global model sends the aggregated federated model to each client;
(1.3) during local training, each client performs batch-incremental training on its client model, saves the model parameters every fixed number of training batches, and after training outputs the network model parameters obtained from the last fixed batch of training;
(1.4) after a client finishes training, the model parameters at that moment are sent to the aggregation server (a client-side code sketch covering steps (1.1)-(1.4) follows this list);
(1.5) the aggregation server aggregates the received model parameters by taking a weighted average, as shown in formula (1):

$$\omega_{t+1} = \sum_{k=1}^{K} \frac{n_k}{n}\,\omega_{t}^{k} \tag{1}$$

where t denotes the round of the aggregation-averaging process, K denotes the number of participants, $n_k$ denotes the local data volume of the k-th participant, n denotes the total data volume of all participants, and $\omega_t^{k}$ denotes the parameters (including weight parameters and bias parameters) of the local model of the k-th participant at time t; the server then sends the aggregated model parameters $\omega_{t+1}$ to all participants.
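Steps (1.1)-(1.4) describe the client side of the framework: resume from previously saved parameters if they exist, train batch-incrementally, save the parameters every fixed number of batches, and upload the final parameters to the aggregation server. A minimal sketch under those assumptions follows; the checkpoint path, the save interval and the helper names are illustrative, not part of the patent.

```python
import os
import torch

def client_update(model, optimizer, loss_fn, data_loader,
                  ckpt_path: str = "client_ckpt.pt", save_every: int = 10):
    """Steps (1.1)-(1.4): load saved parameters when present, train in a
    batch-incremental manner with periodic checkpoints, and return the final
    parameters for upload to the aggregation server."""
    if os.path.exists(ckpt_path):                      # step (1.1): resume if possible
        model.load_state_dict(torch.load(ckpt_path))
    model.train()
    for i, (x, y) in enumerate(data_loader):           # step (1.3): batch-incremental training
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        if (i + 1) % save_every == 0:                  # save parameters of a fixed batch
            torch.save(model.state_dict(), ckpt_path)
    torch.save(model.state_dict(), ckpt_path)          # parameters of the last fixed batch
    return model.state_dict()                          # step (1.4): sent to the server
```

The returned state dictionary corresponds to the network model parameters obtained from the last fixed batch of training.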
Further, as shown in fig. 2, in step (2), the specific operation steps for constructing the local loss update are as follows:
(2.1) during local training, first judge whether model parameters saved from the last training exist; if they do, the local loss is updated as shown in formula (2):

$$L_{\mathrm{local}}(x, y) = -\sum_{j} y_{j}\,\log p_{j}(x) \tag{2}$$

where x denotes a sample, y denotes its label, j denotes a variable traversing all labels, and $p_{j}(x)$ denotes the probability predicted by the local model for label j;
(2.2) during local training, if model parameters saved from the last training do not exist, the local loss is updated as shown in formula (3):

$$L_{\mathrm{new}} = b\,L_{\mathrm{local}} + (1-b)\,\frac{1}{Q}\sum_{q=1}^{Q} L_{q} \tag{3}$$

where $L_{\mathrm{local}}$ denotes the loss value obtained from the current local training, $\frac{1}{Q}\sum_{q=1}^{Q} L_{q}$ denotes the average of all current local losses, q denotes a variable traversing all current clients, and b is a weighting coefficient.
Further, as shown in fig. 3, in step (3), the specific operation steps for building the federated learning local incremental self-attention mechanism model are as follows:
(3.1) obtain the latest model update $\omega_t$ and the data set $D_k$ (the CIFAR-10 public data set) from the server; randomly divide the data set $D_k$ into batches of size B (B = 10), adopt stochastic gradient descent as the optimizer with an initial learning rate of 0.1, momentum set to 0.1 and a weight decay coefficient of 0.01, and put the batches into the network model for training (a configuration sketch for this step follows this list);
(3.2) build the convolutional layers using the classical convolutional neural network (CNN), and add a self-attention mechanism locally to help strengthen the incremental learning memory. As data arrive incrementally, a weight is calculated according to the importance of a specific part of the input and its correlation with the output, a correlation score is assigned to the input elements, and noisy parts are ignored. Through the self-attention mechanism, key information is retained, features are better extracted and selected, the correlation of local features is continuously captured, and the performance of the local model is improved, as shown in formula (4):

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d_{k}}}\right)V \tag{4}$$

where Q denotes the query matrix, K and V denote the key and value matrices, and $\sqrt{d_{k}}$ denotes a scaling factor used to prevent the inner product of Q and K from becoming too large;
(3.3) update the model parameters in a local batch-incremental manner, saving the model parameters every fixed number of batches; then send the current local model parameters to the global model.
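Step (3.1) names the CIFAR-10 public data set, a batch size B of 10, and stochastic gradient descent with an initial learning rate of 0.1, momentum 0.1 and weight decay 0.01. A minimal sketch of that configuration is shown below; the data directory, the transform and the use of torchvision are illustrative assumptions.

```python
import torch
from torch import optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def make_local_loader_and_optimizer(model: torch.nn.Module, data_root: str = "./data"):
    """Step (3.1): CIFAR-10 divided randomly into batches of size B = 10,
    trained with SGD (lr = 0.1, momentum = 0.1, weight decay = 0.01)."""
    dataset = datasets.CIFAR10(root=data_root, train=True, download=True,
                               transform=transforms.ToTensor())
    loader = DataLoader(dataset, batch_size=10, shuffle=True)   # B = 10, random split
    optimizer = optim.SGD(model.parameters(), lr=0.1,
                          momentum=0.1, weight_decay=0.01)
    return loader, optimizer
```

The returned loader and optimizer can then be used by the client-side update loop sketched earlier.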
As shown in fig. 4, the federated learning training model based on the batch-incremental mode of the present invention first solves the problems of time and space loss caused by retraining after an interruption of the conventional batch learning algorithm, and avoids data leakage.
Applying incremental learning to the framework allows a user's knowledge to be updated gradually: previous knowledge can be corrected and strengthened, and the updated knowledge can adapt to newly arrived data, so that the whole data set does not have to be learned again while the privacy and security of the user data are guaranteed. The method solves the problems of time loss and space loss caused by retraining after an interruption when a traditional batch learning algorithm is used in a federated learning framework; then, to preliminarily address the historical forgetting problem brought by incremental learning, the average of the losses of all current local models is given to the local model; finally, an incremental self-attention mechanism is added to the network model to help strengthen the memory of correlated knowledge and improve the accuracy of classification tasks.
A traditional batch learning algorithm brings time and space losses because training must restart after an interruption. When a small data set is trained, retraining costs little time and space; in practical applications such as artificial intelligence, however, the required data sets are very large, and retraining then brings a corresponding loss. The data set used by the invention is bank customer complaints, classified into 12 categories: debt collection, consumer loan, mortgage, credit card, credit report, student loan, bank account or service, payday loan, money transfer, other financial service, and prepaid card. If the bank customer complaints were trained with a federated learning training model based on a traditional batch learning algorithm, a training interruption would cause a large time loss; the invention therefore provides a federated learning training model based on a batch-incremental mode, which effectively solves the interruption problem of the traditional batch learning algorithm. The built federated learning framework with incremental learning, however, retains the historical forgetting problem of incremental learning; aiming at this problem, a targeted loss function is selected to optimize the model by constructing a local loss update, and then, by building a federated learning local incremental self-attention mechanism model, the memory of the federated learning framework with incremental learning during data training is strengthened and the accuracy of classification tasks on the bank customer complaint data set is improved.
Specific embodiment: taking bank customer complaints as an example, the specific operation steps are as follows:
(1) a federated learning framework with incremental learning is proposed and built, aiming to solve the time and space losses caused by having to retrain the bank customer complaint data set after a training interruption when a batch learning algorithm is used in a traditional federated learning framework;
the built federated learning framework with incremental learning, however, still retains the historical forgetting problem inherent in incremental learning;
(2) aiming at the historical forgetting problem in the federated learning framework, a targeted loss function is selected to optimize the model by constructing a local loss update;
(3) then, by building a federated learning local incremental self-attention mechanism model, the memory of the federated learning framework with incremental learning while training the bank customer complaint data is strengthened, and finally the accuracy of classification tasks on the bank customer complaint data set in the federated learning training model is improved.
The federated learning training model based on the batch-incremental mode first solves the problems of time and space loss caused by retraining the bank customer complaint data set after an interruption of the traditional batch learning algorithm, and avoids leakage of the bank customer complaint data.
Applying incremental learning to the framework allows a user's knowledge to be updated gradually: previous knowledge can be corrected and strengthened, and the updated knowledge can adapt to newly arrived data, so that the whole data set does not have to be learned again while the privacy and security of the user data are guaranteed. The method solves the problems of time loss and space loss caused by retraining the bank customer complaint data set after an interruption when a traditional batch learning algorithm is used in a federated learning framework; then, to preliminarily address the historical forgetting problem brought by incremental learning, the average of the losses of all current local models is given to the local model; finally, an incremental self-attention mechanism is added to the network model to help strengthen the memory of correlated knowledge and improve the accuracy of classification tasks on the bank customer complaint data set.

Claims (5)

1. A federated learning training model based on a batch-incremental mode, characterized by comprising the following specific operation steps:
(1) a federated learning framework with incremental learning is proposed and built, aiming to solve the time and space losses caused by retraining after a training interruption when a batch learning algorithm is used in a traditional federated learning framework;
the built federated learning framework with incremental learning, however, still retains the historical forgetting problem inherent in incremental learning;
(2) aiming at the historical forgetting problem in the federated learning framework, a targeted loss function is selected to optimize the model by constructing a local loss update;
(3) then, by building a federated learning local incremental self-attention mechanism model, the memory of the federated learning framework with incremental learning during data training is strengthened, and finally the accuracy of classification tasks in the federated learning training model is improved.
2. The federated learning training model based on the batch-incremental mode as claimed in claim 1, wherein in step (1), the specific operation steps for proposing and building the federated learning framework with incremental learning are as follows:
(1.1) before a client trains, check whether model parameters saved from the last training exist, and if they do, load those model parameters into the global model;
(1.2) the global model sends the aggregated federated model to each client;
(1.3) during local training, each client performs batch-incremental training on its client model, saves the model parameters every fixed number of training batches, and after training outputs the network model parameters obtained from the last fixed batch of training;
(1.4) after a client finishes training, the model parameters at that moment are sent to the aggregation server;
(1.5) the aggregation server aggregates the received model parameters by taking a weighted average, as shown in formula (1):

$$\omega_{t+1} = \sum_{k=1}^{K} \frac{n_k}{n}\,\omega_{t}^{k} \tag{1}$$

in formula (1), t denotes the round of the aggregation-averaging process, K denotes the number of participants, $n_k$ denotes the local data volume of the k-th participant, n denotes the total data volume of all participants, and $\omega_t^{k}$ denotes the parameters of the local model at time t; the server then sends the aggregated model parameters $\omega_{t+1}$ to all participants.
3. The federated learning training model based on the batch-incremental mode as claimed in claim 2, wherein the parameters of the local model at time t comprise a weight parameter and a bias parameter.
4. The federated learning training model based on the batch-incremental mode as claimed in claim 1, wherein in step (2), the specific operations for constructing the local loss update are as follows:
(2.1) during local training, firstly judging whether model parameters saved from the last training exist, and if they do, updating the local loss as shown in formula (2):

$$L_{\mathrm{local}}(x, y) = -\sum_{j} y_{j}\,\log p_{j}(x) \tag{2}$$

in formula (2), x denotes a sample, y denotes a label, j denotes a variable traversing all labels, and $p_{j}(x)$ denotes the probability predicted by the local model for label j;
(2.2) if they do not exist, updating the local loss as shown in formula (3):

$$L_{\mathrm{new}} = b\,L_{\mathrm{local}} + (1-b)\,\frac{1}{Q}\sum_{q=1}^{Q} L_{q} \tag{3}$$

in formula (3), b is 0.5, $L_{\mathrm{local}}$ denotes the loss value obtained from the current local training, $\frac{1}{Q}\sum_{q=1}^{Q} L_{q}$ denotes the average of all current local losses, and q denotes a variable traversing all current clients.
5. The federated learning training model based on the batch-incremental mode as claimed in claim 1, wherein in step (3), the specific operation steps for building the federated learning local incremental self-attention mechanism model are as follows:
(3.1) obtaining the latest model update $\omega_t$ and the data set $D_k$ from the server; randomly dividing the data set $D_k$ into batches of size B, and putting them into the network model for training;
(3.2) building a convolutional layer using a convolutional neural network (CNN), and adding a self-attention mechanism locally, as shown in formula (4):

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d_{k}}}\right)V \tag{4}$$

in formula (4), Q denotes the query matrix, K and V denote the key and value matrices, and $\sqrt{d_{k}}$ denotes a scaling factor used to prevent the inner product of Q and K from becoming too large;
(3.3) updating the model parameters in a local batch-incremental manner, saving the model parameters every fixed number of batches; and then sending the current local model parameters to the global model.
CN202110768514.1A 2021-07-07 2021-07-07 Federated learning training method based on a batch-incremental mode Active CN113554181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110768514.1A CN113554181B (en) Federated learning training method based on a batch-incremental mode

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110768514.1A CN113554181B (en) Federated learning training method based on a batch-incremental mode

Publications (2)

Publication Number Publication Date
CN113554181A true CN113554181A (en) 2021-10-26
CN113554181B CN113554181B (en) 2023-06-23

Family

ID=78131397

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110768514.1A Active CN113554181B (en) Federated learning training method based on a batch-incremental mode

Country Status (1)

Country Link
CN (1) CN113554181B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111708640A (en) * 2020-06-23 2020-09-25 苏州联电能源发展有限公司 Edge calculation-oriented federal learning method and system
CN112232528A (en) * 2020-12-15 2021-01-15 之江实验室 Method and device for training federated learning model and federated learning system
US20210073677A1 (en) * 2019-09-06 2021-03-11 Oracle International Corporation Privacy preserving collaborative learning with domain adaptation
CN112507219A (en) * 2020-12-07 2021-03-16 中国人民大学 Personalized search system based on federal learning enhanced privacy protection
CN112949837A (en) * 2021-04-13 2021-06-11 中国人民武装警察部队警官学院 Target recognition federal deep learning method based on trusted network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210073677A1 (en) * 2019-09-06 2021-03-11 Oracle International Corporation Privacy preserving collaborative learning with domain adaptation
CN111708640A (en) * 2020-06-23 2020-09-25 苏州联电能源发展有限公司 Edge calculation-oriented federal learning method and system
CN112507219A (en) * 2020-12-07 2021-03-16 中国人民大学 Personalized search system based on federal learning enhanced privacy protection
CN112232528A (en) * 2020-12-15 2021-01-15 之江实验室 Method and device for training federated learning model and federated learning system
CN112949837A (en) * 2021-04-13 2021-06-11 中国人民武装警察部队警官学院 Target recognition federal deep learning method based on trusted network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HONGRUI SHI et al.: "Towards Federated Learning with Attention Transfer to Mitigate System and Data Heterogeneity of Clients", EdgeSys '21: Proceedings of the 4th International Workshop on Edge Systems, Analytics and Networking, page 61
HU KAI et al.: "A Federated Incremental Learning Algorithm Based on Dual Attention Mechanism", Applied Sciences, vol. 12, no. 19, pages 1-19
张彦雯 et al.: "A Survey of 3D Reconstruction Algorithms" (三维重建算法研究综述), Journal of Nanjing University of Information Science & Technology (Natural Science Edition), vol. 12, no. 05, pages 591-602
王生生 et al.: "COVID-19 Chest CT Image Segmentation Based on Federated Learning and Blockchain" (基于联邦学习和区块链的新冠肺炎胸部CT图像分割), Journal of Jilin University (Engineering and Technology Edition), vol. 51, no. 06, pages 2164-2173

Also Published As

Publication number Publication date
CN113554181B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN112364943B (en) Federal prediction method based on federal learning
US20210073639A1 (en) Federated Learning with Adaptive Optimization
AU2013364041B2 (en) Instance weighted learning machine learning model
CN111798244B (en) Transaction fraud monitoring method and device
CN105591882A (en) Method and system for mixed customer services of intelligent robots and human beings
US11521214B1 (en) Artificial intelligence payment timing models
CN109767225B (en) Network payment fraud detection method based on self-learning sliding time window
CN112292696B (en) Method and device for determining action selection policy of execution device
CN110213097B (en) Edge service supply optimization method based on dynamic resource allocation
CN115330556B (en) Training method, device and product of information adjustment model of charging station
CN111383093A (en) Intelligent overdue bill collection method and system
EP4002213A1 (en) System and method for training recommendation policies
CN110689359A (en) Method and device for dynamically updating model
WO2020233245A1 (en) Method for bias tensor factorization with context feature auto-encoding based on regression tree
US20190362354A1 (en) Real-time updating of predictive analytics engine
CN114282692A (en) Model training method and system for longitudinal federal learning
CN112639841A (en) Sampling scheme for policy search in multi-party policy interaction
CN117994635B (en) Federal element learning image recognition method and system with enhanced noise robustness
CN107256231B (en) Team member identification device, method and system
US20230306445A1 (en) Communication channel or communication timing selection based on user engagement
CN109960811A (en) A kind of data processing method, device and electronic equipment
CN113554181A (en) Federal learning training model based on batch increment mode
CN112470123B (en) Determining action selection guidelines for executing devices
CN116702976A (en) Enterprise resource prediction method and device based on modeling dynamic enterprise relationship
TW202040479A (en) Automatic fund depositing system and automatic fund depositing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant