CN115130683A - Asynchronous federated learning method and system based on multi-agent model - Google Patents

Asynchronous federated learning method and system based on multi-agent model

Info

Publication number
CN115130683A
CN115130683A
Authority
CN
China
Prior art keywords
training
model
client
group
asynchronous
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210842680.6A
Other languages
Chinese (zh)
Inventor
Yu Guoxian (余国先)
Liu Liliang (刘礼亮)
Wang Jun (王峻)
Guo Wei (郭伟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University
Priority to CN202210842680.6A
Publication of CN115130683A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 - Complex mathematical operations
    • G06F 17/18 - Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Algebra (AREA)
  • Operations Research (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the technical field of asynchronous federated learning and provides an asynchronous federated learning method and an asynchronous federated learning system based on a multi-agent model, comprising the following steps: randomly selecting a plurality of pre-training clients in each group of clients, and acquiring a decision result of whether each pre-training client participates in the training and uploading of the model; receiving the local models trained and uploaded by the participating clients in each group, and aggregating them to obtain a group model; and carrying out weighted aggregation on the group models to obtain a global model. The method can solve both the long waiting delay problem of synchronous federated learning and the communication bottleneck problem of fully asynchronous federated learning.

Description

Asynchronous federated learning method and system based on multi-agent model
Technical Field
The invention belongs to the technical field of asynchronous federated learning, and particularly relates to an asynchronous federated learning method and system based on a multi-agent model.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Federated learning involves hundreds of millions of remote devices that train locally on the data they generate and collectively train a global, shared model in coordination with a central server that acts as an aggregator.
The federated averaging algorithm is the starting point of federated learning: it breaks the fixed pattern of traditional centralized and distributed training and protects data privacy by transmitting only model gradient parameters. Experiments show that this approach enables flexible, efficient communication and reduces communication cost. However, because federated averaging assumes a highly idealized environment, many problems arise in real heterogeneous client scenarios. The most typical is that in each training round the faster clients must wait for the slowest client to finish training before the global model can be aggregated and updated, so the training efficiency of the whole federation is dictated by the slowest client, which greatly reduces training efficiency and prolongs training time.
Synchronous federated learning such as federated averaging is evaluated in highly idealized scenarios. In real-world scenarios, device heterogeneity and network unreliability inevitably produce stragglers (lagging or dropped devices), so in practice an asynchronous federated training mode, in which the server does not wait for lagging devices before aggregating, is more common.
In the existing exponentially weighted average asynchronous federated learning algorithm, each client trains its local model on its own local dataset; as soon as a client finishes training, it sends its model parameters to the central server, and the central server aggregates them immediately without waiting for any other edge device. The core idea is to give lower weight to local models uploaded later, which adaptively trades off convergence speed against variance reduction. This still fails to address the inherent problem of fully asynchronous federated learning: the communication bottleneck caused by local clients communicating frequently with the central server.
To address the training waiting delay of synchronous federated learning and the communication bottleneck of fully asynchronous federated learning, semi-asynchronous methods have been proposed that balance these two major problems. However, for the more complicated heterogeneous client scenarios of real life, there is still no good method that minimizes communication overhead and training delay without sacrificing model accuracy.
Disclosure of Invention
In order to solve the technical problems in the background art, the invention provides an asynchronous federated learning method and an asynchronous federated learning system based on a multi-agent model, which can solve both the long waiting delay problem of synchronous federated learning and the communication bottleneck problem of fully asynchronous federated learning.
In order to achieve the purpose, the invention adopts the following technical scheme:
the first aspect of the invention provides an asynchronous federated learning method based on a multi-agent model, which comprises the following steps:
randomly selecting a plurality of pre-training clients in each group of clients, and acquiring a decision result of whether each pre-training client participates in the training and uploading of the model;
receiving the local models trained and uploaded by the participating clients in each group, and aggregating them to obtain a group model;
and carrying out weighted aggregation on the group model to obtain a global model.
Further, the method for obtaining the decision result comprises the following steps:
acquiring the state of each pre-training client;
and inputting the state of each pre-training client into the reinforcement learning agent network to obtain a decision result whether each pre-training client participates in the training and uploading of the model.
Further, the state of a pre-training client includes: the training round index t; the amount of data on the client; the number of times the client has participated in local model updates and uploads up to round t; the number of times the group model of the client's group has been updated up to round t; the communication overhead of all pre-training clients; and the training delay of all pre-training clients.
Further, the reinforcement learning agent network aims to maximize the cumulative return.
Further, when the group models are weighted and aggregated, the weight of each group model is related to the number of updates of that group's model.
A second aspect of the present invention provides an asynchronous federated learning system based on a multi-agent model, which includes:
a client-side smart selection module configured to: randomly selecting a plurality of pre-training clients in each group of clients, and acquiring a decision result of whether each pre-training client participates in the training and uploading of the model;
an intra-group synchronization training module configured to: receiving the local models trained and uploaded by the participating clients in each group, and aggregating them to obtain a group model;
an inter-group asynchronous training module configured to: and carrying out weighted aggregation on the group model to obtain a global model.
Further, the client intelligent selection module is specifically configured to:
acquiring the state of each pre-training client;
and inputting the state of each pre-training client into the reinforcement learning agent network to obtain a decision result whether each pre-training client participates in the training and uploading of the model.
Further, when the group models are weighted and aggregated, the weight of each group model is related to the number of updates of that group's model.
A third aspect of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of a multi-agent model-based asynchronous federated learning method as described above.
A fourth aspect of the present invention provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the steps in the multi-agent model-based asynchronous federated learning method as described above.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides an asynchronous federated learning method based on a multi-agent model, which introduces a client intelligent selection mode with high-efficiency execution of multi-agent reinforcement learning to replace a random selection strategy in the conventional federated learning method, not only optimizes model precision, but also greatly improves model training efficiency, costs less communication overhead and training delay when training to reach specified precision compared with other more advanced semi-asynchronous federated learning methods, and can be applied to a wide heterogeneous client machine learning model training scene in real life.
The invention provides an asynchronous federated learning method based on a multi-agent model, wherein each client side is provided with a reinforcement learning agent, whether the reinforcement learning agent participates in the training and uploading aggregation of the model in the current round or not is determined according to own observation, the problem of machine learning model training in a more complex heterogeneous client side scene can be solved, the problem of long-time waiting delay in synchronous federated learning can be solved, and the problem of communication bottleneck in completely semi-asynchronous federated learning can be solved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and together with the description serve to explain the invention without limiting it.
Fig. 1 is an overall flowchart of an asynchronous federated learning method based on a multi-agent model according to the first embodiment of the present invention;
Fig. 2 is an accuracy change diagram in the complex scenario on the F-MNIST dataset according to the first embodiment of the present invention;
Fig. 3(a) is a communication overhead diagram in the complex scenario on the F-MNIST dataset according to the first embodiment of the present invention;
Fig. 3(b) is a training delay diagram in the complex scenario on the F-MNIST dataset according to the first embodiment of the present invention;
Fig. 4 is an accuracy change diagram in the complex scenario on the CIFAR-10 dataset according to the first embodiment of the present invention;
Fig. 5(a) is a communication overhead diagram in the complex scenario on the CIFAR-10 dataset according to the first embodiment of the present invention;
Fig. 5(b) is a training delay diagram in the complex scenario on the CIFAR-10 dataset according to the first embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
Interpretation of terms:
objective function of the federated learning algorithm:
$$f(w)=\sum_{k=1}^{K}\frac{n_k}{n}F_k(w),\qquad F_k(w)=\frac{1}{n_k}\sum_{i\in D_k}\ell_i(x_i,y_i;w)$$

where $F_k(w)$ is the local empirical loss of client $k$; $\ell_i(x_i,y_i;w)$ is the loss value corresponding to data sample $\{x_i,y_i\}$; $w$ is the machine learning model to be trained; $K$ is the total number of clients; $D_k$ ($k\in\{1,\dots,K\}$) denotes the data samples stored on local client $k$; $n_k=|D_k|$ is the number of data samples on client $k$; and $n=\sum_{k=1}^{K}n_k$ is the total number of data samples stored on the $K$ clients. It is assumed that $D_k\cap D_{k'}=\varnothing$ for any $k\neq k'$.
final objective of the federated learning algorithm: find a model $w^*$ that minimizes the objective function:
$$w^*=\arg\min_{w} f(w)$$
federated averaging algorithm: a common approach for solving the optimization problem defined by the final objective above with synchronous updates under non-convex settings. In each round it randomly samples a subset of clients with a certain probability, and each selected local client performs a number of local iterations on its own data with an optimizer such as stochastic gradient descent. A minimal sketch of one such round follows.
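The following is a minimal sketch of one federated averaging round, assuming models are plain NumPy parameter vectors and a toy least-squares loss for the local updates; the helper names (local_sgd, fedavg_round) and all hyperparameters are illustrative, not taken from the patent.

```python
import numpy as np

def local_sgd(w, data, lr=0.01, iters=3):
    """A few local gradient steps on the client's own data.
    A mean-squared-error gradient stands in for any local loss."""
    X, y = data
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients, sample_frac=0.2, rng=np.random.default_rng(0)):
    """One synchronous round: sample clients, train locally, then
    average the local models weighted by data size (n_k / N)."""
    k = max(1, int(sample_frac * len(clients)))
    selected = rng.choice(len(clients), size=k, replace=False)
    sizes = np.array([len(clients[i][1]) for i in selected], dtype=float)
    local_models = [local_sgd(w_global.copy(), clients[i]) for i in selected]
    weights = sizes / sizes.sum()
    return sum(p * w for p, w in zip(weights, local_models))
```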
Update rule of the global model in the exponentially weighted average asynchronous federated learning algorithm:
$$\alpha_t\leftarrow\alpha\times s(t-\tau)$$
$$w_t\leftarrow(1-\alpha_t)\,w_{t-1}+\alpha_t\,w_{new}$$

where $\tau$ is the round index at which the fastest client uploaded to update the global model; $t$ is the current round index; $\alpha\in(0,1)$ is the mixing coefficient; $\alpha_t$ is the coefficient after dynamic update in the current round $t$; $w_{t-1}$ is the old model obtained in the previous round of training; $w_{new}$ is the new model obtained by the current training; $w_t$ is the model obtained by the weighted update and used for the next round of training; and $s(\cdot)$ is a model staleness function, which can for example take the polynomial form $s(t-\tau)=(t-\tau+1)^{-a}$ with $a>0$.
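As a minimal sketch, the update rule above can be written as follows, assuming the polynomial staleness function; the values of alpha and a are illustrative hyperparameters.

```python
def staleness(t, tau, a=0.5):
    """Polynomial staleness s(t - tau) = (t - tau + 1)^(-a):
    the older the model a client trained on, the smaller the factor."""
    return (t - tau + 1) ** (-a)

def async_update(w_old, w_new, t, tau, alpha=0.6, a=0.5):
    """w_t <- (1 - alpha_t) * w_{t-1} + alpha_t * w_new."""
    alpha_t = alpha * staleness(t, tau, a)
    return (1.0 - alpha_t) * w_old + alpha_t * w_new
```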
Example one
This embodiment provides an asynchronous federated learning method based on a multi-agent model which, as shown in Fig. 1, specifically includes the following steps:
step 1, client intelligent selection stage. At each round of training (tth round), at each group of client groups m And randomly selecting a plurality of (P | pieces) pre-training clients, acquiring the state of each pre-training client, inputting the state of each pre-training client into the reinforcement learning agent network, and obtaining a decision result whether each pre-training client participates in the training and uploading of the model.
Suppose now there is a federated learning task that divides all clients into M groups, one of which is a group m The formal description is made, and the other groups are similar. group m And (3) randomly selecting | P | pre-training clients in each training round t, then deciding whether to participate in the training and uploading of the model of the round according to the own reinforcement learning agent by the | P | clients respectively, and performing local synchronous federated training and model uploading updating on the | P' | clients after decision. The communication overhead of each communication (uploading or downloading model) between the client n and the central server is a fixed value B n Response time CP of client n n Expressed in the number of global rounds that the client experiences in local training one time, CP n The larger the value is, the longer the time for the client to perform local training is, the slower the response is, and the training delay of the client n is
Figure BDA0003751735120000071
The update of the group model requires waiting for the slowest client in the group to complete local training.
Step 101: each selected pre-training client obtains its current state

$$s_n^t=\left(t,\ |D_n|,\ c_n^t,\ g_m^t,\ B^t,\ L^t\right)$$

The state space of each reinforcement learning agent $n$ consists of six parts: the index $t$ of the current training round; the data size $|D_n|$ on client $n$; the number of times $c_n^t$ that client $n$ has participated in local model updates and uploads up to round $t$; the number of times $g_m^t$ that the group model of $group_m$, the group containing client $n$, has been updated up to round $t$; the communication overhead of all pre-training clients, $B^t=\{B_j\mid j\in P\}$; and the training delay of all pre-training clients, $L^t=\{L_j\mid j\in P\}$.
Step 102: input the current state into the reinforcement learning agent network to obtain the decision

$$a_n^t=\pi_n\!\left(s_n^t\right)\in\{0,1\}$$

where 1 means the client participates in this round's model training and uploading, and 0 means it does not.
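A minimal sketch of steps 101 and 102 follows, assuming each agent is a small two-layer network whose sigmoid output is thresholded at 0.5; the patent does not fix the agent architecture, so the class, layer sizes and threshold are illustrative.

```python
import numpy as np

def build_state(t, data_size, n_uploads, group_updates, comm_costs, delays):
    """Concatenate the six state components of s_n^t into one vector."""
    return np.concatenate([[t, data_size, n_uploads, group_updates],
                           comm_costs, delays]).astype(np.float32)

class AgentPolicy:
    """Tiny stand-in for the reinforcement learning agent network pi_n."""
    def __init__(self, state_dim, hidden=32, rng=np.random.default_rng(0)):
        self.W1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, 1))

    def decide(self, state):
        """Forward pass; returns 1 = participate this round, 0 = skip."""
        h = np.tanh(state @ self.W1)
        p = 1.0 / (1.0 + np.exp(-(h @ self.W2)))
        return int(p.item() > 0.5)
```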
The reinforcement learning agent network aims to maximize the cumulative return. Specifically, after the intra-group training is completed and the group model has been uploaded to update the global model, the reward is computed:
$$r_t=u^{\,acc_t-acc_{last}}-\sum_{j\in P'_t}B_j-\sum_{j\in P'_t}L_j$$

where $u$ is a constant greater than 1, with a suitable value chosen according to experimental conditions; $acc_t$ is the accuracy of the global model after it is updated by the clients selected in round $t$; $acc_{last}$ is the accuracy of the latest previous global model; $\sum_{j\in P'_t}B_j$ is the total communication overhead of all clients intelligently selected in round $t$; and $\sum_{j\in P'_t}L_j$ is the total training delay of all clients intelligently selected in round $t$.
the reinforcement learning agent network will be trained to maximize the expectation of cumulative returns R, which are described as follows:
Figure BDA0003751735120000085
where E is the total update times of the global model and γ ∈ (0,1) is the discount factor for future rewards.
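A minimal sketch of the reward and return computations above, under the assumption (noted with the reward formula) that the overhead and delay terms enter by plain subtraction; the values of u and gamma are illustrative.

```python
def reward(acc_t, acc_last, comm_costs, delays, u=64.0):
    """Exponential accuracy bonus minus communication and delay costs."""
    return u ** (acc_t - acc_last) - sum(comm_costs) - sum(delays)

def cumulative_return(rewards, gamma=0.95):
    """R = sum over the E global updates of gamma^t * r_t."""
    return sum(gamma ** t * r for t, r in enumerate(rewards, start=1))
```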
Step 2: intra-group synchronous training stage. Receive the local models trained and uploaded by the participating clients in each group and aggregate them to obtain a group model.
All clients in a group that decided to train and upload perform synchronous federated training to obtain the group model:
$$w_k^{t}=w^{t-1}-\eta\,\nabla F_k\!\left(w^{t-1}\right),\qquad w_{group_m}^{t}=\sum_{k\in P'_t}\frac{n_k}{N_c}\,w_k^{t}$$

where $P'_t$, $|P'_t|$, $n_k$, $N_c$, $\eta$ and $\nabla F_k(\cdot)$ denote, respectively, the subset of clients intelligently selected in $group_m$ in round $t$, the number of intelligently selected clients, the amount of data on client $k$, the total amount of data in $P'_t$ (so $N_c=\sum_{k\in P'_t}n_k$), the learning rate, and the gradient of the local empirical loss of client $k$.
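A minimal sketch of this intra-group synchronous step, assuming grad_fn(k, w) returns the gradient of client k's local empirical loss; the names are illustrative.

```python
import numpy as np

def group_update(w_prev, participants, grad_fn, eta=0.01):
    """participants: list of (client_id, n_k) pairs for P'_t.
    Each client takes one local SGD step from w_prev, and the group
    model is the data-size-weighted average of the local models."""
    n_total = sum(n_k for _, n_k in participants)   # N_c
    w_group = np.zeros_like(w_prev)
    for k, n_k in participants:
        w_k = w_prev - eta * grad_fn(k, w_prev)
        w_group += (n_k / n_total) * w_k
    return w_group
```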
Step 3: inter-group asynchronous training stage. Perform weighted aggregation of the group models to obtain the global model. In the weighted aggregation, the weight of each group model is related to the number of updates of that group's model.
Suppose all clients are divided into $M$ groups whose update counts so far are $T_1,T_2,\dots,T_M$, so the total number of updates over all groups is $T_1+T_2+\dots+T_M$. The weighted aggregation that yields the global model is described as follows:

$$w_{global}=\sum_{m=1}^{M}p_m\,w_{group_m},\qquad p_m=\frac{T_{M+1-m}}{\sum_{i=1}^{M}T_i}$$

where $p_m$ is the weight corresponding to $group_m$. Under this formula, a relatively slow group has a smaller update count $T_m$ but a larger $T_{M+1-m}$, and is therefore assigned a larger weight.
To solve the long waiting delay problem of synchronous federated learning and the communication bottleneck problem of fully asynchronous federated learning, this embodiment provides an asynchronous federated learning method based on a multi-agent model (abbreviated MAAFL). It differs from other semi-asynchronous methods in introducing multi-agent reinforcement learning to perform efficient intelligent selection of clients within a group: each client is equipped with a reinforcement learning agent that decides, based on its own observations, whether to participate in the current round of model training and upload aggregation. By executing this efficient intelligent client selection strategy in an asynchronous setting, the method not only optimizes model accuracy but also greatly improves training efficiency, spending less communication overhead and training delay than other advanced semi-asynchronous federated learning methods to reach a specified accuracy.
This embodiment compares MAAFL with the synchronous federated averaging method (abbreviated FedAvg), the fully asynchronous federated method (abbreviated FedAsync), and the tiered semi-asynchronous method (abbreviated FedAT).
The synchronous federated averaging method (FedAvg) is the baseline federated learning method. In each round, a proportion of all clients is randomly drawn for training, and the server averages the weights received from the selected clients.
The fully asynchronous federated method (FedAsync) is a baseline asynchronous federated learning method that updates the server's global model by weighted averaging. Unlike synchronous federated learning, all clients train simultaneously; whenever the server receives weights from any client, it immediately forms a weighted average of those weights and the current global model to obtain the latest global model, which it then communicates to all available clients for further training.
The tiered semi-asynchronous method (FedAT) is a semi-asynchronous federated learning method combining synchronous intra-tier training with asynchronous cross-tier training. Within each tier, part of the clients are selected by a random selection strategy for synchronous federated training; across tiers, the global model is updated asynchronously through communication with the central server.
To verify the effectiveness of the asynchronous federated learning method based on the multi-agent model, the experiments compare the model accuracy achieved by FedAvg, FedAsync, FedAT and MAAFL, together with the communication overhead and training delay spent to reach a specified accuracy.
Three datasets are used in the experiments: MNIST, Fashion-MNIST and CIFAR-10. MNIST is a handwritten digit recognition dataset with a training set of 60,000 samples and a test set of 10,000 samples; each example is a 28 x 28 grayscale image associated with a label from 10 classes. Fashion-MNIST is a clothing image dataset with a training set of 60,000 samples and a test set of 10,000 samples; each example is a 28 x 28 grayscale image associated with a label from 10 classes. CIFAR-10 contains 60,000 color images of 32 x 32 pixels in 10 classes of 6,000 images each, with 5,000 images per class for training and 1,000 for testing.
The model is evaluated on these three datasets using a non-independent and identically distributed (non-IID) data partition. Specifically, all data of each dataset are divided into 200 shards by class label: the data for each of the 10 class labels are split into 20 shards, and each local client is assigned 2 shards belonging to different label classes. This partition ensures that each of the 100 local clients holds data with only two labels and that the data sizes differ across clients, so that training takes place in a non-IID data environment; a sketch of such a partition appears below.
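A minimal sketch of this shard-based partition follows; the function name and seed are illustrative, and shard sizes vary slightly when a label's sample count is not divisible by the shard count, which also makes client data sizes differ.

```python
import numpy as np

def partition_non_iid(labels, n_labels=10, shards_per_label=20,
                      rng=np.random.default_rng(0)):
    """Split sample indices into label-pure shards, then give each
    client two shards with different labels (two classes per client)."""
    shards = []                                    # (label, sample indices)
    for c in range(n_labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        shards += [(c, part) for part in np.array_split(idx, shards_per_label)]
    pool = [shards[i] for i in rng.permutation(len(shards))]
    clients = []
    while len(pool) >= 2:
        label_a, idx_a = pool.pop()
        # prefer a shard with a different label so each client sees 2 classes
        j = next((i for i, s in enumerate(pool) if s[0] != label_a), 0)
        label_b, idx_b = pool.pop(j)
        clients.append(np.concatenate([idx_a, idx_b]))
    return clients                                 # ~100 clients from 200 shards
```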
The experiment randomly splits each client's local data into an 80% training set and a 20% test set. For intra-group synchronous training, the same sampling method as federated averaging is used: MAAFL randomly selects a portion of clients as pre-training clients, and each pre-training client then executes its intelligent decision on whether to participate in the current round. Stochastic gradient descent (SGD) is used as the local optimizer. For all datasets the local training configuration is uniform: learning rate 0.01, 3 local iterations, batch size 10, and 20 randomly selected pre-training clients for all algorithms.
The experiments simulate groups of different performance: all clients are divided evenly into 5 groups, and the clients of each group are randomly assigned response times of 1-3, 3-5, 5-7, 7-9 and 9-11 rounds respectively. Furthermore, to simulate unstable network connections, 20 "unstable" clients are randomly selected in every test run; during training these exit with a small probability, and once a client exits it never rejoins the federated training.
In the experiments, the communication overhead of each communication (model upload or download) between a local client and the central server is fixed. To simulate the heterogeneity of local clients in real scenarios, each client is randomly assigned a fixed communication overhead value within a fixed range, so different clients have different communication overheads. The total communication overhead is the sum of the communication overheads of all clients with the central server over the whole training process.
Differences in computing power and other factors across local clients introduce delays into each round of local model uploading. Among the local clients intelligently selected to train in a round, fast clients must wait for slow ones: for example, a client with a response time of 2 rounds waiting for a client with a response time of 5 rounds incurs a delay of 3 global rounds. The total training delay is the accumulation of every client's training delay over the whole training process, as in the sketch below.
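A minimal sketch of this bookkeeping, reusing the example from the text; the helper name is illustrative.

```python
def round_delays(response_times):
    """Within one round every selected client waits for the slowest one.
    E.g. [2, 5] -> [3, 0]: the 2-round client waits 3 global rounds."""
    slowest = max(response_times)
    return [slowest - cp for cp in response_times]

total_delay = 0
for selected in [[2, 5], [3, 3, 7]]:   # illustrative rounds
    total_delay += sum(round_delays(selected))
print(total_delay)                      # 3 + (4 + 4 + 0) = 11
```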
The MAAFL algorithm and the three comparison methods are each evaluated on the three datasets. Each algorithm is run three times on each dataset; the best global model accuracy after convergence is recorded for each run, the three best results are averaged to obtain the mean best global model accuracy, and the standard deviation over the three runs is computed. The results are shown in Table 1.
TABLE 1 Accuracy of the different algorithms on the three datasets
[Table 1 is available only as an image in the original document.]
The communication overhead and training delay spent to achieve a specified accuracy on the three datasets are shown in Tables 2, 3 and 4.
TABLE 2 communication overhead and training delay spent by different algorithms to achieve specified accuracy on MNIST data sets
[Table 2 is available only as an image in the original document.]
TABLE 3 communication overhead and training delay spent by different algorithms to achieve specified accuracy on F-MNIST dataset
[Table 3 is available only as an image in the original document.]
TABLE 4 communication overhead and training delay spent by different algorithms to achieve specified accuracy on CIFAR-10 dataset
[Table 4 is available only as an image in the original document.]
The experiment further simulates a more complex scenario in which the response time of the slowest client lags 21 rounds behind; the resulting experimental results are shown in Fig. 2, Fig. 3(a), Fig. 3(b), Fig. 4, Fig. 5(a) and Fig. 5(b).
These experimental results show that, although in the general scenario MAAFL and the likewise semi-asynchronous FedAT each have their own overall advantages, in the more complex scenario MAAFL surpasses FedAT in both accuracy and training efficiency. The highlights of the experimental results are summarized as follows:
(1) Compared with FedAvg, MAAFL effectively solves FedAvg's training delay problem, which matters greatly in real heterogeneous scenarios, and avoids long waiting between clients;
(2) Compared with FedAsync, MAAFL avoids the communication bottleneck caused in fully asynchronous methods by frequent communication with the central server;
(3) Compared with FedAT, the other semi-asynchronous approach, MAAFL shows its advantages most clearly in complex scenarios: its accuracy exceeds FedAT's, and when reaching a specified accuracy on the two more complex datasets it reduces training delay by up to 44% and communication overhead by 36% on average relative to FedAT.
In this embodiment, all clients participating in federated training are grouped by their response times; synchronous federated training within a group produces a group model, the group models update the global model asynchronously with weighting across groups, and multi-agent reinforcement learning is introduced so that each client has a reinforcement learning agent that decides, based on its own observations, whether to participate in model training. The method can handle machine learning model training in more complex heterogeneous client scenarios, solving both the long waiting delay problem of synchronous federated learning and the communication bottleneck problem of fully asynchronous federated learning.
Example two
This embodiment provides an asynchronous federated learning system based on a multi-agent model, which specifically comprises the following modules:
a client-side smart selection module configured to: randomly selecting a plurality of pre-training clients in each group of clients, and acquiring a decision result of whether each pre-training client participates in the training and uploading of the model;
an intra-group synchronization training module configured to: receiving the local models trained and uploaded by the participating clients in each group, and aggregating them to obtain a group model;
an inter-group asynchronous training module configured to: and carrying out weighted aggregation on the group model to obtain a global model.
The client intelligent selection module is specifically configured to:
acquiring the state of each pre-training client;
and inputting the state of each pre-training client into the reinforcement learning agent network to obtain a decision result whether each pre-training client participates in the training and uploading of the model.
When the group models are weighted and aggregated, the weight of each group model is related to the number of updates of that group's model.
It should be noted that, each module in the present embodiment corresponds to each step in the first embodiment one to one, and the specific implementation process is the same, which is not described again here.
Example three
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the asynchronous federated learning method based on a multi-agent model as described in the first embodiment above.
Example four
The embodiment provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the processor implements the steps in the asynchronous federated learning method based on a multi-agent model as described in the first embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An asynchronous federated learning method based on a multi-agent model, characterized by comprising the following steps:
randomly selecting a plurality of pre-training clients in each group of clients, and acquiring a decision result of whether each pre-training client participates in the training and uploading of the model;
receiving the local models trained and uploaded by the participating clients in each group, and aggregating them to obtain a group model;
and carrying out weighted aggregation on the group model to obtain a global model.
2. The asynchronous federated learning method based on the multi-agent model as claimed in claim 1, characterized in that the method for obtaining the decision result is:
acquiring the state of each pre-training client;
and inputting the state of each pre-training client into the reinforcement learning agent network to obtain a decision result whether each pre-training client participates in the training and uploading of the model.
3. The multi-agent model-based asynchronous federated learning method of claim 2, wherein the state of the pre-training client includes: the training round index t, the data size on the client, the number of times the client has participated in local model updates and uploads up to round t, the number of times the client's group model has been updated up to round t, the communication overhead of all pre-training clients, and the training delay of all pre-training clients.
4. The multi-agent model-based asynchronous federated learning method of claim 2, wherein the reinforcement learning agent network targets a maximum cumulative return.
5. The asynchronous federated learning method based on the multi-agent model of claim 1, wherein, when the group models are weighted and aggregated, the weight of each group model is related to the number of updates of that group's model.
6. An asynchronous federated learning system based on a multi-agent model, comprising:
a client-side smart selection module configured to: randomly selecting a plurality of pre-training clients in each group of clients, and acquiring a decision result of whether each pre-training client participates in the training and uploading of the model;
an intra-group synchronization training module configured to: receiving the local models trained and uploaded by the participating clients in each group, and aggregating them to obtain a group model;
an inter-group asynchronous training module configured to: and carrying out weighted aggregation on the group model to obtain a global model.
7. The multi-agent model-based asynchronous federated learning system of claim 6, wherein the client-side intelligent selection module is specifically configured to:
acquiring the state of each pre-training client;
and inputting the state of each pre-training client into the reinforcement learning agent network to obtain a decision result whether each pre-training client participates in the training and uploading of the model.
8. The multi-agent model-based asynchronous federated learning system of claim 6, wherein, when the group models are weighted and aggregated, the weight of each group model is related to the number of updates of that group's model.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of a multi-agent model-based asynchronous federated learning method as claimed in any one of claims 1 to 5.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of a multi-agent model-based asynchronous federated learning method as recited in any one of claims 1-5 when executing the program.
CN202210842680.6A 2022-07-18 2022-07-18 Asynchronous federated learning method and system based on multi-agent model Pending CN115130683A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210842680.6A CN115130683A (en) Asynchronous federated learning method and system based on multi-agent model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210842680.6A CN115130683A (en) Asynchronous federated learning method and system based on multi-agent model

Publications (1)

Publication Number Publication Date
CN115130683A true CN115130683A (en) 2022-09-30

Family

ID=83384447

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210842680.6A Pending CN115130683A (en) Asynchronous federated learning method and system based on multi-agent model

Country Status (1)

Country Link
CN (1) CN115130683A (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210012224A1 (en) * 2019-07-14 2021-01-14 Olivia Karen Grabmaier Precision hygiene using reinforcement learning
CN111091200A (en) * 2019-12-20 2020-05-01 深圳前海微众银行股份有限公司 Updating method, system, agent, server and storage medium of training model
WO2021121029A1 (en) * 2019-12-20 2021-06-24 深圳前海微众银行股份有限公司 Training model updating method and system, and agent, server and computer-readable storage medium
CN112532451A * 2020-11-30 2021-03-19 安徽工业大学 Layered federated learning method and device based on asynchronous communication, terminal equipment and storage medium
CN112668877A * 2020-12-25 2021-04-16 西安电子科技大学 Thing resource information distribution method and system combining federated learning and reinforcement learning
CN113011599A * 2021-03-23 2021-06-22 上海嗨普智能信息科技股份有限公司 Federated learning system based on heterogeneous data
CN113191484A * 2021-04-25 2021-07-30 清华大学 Federated learning client intelligent selection method and system based on deep reinforcement learning
CN113643553A * 2021-07-09 2021-11-12 华东师范大学 Multi-intersection intelligent traffic signal lamp control method and system based on federated reinforcement learning
CN113490254A * 2021-08-11 2021-10-08 重庆邮电大学 VNF migration method based on bidirectional GRU resource demand prediction in federated learning
CN113971089A * 2021-09-27 2022-01-25 国网冀北电力有限公司信息通信分公司 Method and device for selecting equipment nodes of a federated learning system
CN114037089A * 2021-10-26 2022-02-11 中山大学 Heterogeneous scene-oriented asynchronous federated learning method, device and storage medium
CN114580658A * 2021-12-28 2022-06-03 天翼云科技有限公司 Blockchain-based federated learning incentive method, device, equipment and medium
CN114584581A * 2022-01-29 2022-06-03 华东师范大学 Federated learning system and federated learning training method for smart city Internet of things and information fusion
CN114528304A * 2022-02-18 2022-05-24 安徽工业大学 Federated learning method, system and storage medium with adaptive client parameter updating
CN114971819A * 2022-03-28 2022-08-30 东北大学 User bidding method and device based on multi-agent reinforcement learning algorithm under federated learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DIAN SHI et al.: "Make Smart Decisions Faster: Deciding D2D Resource Allocation via Stackelberg Game Guided Multi-Agent Deep Reinforcement Learning", IEEE Transactions on Mobile Computing, 1 June 2021
SAI QIAN ZHANG et al.: "A Multi-agent Reinforcement Learning Approach for Efficient Client Selection in Federated Learning", arXiv, 9 January 2022, pages 3-5
LIANG Yingchang; TAN Junjie; DUSIT NIYATO: "Overview of intelligent wireless communication technology research" (智能无线通信技术研究概况), Journal on Communications (通信学报), no. 07, 31 December 2020
GUO Wei (郭伟): "Research on the external financing environment of small and medium-sized enterprises in China" (我国中小企业外部融资环境研究), China Masters' Theses Full-text Database, Economics and Management Sciences, no. 5, 15 May 2013

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116306986A * 2022-12-08 2023-06-23 哈尔滨工业大学(深圳) Federated learning method based on dynamic affinity aggregation and related equipment
CN116306986B * 2022-12-08 2024-01-12 哈尔滨工业大学(深圳) Federated learning method based on dynamic affinity aggregation and related equipment
CN116029371A * 2023-03-27 2023-04-28 北京邮电大学 Federated learning workflow construction method based on pre-training and related equipment
CN116029371B * 2023-03-27 2023-06-06 北京邮电大学 Federated learning workflow construction method based on pre-training and related equipment

Similar Documents

Publication Publication Date Title
CN115130683A (en) Asynchronous federated learning method and system based on multi-agent model
CN113191484B (en) Federated learning client intelligent selection method and system based on deep reinforcement learning
US20190279088A1 (en) Training method, apparatus, chip, and system for neural network model
Yoshida et al. MAB-based client selection for federated learning with uncertain resources in mobile networks
CN113516250A (en) Method, device and equipment for federated learning and storage medium
CN111507768B (en) Potential user determination method and related device
CN105184367B (en) The model parameter training method and system of deep neural network
CN110956202B (en) Image training method, system, medium and intelligent device based on distributed learning
CN110968426A (en) Edge cloud collaborative k-means clustering model optimization method based on online learning
EP4350572A1 (en) Method, apparatus and system for generating neural network model, devices, medium and program product
CN115374853A (en) Asynchronous federated learning method and system based on T-Step aggregation algorithm
CN115587633A (en) Personalized federated learning method based on parameter layering
CN117236421B (en) Large model training method based on federated knowledge distillation
WO2022252694A1 (en) Neural network optimization method and apparatus
CN115643594B (en) Information age optimization scheduling method for multi-sensor multi-server industrial Internet of things
CN109558898B (en) Multi-choice learning method with high confidence based on deep neural network
CN113850394A (en) Federated learning method and device, electronic equipment and storage medium
CN116384504A (en) Federated transfer learning system
CN113094180B (en) Wireless federated learning scheduling optimization method and device
CN118095410A (en) Federated learning parameter-efficient fine-tuning method and device for neural network architecture search
CN117785490A (en) Training architecture, method, system and server of graph neural network model
CN115577797B (en) Federated learning optimization method and system based on local noise perception
Zhang et al. Optimizing federated edge learning on Non-IID data via neural architecture search
CN110378464A (en) The management method and device of the configuration parameter of artificial intelligence platform
CN115115064A (en) Semi-asynchronous federated learning method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination