CN113391897B - Heterogeneous scene-oriented federal learning training acceleration method - Google Patents

Heterogeneous scene-oriented federal learning training acceleration method Download PDF

Info

Publication number
CN113391897B
CN113391897B CN202110661958.5A CN202110661958A CN113391897B CN 113391897 B CN113391897 B CN 113391897B CN 202110661958 A CN202110661958 A CN 202110661958A CN 113391897 B CN113391897 B CN 113391897B
Authority
CN
China
Prior art keywords
client
time
round
sample set
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110661958.5A
Other languages
Chinese (zh)
Other versions
CN113391897A (en
Inventor
刘宇涛
夏子翔
章小宁
何耶肖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202110661958.5A priority Critical patent/CN113391897B/en
Publication of CN113391897A publication Critical patent/CN113391897A/en
Application granted granted Critical
Publication of CN113391897B publication Critical patent/CN113391897B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5066Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a heterogeneous scene-oriented federal learning training acceleration method, which comprises the following steps: s1: distributing the training tasks to the server and the client; s2: and running a client algorithm and a server algorithm according to the training task to obtain a client running result and a server running result. The heterogeneous scene oriented federal learning training acceleration method provided by the invention can solve the problem of low synchronization efficiency in the existing federal learning.

Description

Heterogeneous scene-oriented federal learning training acceleration method
Technical Field
The invention relates to the technical field of machine learning, in particular to a heterogeneous scene-oriented federal learning training acceleration method.
Background
Driven by resource and privacy considerations, we witnessed the rise of Federal Learning (FL) in the big data era for decades. Federal learning, as a distributed paradigm, has gradually replaced traditional centralized systems to implement Artificial Intelligence (AI) at the edge of the network, as shown in fig. 1. In federal learning, each client trains its local model using its own collected data without sharing the raw data with other clients. Client federations with the same interests may be joined to arrive at a sharing model by periodically synchronizing their local parameters under the coordination of a central server. However, due to the heterogeneous and dynamic nature of the edge environment, federated learning may encounter the problem of straggler (i.e., the slowest client in federated learning causes the overall waiting of all clients), as shown in fig. 2, making synchronization between clients inefficient, which slows convergence and aggravates the learning process.
Disclosure of Invention
The invention aims to provide a heterogeneous scene-oriented federal learning training acceleration method to solve the problem of low synchronization efficiency in the existing federal learning.
The technical scheme for solving the technical problems is as follows:
the invention provides a self-adaptive training quantity synchronous parallel training method, and the federal learning training acceleration method for heterogeneous scenes comprises the following steps:
s1: distributing the training tasks to the server and the client;
s2: and running a client algorithm and a server algorithm according to the training task to obtain a client running result and a server running result.
Optionally, the step S2 includes the following sub-steps:
s21: initializing global iteration times and global model parameters;
s22: iterating the global model parameters to generate new global model parameters;
s23: adding one to the overall iteration times, judging whether the overall iteration times reach overall preset iteration times, and if so, ending the operation of the client algorithm and the server algorithm; otherwise, go to step S24;
s24: obtaining the size of a small batch sample set of the client according to the target iterative estimation time;
s25: sending the small batch sample set to the client and recording the sending time;
s26: performing local iterative operation of the client by using the small batch sample set size and the new global model parameter to obtain the cumulative gradient of the client;
s27: sending the accumulated gradient to a server;
s28: recording the receiving time of the accumulated gradient in a server, and obtaining a calculation speed estimation parameter and a communication time estimation parameter of the client according to the sending time and the receiving time;
s29: and selecting target iteration estimation time according to the speed estimation parameter and the communication time estimation parameter, and returning to the step S23.
Optionally, in S28, obtaining the estimated speed parameter of the client according to the sending time and the receiving time is:
Figure GDA0003150265940000021
wherein,
Figure GDA0003150265940000022
a t +1 th round of computing a speed estimate parameter, in conjunction with a server, representing an ith client>
Figure GDA0003150265940000023
Is the mini-batch sample set size, based on the ith client>
Figure GDA0003150265940000024
Is the actual iteration time of the ith client, i.e. the uploading time minus the sending time, i.e. the
Figure GDA0003150265940000025
Figure GDA0003150265940000026
Representing the moment of reception, send, of the server receiving the ith client cumulative gradient t And the representing server sends the global model parameters and the small-batch sample set size to the sending time of all the clients, t is the current round, s represents any round, n is a natural number, and i represents the ith client.
Optionally, obtaining the communication time estimation parameter of the client according to the sending time and the receiving time is:
Figure GDA0003150265940000031
wherein,
Figure GDA0003150265940000032
a Tth round communication time estimate parameter, representing an ith client, based on the time estimate value>
Figure GDA0003150265940000033
A t +1 th round of computing a speed estimate parameter, in conjunction with a server, representing an ith client>
Figure GDA0003150265940000034
Is the mini-batch sample set size, based on the ith client>
Figure GDA0003150265940000035
Is the actual iteration time of the ith client, i.e., the upload time minus the transmit time, i.e., ->
Figure GDA0003150265940000036
Figure GDA00031502659400000312
Representing the moment of reception, send, of the server receiving the ith client cumulative gradient t And the representing server sends the global model parameters and the small-batch sample set size to the sending time of all the clients, t is the current round, s represents any round, n is a natural number, and i represents the ith client.
Optionally, the step S24 includes the following sub-steps:
s241: acquiring target iteration estimation time;
s242: and generating the small batch sample set size of the client according to the target iteration estimation time.
Optionally, in step S242, the generating the small batch sample set size of the client according to the target iteration estimation time is as follows:
Figure GDA0003150265940000037
wherein,
Figure GDA0003150265940000038
represents the size, or @, of the small batch sample set of the ith client's tth round>
Figure GDA0003150265940000039
A communication time estimation parameter representing the ith round of the client, <' >>
Figure GDA00031502659400000310
A calculated speed estimate parameter, representing the ith client's tth round, is based on>
Figure GDA00031502659400000311
Represents the target iteration estimate time, β, of the t-th round 0 Initial value representing size of small lot sample set
Optionally, the step S26 includes the following sub-steps:
s261: adding one to the local iteration times, judging whether the local iteration times reach local preset iteration times, and if so, entering step S262; otherwise, go to step S263;
s262: uploading the accumulated gradient of the client in the local iteration process to the server, and ending the local iteration operation of the client;
s263: randomly selecting a small-batch sample set with the size of the small-batch sample set from a local data set of a client;
s264: calculating a descending gradient and updating local model parameters according to the small-batch sample set;
s265: and accumulating and calculating the descending gradient to obtain an accumulated gradient of the client and returning to the step S261.
Optionally, in step S264, according to the small batch sample set, calculating a descent gradient and updating local model parameters as follows:
Figure GDA0003150265940000041
wherein the gradient is
Figure GDA0003150265940000042
After substituting sample ξ for the neural network penalty function, the penalty function will ≦ the local model parameter>
Figure GDA0003150265940000043
Is based on the partial derivative of (4)>
Figure GDA0003150265940000049
For small batches of sample sets, based on the number of samples collected>
Figure GDA0003150265940000044
The small batch sample set size of the t-th round is represented, k represents the number of rounds of the current local iteration, and i represents the ith client.
Optionally, between step S28 and step S29, further comprising:
updating the global model parameters according to the accumulated gradient received in the server; and
and selecting the next round of target iteration estimation time according to the calculation speed estimation parameter and the communication time estimation parameter of the client.
Optionally, the formula for updating the global model parameter according to the receiving time of the accumulated gradient in the server is as follows:
Figure GDA0003150265940000045
wherein w t+1 Representing the t +1 th round global model parameter, w t Representing the t-th round global model parameter, η t Representing the learning rate of the neural network training, N is the total number of participating clients,
Figure GDA0003150265940000046
representing the cumulative gradient;
the formula for selecting the next round of target iterative estimation time according to the calculation speed estimation parameter and the communication time estimation parameter of the client is as follows:
Figure GDA0003150265940000047
wherein,
Figure GDA0003150265940000048
represents the target iterative estimation time, beta, of the t +1 th round min Pre-set minimum representing small lot sample set sizeValue,. Or>
Figure GDA0003150265940000051
A calculated speed estimation parameter, representing the ith client round t +1, <' > based on the number of times>
Figure GDA0003150265940000052
And (3) representing the communication time estimation parameter of the ith client round t + 1.
The invention has the following beneficial effects:
according to the technical scheme, namely the federated learning training acceleration method for the heterogeneous scene, provided by the invention, on one hand, the small-batch sample set size (mini-batch) of each client is adaptively adjusted through continuous estimation of calculation and communication resources to solve the synchronous problem of federated learning in heterogeneous and dynamic environments; on the other hand, processing time differences between all participating clients are minimized by adaptively adjusting the hyper-parameters, thereby reducing synchronization delay and improving training efficiency.
Drawings
FIG. 1 is a flowchart of a heterogeneous scenario-oriented federated learning training acceleration method provided in an embodiment of the present invention;
FIG. 2 is a flowchart illustrating the steps of step S2 in FIG. 1;
FIG. 3 is a flowchart illustrating the steps of step S24 shown in FIG. 2;
fig. 4 is a flowchart illustrating the steps of step S26 in fig. 2.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Examples
The technical scheme for solving the technical problems is as follows:
the invention provides a heterogeneous scene-oriented federal learning training acceleration method, which comprises the following steps of:
s1: distributing the training tasks to the server and the client;
s2: and running a client algorithm and a server algorithm according to the training task to obtain a client running result and a server running result.
The invention has the following beneficial effects:
through the technical scheme, namely the federated learning training acceleration method for the heterogeneous scene, provided by the invention, on one hand, the synchronous problem of federated learning in heterogeneous and dynamic environments is solved by continuously estimating calculation and communication resources to adaptively adjust the size of a small-batch sample set (namely mini-batch, the same applies below) of each client; on the other hand, processing time differences between all participating clients are minimized by adaptively adjusting the hyper-parameters, thereby reducing synchronization delay and improving training efficiency.
Alternatively, referring to fig. 2, the step S2 includes the following sub-steps:
s21: initializing global iteration times and global model parameters;
s22: iterating the global model parameters to generate new global model parameters;
s23: adding one to the overall iteration times, judging whether the overall iteration times reach overall preset iteration times, and if so, ending the operation of the client algorithm and the server algorithm; otherwise, go to step S24;
s24: obtaining the size of a small batch sample set of the client according to the target iterative estimation time;
s25: sending the small batch sample set to the client and recording the sending time;
s26: performing local iterative operation of the client by using the size of the small batch sample set and the new global model parameter to obtain the accumulated gradient of the client;
s27: sending the accumulated gradient to a server;
s28: recording the receiving time of the accumulated gradient in a server, and obtaining a calculation speed estimation parameter and a communication time estimation parameter of the client according to the sending time and the receiving time;
s29: and selecting target iteration estimation time according to the speed estimation parameter and the communication time estimation parameter, and returning to the step S23.
Optionally, in S28, obtaining the estimated speed parameter of the client according to the sending time and the receiving time is:
Figure GDA0003150265940000061
wherein,
Figure GDA0003150265940000071
the t +1 th round of calculating the speed estimate parameter, which represents the ith client, is based on>
Figure GDA0003150265940000072
Is the mini-batch sample set size of the ith client, in>
Figure GDA0003150265940000073
Is the actual iteration time of the ith client, i.e., the upload time minus the transmit time, i.e., ->
Figure GDA0003150265940000074
Figure GDA00031502659400000715
Representing the moment of reception, send, of the server receiving the ith client cumulative gradient t And the representing server sends the global model parameters and the small-batch sample set size to the sending time of all the clients, t is the current round, s represents any round, n is a natural number, and i represents the ith client.
Optionally, obtaining a communication time estimation parameter of the client according to the accumulated gradient and the receiving time is:
Figure GDA0003150265940000075
wherein,
Figure GDA0003150265940000076
a Tth round communication time estimate parameter, representing an ith client, based on the time estimate value>
Figure GDA0003150265940000077
The t +1 th round of calculating the speed estimate parameter, which represents the ith client, is based on>
Figure GDA0003150265940000078
Is the mini-batch sample set size, based on the ith client>
Figure GDA0003150265940000079
Is the actual iteration time of the ith client, i.e., the upload time minus the transmit time, i.e., ->
Figure GDA00031502659400000710
Figure GDA00031502659400000716
Represents the moment of reception, send, of the server receiving the i-th client cumulative gradient t And the representation server sends the global model parameters and the small batch sample set size to all the clients at the sending time, t is the current round, s represents any round, n is a natural number, and i represents the ith client.
Alternatively, referring to fig. 3, the step S24 includes the following sub-steps:
s241: acquiring target iteration estimation time;
s242: and generating the small batch sample set size of the client according to the target iteration estimation time.
Optionally, in step S242, the formula for generating the small batch sample set size of the client according to the target iteration estimation time is as follows:
Figure GDA00031502659400000711
wherein,
Figure GDA00031502659400000712
represents the size, or @, of the small batch sample set of the ith client's tth round>
Figure GDA00031502659400000713
A communication time estimation parameter representing the ith round of the client, <' >>
Figure GDA00031502659400000714
A calculated speed estimate parameter, representing the ith client's tth round, is based on>
Figure GDA0003150265940000081
Represents the target iteration estimate time, β, of the t-th round 0 An initial value representing the size of the sample set for the small lot. Here, the formula uses the trinocular operator to represent a conditional operation, i.e., if t =0! Then>
Figure GDA0003150265940000082
Otherwise->
Figure GDA0003150265940000083
Alternatively, referring to fig. 4, the step S26 includes the following sub-steps:
s261: adding one to the local iteration times, judging whether the local iteration times reach local preset iteration times, and if so, entering step S262; otherwise, go to step S263;
s262: uploading the accumulated gradient of the client in the local iteration process to the server, and ending the local iteration operation of the client;
s263: randomly selecting a small batch sample set from the small batch sample set;
s264: calculating a descending gradient and updating local model parameters according to the small-batch sample set;
s265: and accumulating and calculating the descending gradient parameters to obtain the accumulated gradient of the client and returning to the step S261.
Optionally, in step S264, calculating a gradient of descent and updating local model parameters according to the small batch sample set includes:
Figure GDA0003150265940000084
/>
wherein the gradient is
Figure GDA0003150265940000085
After substituting sample ξ for the neural network penalty function, the penalty function will ≦ the local model parameter>
Figure GDA0003150265940000086
Is based on the partial derivative of (4)>
Figure GDA0003150265940000088
For small batches of sample sets>
Figure GDA0003150265940000087
The small batch sample set size of the t-th round is represented, k represents the number of rounds of the current local iteration, and i represents the ith client.
Optionally, between step S28 and step S29, further comprising:
updating the global model parameters according to the accumulated gradient received in the server; and
and selecting the next round of target iteration estimation time according to the calculation speed estimation parameter and the communication time estimation parameter of the client.
Optionally, the updating the global model parameter according to the receiving time of the accumulated gradient in the server includes:
Figure GDA0003150265940000091
wherein w t+1 Representing the t +1 th round global model parameter, w t Representing the t-th round global modelParameter η t Representing the learning rate of the neural network training, N is the total number of participating clients,
Figure GDA0003150265940000092
representing the cumulative gradient;
the selecting the next round of target iteration estimation time according to the calculation speed estimation parameter and the communication time estimation parameter of the client comprises the following steps:
Figure GDA0003150265940000093
wherein,
Figure GDA0003150265940000094
represents the target iterative estimation time, beta, of the t +1 th round min A predetermined minimum value, representing the size of the small batch of sample sets>
Figure GDA0003150265940000095
A calculated speed estimation parameter, representing the ith client round t +1, <' > based on the number of times>
Figure GDA0003150265940000096
And (3) representing the communication time estimation parameter of the ith client round t + 1.
In the method provided by the invention, the server collects all necessary information and updates of the clients through push operation to estimate the computing power and communication power of each client. According to the estimation, the server calculates an appropriate mini-batch size for each client before sharing the new model. The results are then transmitted back to the client through a pull operation along with the shared model. This process is performed at each iteration in order to accommodate dynamic changes in the edge environment.
Specifically, the present invention runs two algorithms on the server and the client, respectively. After the training tasks are assigned to the server and the available clients, they will start the respective algorithms simultaneously.
In the client algorithm, client i first performs an algorithmAnd (4) initializing the method, wherein the global step t is set to be 0 in the initialization. And the client repeatedly executes pull operation, gradient calculation and push operation to provide local update for the server to perform gradient aggregation and parameter update until the global iteration number exceeds the global preset iteration number T. In each iteration, the client compares the local step size k and the accumulated gradient
Figure GDA0003150265940000097
Is set to 0. In a pull operation, except for the global parameter w t In addition, the transmitted data also includes a value->
Figure GDA0003150265940000098
This value specifies the mini-batch size that shall be used by client i in the t-th global step. The client will block until the data extracted in the server is available. Thereafter, the client will have the local parameter->
Figure GDA0003150265940000099
Is set as a global parameter w t The value of (c). In the gradient calculation, the client repeatedly accumulates the local gradient until the local iteration number exceeds the local preset iteration number K. At each iteration, the client will be in its local dataset D i In which a size is selected randomly>
Figure GDA0003150265940000101
Is selected as the small batch>
Figure GDA0003150265940000102
A local gradient is then calculated based on the selected mini-batch>
Figure GDA0003150265940000103
The calculated gradient is not only accumulated in->
Figure GDA0003150265940000104
But also to local parameters. Finally, the local step k is increased by 1. After performing gradient calculation, the client will ≥>
Figure GDA0003150265940000105
Push to the server and increase the global step t by 1.
In the server algorithm, when the algorithm is started, the server executes initialization, and a global step length t and an initial global parameter w are used during initialization 0 Set to 0 and random values, respectively. And the server repeatedly executes the sending operation, the receiving operation, the gradient aggregation and the resource estimation until the global iteration times exceed the global preset iteration times T. In a sending operation, the server sends a global parameter w for each available client t And mini-batch size
Figure GDA0003150265940000106
Using estimated calculation and communication resource parameters, i.e. <' >>
Figure GDA0003150265940000107
And &>
Figure GDA0003150265940000108
If the global step t is not 0, the mini-batch size is set to the initial value beta 0 . The transmission time is recorded as send in the global step t t . In a receive operation, the server iteratively attempts to receive accumulated gradients ≦ from each available client>
Figure GDA0003150265940000109
And records the time of receipt as->
Figure GDA00031502659400001010
The receive operation will be blocked until all updates are received. In gradient aggregation, the server will aggregate all accumulated gradients collected from the client and use the results and corresponding learning rates η t And updating the global parameters. In resource estimation, the server estimates the computation and communication resource parameters for each available client, i.e. < >>
Figure GDA00031502659400001011
And &>
Figure GDA00031502659400001012
The estimation utilizes a least squares approach and uses the mini-batch size used and the corresponding time consumption in past iterations. We can then obtain a minimum time consumption estimate for each available client in the next iteration. Finally, the server increases the global step t by 1.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. The heterogeneous scenario-oriented federal learning training acceleration method is characterized by comprising the following steps:
s1: distributing the training tasks to the server and the client;
s2: according to the training task, a client algorithm and a server algorithm are operated to obtain a client operation result and a server operation result; the step S2 comprises the following sub-steps:
s21: initializing global iteration times and global model parameters;
s22: iterating the global model parameters to generate new global model parameters;
s23: adding one to the overall iteration times, judging whether the overall iteration times reach overall preset iteration times, and if so, ending the operation of the client algorithm and the server algorithm; otherwise, go to step S24;
s24: obtaining the size of a small batch sample set of the client according to the target iterative estimation time;
s25: sending the small batch sample set to the client and recording the sending time;
s26: performing local iterative operation of the client by using the small batch sample set size and the new global model parameter to obtain the cumulative gradient of the client;
s27: sending the accumulated gradient to a server;
s28: recording the receiving time of the accumulated gradient in a server, and obtaining a calculation speed estimation parameter and a communication time estimation parameter of the client according to the sending time and the receiving time;
s29: and selecting target iterative estimation time according to the speed estimation parameter and the communication time estimation parameter, and returning to the step S23.
2. The method for accelerating training of federal learning oriented to a heterogeneous scenario according to claim 1, wherein in S28, the calculation speed estimation parameters of the client obtained according to the sending time and the receiving time are:
Figure FDA0004083907610000011
wherein,
Figure FDA0004083907610000021
the t +1 th round of calculating the speed estimate parameter, which represents the ith client, is based on>
Figure FDA0004083907610000022
Is the mini-batch sample set size of the ith client, in>
Figure FDA0004083907610000023
Is the actual iteration time of the ith client, i.e. the uploading time minus the sending time, i.e. the
Figure FDA0004083907610000024
Representing the moment of reception, send, of the server receiving the ith client cumulative gradient t The sending time of the server for sending the global model parameters and the small batch sample set to all the clients is represented, t is the current round, s represents any round, n is a natural number, and i represents the ith client。
3. The heterogeneous scenario-oriented federated learning training acceleration method according to claim 1, wherein the communication time estimation parameters of the client obtained according to the sending time and the receiving time are:
Figure FDA0004083907610000025
wherein,
Figure FDA0004083907610000026
a t +1 th round communication time estimate parameter, representing an ith client, based on the time estimate, and->
Figure FDA0004083907610000027
A Tth round of calculation of a speed estimate parameter, representing an ith client, based on the comparison of the Tth round of calculation of the speed estimate parameter, the Tth round of calculation of the Tth client>
Figure FDA0004083907610000028
Is the mini-batch sample set size of the ith client, in>
Figure FDA0004083907610000029
Is the actual iteration time of the ith client, i.e., the upload time minus the transmit time, i.e., ->
Figure FDA00040839076100000210
Representing the moment of reception, send, of the server receiving the ith client cumulative gradient t And the representing server sends the global model parameters and the small-batch sample set size to the sending time of all the clients, t is the current round, s represents any round, n is a natural number, and i represents the ith client.
4. The heterogeneous scenario-oriented federated learning training acceleration method according to claim 1, wherein the step S24 includes the following substeps:
s241: acquiring target iterative estimation time;
s242: and generating the small batch sample set size of the client according to the target iterative estimation time.
5. The method as claimed in claim 4, wherein in step S242, the generating a small batch sample set size of the client according to the target iterative estimation time is:
Figure FDA00040839076100000211
wherein,
Figure FDA0004083907610000031
represents the size, or @, of the small batch sample set of the ith client's tth round>
Figure FDA0004083907610000032
A communication time estimation parameter representing the ith round of the client, <' >>
Figure FDA0004083907610000033
A calculated speed estimate parameter representative of a tth round of an ith client, <' > based on a comparison of a time period between the time period and the number of previous rounds in the past>
Figure FDA0004083907610000034
Represents the target iteration estimate time, β, of the t-th round 0 An initial value representing the size of the sample set for the small lot.
6. The heterogeneous scenario-oriented federated learning training acceleration method according to claim 1, wherein the step S26 includes the following substeps:
s261: adding one to the local iteration times, judging whether the local iteration times reach local preset iteration times, and if so, entering step S262; otherwise, go to step S263;
s262: uploading the accumulated gradient of the client in the local iteration process to the server, and ending the local iteration operation of the client;
s263: randomly selecting a small batch sample set with the size of the small batch sample set from a local data set of a client;
s264: calculating a descending gradient and updating local model parameters according to the small-batch sample set;
s265: and accumulating and calculating the descending gradient to obtain an accumulated gradient of the client and returning to the step S261.
7. The method as claimed in claim 6, wherein in step S264, according to the small batch sample set, calculating a descent gradient and updating local model parameters are as follows:
Figure FDA0004083907610000035
wherein the gradient is
Figure FDA0004083907610000036
After substituting sample ξ for the neural network penalty function, the penalty function will ≦ the local model parameter>
Figure FDA0004083907610000037
Is based on the partial derivative of (4)>
Figure FDA0004083907610000038
For small batches of sample sets, based on the number of samples collected>
Figure FDA0004083907610000039
The small batch sample set size of the t-th round is represented, k represents the number of rounds of the current local iteration, and i represents the ith client.
8. The method for accelerating the training of the federal learning oriented in a heterogeneous scenario according to claim 2, wherein between step S28 and step S29, further comprising:
updating the global model parameters according to the accumulated gradient received in the server; and
and selecting the target iterative estimation time of the next round according to the calculation speed estimation parameter and the communication time estimation parameter of the client.
9. The method for accelerating training of federal learning oriented to a heterogeneous scenario according to claim 8, wherein the formula for updating the global model parameters according to the receiving time of the accumulated gradient in the server is as follows:
Figure FDA0004083907610000041
wherein w t+1 Representing the t +1 th round global model parameter, w t Representing the t-th round global model parameter, η t Representing the learning rate of the neural network training, N is the total number of participating clients,
Figure FDA0004083907610000042
representing the cumulative gradient; />
The formula for selecting the next round of target iterative estimation time according to the calculation speed estimation parameter and the communication time estimation parameter of the client is as follows:
Figure FDA0004083907610000043
wherein,
Figure FDA0004083907610000044
represents the target iterative estimation time, beta, of the t +1 th round min A preset minimum value representing the size of the sample set for the small lot,
Figure FDA0004083907610000045
a calculated speed estimation parameter, representing the ith client round t +1, <' > based on the number of times>
Figure FDA0004083907610000046
And (4) representing the communication time estimation parameter of the t +1 th round of the ith client. />
CN202110661958.5A 2021-06-15 2021-06-15 Heterogeneous scene-oriented federal learning training acceleration method Active CN113391897B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110661958.5A CN113391897B (en) 2021-06-15 2021-06-15 Heterogeneous scene-oriented federal learning training acceleration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110661958.5A CN113391897B (en) 2021-06-15 2021-06-15 Heterogeneous scene-oriented federal learning training acceleration method

Publications (2)

Publication Number Publication Date
CN113391897A CN113391897A (en) 2021-09-14
CN113391897B true CN113391897B (en) 2023-04-07

Family

ID=77621461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110661958.5A Active CN113391897B (en) 2021-06-15 2021-06-15 Heterogeneous scene-oriented federal learning training acceleration method

Country Status (1)

Country Link
CN (1) CN113391897B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115496204B (en) * 2022-10-09 2024-02-02 南京邮电大学 Federal learning-oriented evaluation method and device under cross-domain heterogeneous scene

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10769529B2 (en) * 2018-12-04 2020-09-08 Google Llc Controlled adaptive optimization
US11443240B2 (en) * 2019-09-06 2022-09-13 Oracle International Corporation Privacy preserving collaborative learning with domain adaptation
CN111008709A (en) * 2020-03-10 2020-04-14 支付宝(杭州)信息技术有限公司 Federal learning and data risk assessment method, device and system
CN111444021B (en) * 2020-04-02 2023-03-24 电子科技大学 Synchronous training method, server and system based on distributed machine learning
CN111522669A (en) * 2020-04-29 2020-08-11 深圳前海微众银行股份有限公司 Method, device and equipment for optimizing horizontal federated learning system and readable storage medium
CN111708640A (en) * 2020-06-23 2020-09-25 苏州联电能源发展有限公司 Edge calculation-oriented federal learning method and system
CN112734000A (en) * 2020-11-11 2021-04-30 江西理工大学 Intrusion detection method, system, equipment and readable storage medium
CN112532451B (en) * 2020-11-30 2022-04-26 安徽工业大学 Layered federal learning method and device based on asynchronous communication, terminal equipment and storage medium

Also Published As

Publication number Publication date
CN113391897A (en) 2021-09-14

Similar Documents

Publication Publication Date Title
CN113435604B (en) Federal learning optimization method and device
CN113222179B (en) Federal learning model compression method based on model sparsification and weight quantification
Lee et al. Adaptive transmission scheduling in wireless networks for asynchronous federated learning
CN113010305B (en) Federal learning system deployed in edge computing network and learning method thereof
CN111708640A (en) Edge calculation-oriented federal learning method and system
WO2019184836A1 (en) Data analysis device, and multi-model co-decision system and method
CN113469325A (en) Layered federated learning method, computer equipment and storage medium for edge aggregation interval adaptive control
CN114169543B (en) Federal learning method based on model staleness and user participation perception
CN113391897B (en) Heterogeneous scene-oriented federal learning training acceleration method
Li et al. Privacy-preserving communication-efficient federated multi-armed bandits
CN116471286A (en) Internet of things data sharing method based on block chain and federal learning
CN116702881A (en) Multilayer federal learning scheme based on sampling aggregation optimization
CN114375050A (en) Digital twin-assisted 5G power distribution network resource scheduling method
CN111343006B (en) CDN peak flow prediction method, device and storage medium
Li et al. Model-distributed dnn training for memory-constrained edge computing devices
CN110929885A (en) Smart campus-oriented distributed machine learning model parameter aggregation method
CN117076132B (en) Resource allocation and aggregation optimization method and device for hierarchical federal learning system
CN117114113B (en) Collaborative reasoning acceleration method based on queuing theory
WO2024108601A2 (en) Terminal selection method, apparatus and system, and model training method, apparatus and system
CN115115064B (en) Semi-asynchronous federal learning method and system
Zhang et al. Improving the accuracy of load forecasting for campus buildings based on federated learning
CN115118591B (en) Cluster federation learning method based on alliance game
Zhang et al. RTCoInfer: Real-time collaborative CNN inference for stream analytics on ubiquitous images
WO2023175381A1 (en) Iterative training of collaborative distributed coded artificial intelligence model
He et al. Client selection and resource allocation for federated learning in digital-twin-enabled industrial Internet of Things

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant