CN113391897A - Heterogeneous scene-oriented federated learning training acceleration method - Google Patents
- Publication number: CN113391897A
- Application number: CN202110661958.5A
- Authority
- CN
- China
- Prior art keywords: client, time, round, sample set, server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F9/4843 — Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/5027 — Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
- G06F9/5066 — Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
- G06N20/00 — Machine learning
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a federated learning training acceleration method for heterogeneous scenarios, which comprises the following steps. S1: distributing a training task to the server and the clients. S2: running a client algorithm and a server algorithm according to the training task to obtain client and server running results. The federated learning training acceleration method for heterogeneous scenarios provided by the invention can solve the problem of low synchronization efficiency in existing federated learning.
Description
Technical Field
The invention relates to the technical field of machine learning, and in particular to a federated learning training acceleration method for heterogeneous scenarios.
Background
Driven by resource and privacy considerations, the big-data era has witnessed the rise of Federated Learning (FL). As a distributed paradigm, shown in fig. 1, federated learning has gradually replaced traditional centralized systems to realize Artificial Intelligence (AI) at the network edge. In federated learning, each client trains a local model using its own collected data without sharing the raw data with other clients. Clients with common interests can federate to obtain a shared model by periodically synchronizing their local parameters under the coordination of a central server. However, due to the heterogeneity and dynamics of the edge environment, federated learning may encounter the straggler problem (i.e., the slowest client in a round forces all other clients to wait), as shown in fig. 2, so that synchronization between clients becomes inefficient, which slows convergence and prolongs the learning process.
Disclosure of Invention
The invention aims to provide a federated learning training acceleration method for heterogeneous scenarios to solve the problem of low synchronization efficiency in existing federated learning.
The technical scheme for solving the technical problems is as follows:
the invention provides an adaptive-batch-size synchronous parallel training method; the federated learning training acceleration method for heterogeneous scenarios comprises the following steps:
s1: distributing the training tasks to the server and the client;
s2: and running a client algorithm and a server algorithm according to the training task to obtain a client running result and a server running result.
Alternatively, the step S2 includes the following substeps:
s21: initializing global iteration times and global model parameters;
s22: iterating the global model parameters to generate new global model parameters;
s23: adding one to the overall iteration times, judging whether the overall iteration times reach overall preset iteration times, and if so, ending the operation of the client algorithm and the server algorithm; otherwise, go to step S24;
s24: obtaining the size of a small batch sample set of the client according to the target iterative estimation time;
s25: sending the small batch sample set to the client and recording the sending time;
s26: carrying out local iterative operation of the client by using the small batch sample set size and the new global model parameter to obtain the cumulative gradient of the client;
s27: sending the accumulated gradient to a server;
s28: recording the receiving time of the accumulated gradient in a server, and obtaining a calculation speed estimation parameter and a communication time estimation parameter of the client according to the sending time and the receiving time;
s29: selecting the target iteration estimation time according to the calculation speed estimation parameter and the communication time estimation parameter, and returning to step S23.
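As an illustrative sketch (not part of the patent), steps S21-S29 can be simulated end-to-end in Python. The simulated clients, the probe batch sizes used in the first two rounds, the two-point fit of the time model, and all names are assumptions for illustration; the method itself estimates the parameters with a least-squares fit over past rounds.

```python
# End-to-end sketch of one training schedule (S21-S29), with simulated
# clients whose iteration time follows time = c * batch + d.
def run_rounds(clients, rounds, beta0, beta_min):
    """clients: {i: (true_c, true_d)}; returns the last round's batch sizes."""
    est = {}                                   # i -> (c_est, d_est)
    history = {i: [] for i in clients}         # (batch, elapsed) pairs
    target = None
    batches = {}
    for t in range(rounds):
        for i, (c, d) in clients.items():
            if i not in est:                   # S24: probe with varying sizes
                batches[i] = beta0 * (t + 1)
            else:                              # S24: size from the target time
                ce, de = est[i]
                batches[i] = max(beta_min, int((target - de) / ce))
            elapsed = c * batches[i] + d       # S25-S27: simulated round trip
            history[i].append((batches[i], elapsed))
            if len(history[i]) == 2:           # S28: fit time = c*batch + d
                (x1, y1), (x2, y2) = history[i]
                ce = (y2 - y1) / (x2 - x1)
                est[i] = (ce, y1 - ce * x1)
        if len(est) == len(clients):
            # S29: target = slowest client's minimum achievable iteration time
            target = max(ce * beta_min + de for ce, de in est.values())
    return batches
```

In this toy run, `run_rounds({0: (0.25, 1.0), 1: (0.5, 1.0)}, 3, 32, 16)` assigns batch sizes `{0: 32, 1: 16}`, so both clients finish in the same 9.0 time units (0.25·32 + 1 = 0.5·16 + 1): the straggler waiting the method targets is eliminated.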
Optionally, in S28, the calculation speed estimation parameter of the client is obtained from the sending time and the receiving time as:

$$c_i^{t+1}=\frac{n\sum_{s}\beta_i^{s}\tau_i^{s}-\sum_{s}\beta_i^{s}\sum_{s}\tau_i^{s}}{n\sum_{s}\left(\beta_i^{s}\right)^{2}-\left(\sum_{s}\beta_i^{s}\right)^{2}},\qquad \tau_i^{s}=recv_i^{s}-send_{s}$$

wherein $c_i^{t+1}$ denotes the round-$(t+1)$ calculation speed estimation parameter of the $i$-th client, $\beta_i^{s}$ is the small-batch sample set size of the $i$-th client in round $s$, $\tau_i^{s}$ is the actual iteration time of the $i$-th client, i.e. the uploading time minus the sending time, $recv_i^{s}$ denotes the reception time at which the server receives the $i$-th client's accumulated gradient, $send_{s}$ denotes the sending time at which the server sends the global model parameters and small-batch sample set sizes to all clients, $t$ is the current round, $s$ ranges over the $n$ most recent rounds, $n$ is a natural number, and $i$ denotes the $i$-th client.
Optionally, the communication time estimation parameter of the client is obtained from the sending time and the receiving time as:

$$d_i^{t+1}=\frac{1}{n}\left(\sum_{s}\tau_i^{s}-c_i^{t+1}\sum_{s}\beta_i^{s}\right)$$

wherein $d_i^{t+1}$ denotes the round-$(t+1)$ communication time estimation parameter of the $i$-th client, $c_i^{t+1}$ denotes the round-$(t+1)$ calculation speed estimation parameter of the $i$-th client, $\beta_i^{s}$ is the small-batch sample set size of the $i$-th client in round $s$, $\tau_i^{s}=recv_i^{s}-send_{s}$ is the actual iteration time of the $i$-th client, i.e. the uploading time minus the sending time, $recv_i^{s}$ denotes the reception time at which the server receives the $i$-th client's accumulated gradient, $send_{s}$ denotes the sending time at which the server sends the global model parameters and small-batch sample set sizes to all clients, $t$ is the current round, $s$ ranges over the $n$ most recent rounds, $n$ is a natural number, and $i$ denotes the $i$-th client.
Alternatively, the step S24 includes the following substeps:
s241: acquiring target iteration estimation time;
s242: and generating the small batch sample set size of the client according to the target iteration estimation time.
Optionally, in step S242, the small-batch sample set size of the client is generated from the target iteration estimation time as:

$$\beta_i^{t}=\begin{cases}\beta_0, & t=0\\ \left\lfloor\left(T^{t}-d_i^{t}\right)/c_i^{t}\right\rfloor, & t>0\end{cases}$$

wherein $\beta_i^{t}$ denotes the small-batch sample set size of the $i$-th client in round $t$, $d_i^{t}$ denotes the communication time estimation parameter of the $i$-th client in round $t$, $c_i^{t}$ denotes the calculation speed estimation parameter of the $i$-th client in round $t$, $T^{t}$ denotes the target iteration estimation time of round $t$, and $\beta_0$ denotes the initial value of the small-batch sample set size.
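A minimal sketch of this sizing rule, assuming `c` is the estimated per-sample computation time and `d` the estimated communication time for one client (the function name and argument layout are illustrative):

```python
# Step S242: the largest batch the client can process within the target
# iteration time, with a fallback to beta0 in the first round when no
# resource estimates exist yet.
def minibatch_size(t, target_time, c, d, beta0):
    if t == 0:
        return beta0                       # first round: no estimates yet
    return int((target_time - d) / c)      # floor of the achievable batch size
```

For example, with a per-sample time of 0.25 s, a communication time of 0.5 s, and an 8.5 s target, a client in round 3 is assigned `int((8.5 - 0.5) / 0.25)`, i.e. 32 samples.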
Alternatively, the step S26 includes the following substeps:
s261: adding one to the local iteration times, judging whether the local iteration times reach local preset iteration times, and if so, entering step S262; otherwise, go to step S263;
s262: uploading the accumulated gradient of the client in the local iteration process to the server, and finishing the local iteration operation of the client;
s263: randomly selecting, from the local data set of the client, a small-batch sample set of the given small-batch sample set size;
s264: calculating a descending gradient and updating local model parameters according to the small-batch sample set;
s265: and accumulating and calculating the descending gradient to obtain an accumulated gradient of the client and returning to the step S261.
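Steps S261-S265 can be sketched on a toy one-parameter least-squares model; the dataset, the loss function f(w; x, y) = (w·x − y)², and the learning rate are illustrative assumptions, not part of the patent text.

```python
import random

def local_iterations(w, data, batch_size, K, lr):
    """Run K local steps; return updated params and the accumulated gradient."""
    accumulated = 0.0
    for _ in range(K):                               # S261: local step counter
        batch = random.sample(data, batch_size)      # S263: random mini-batch
        # S264: mean gradient of (w*x - y)^2 over the mini-batch
        grad = sum(2 * (w * x - y) * x for x, y in batch) / batch_size
        w -= lr * grad                               # S264: local update
        accumulated += grad                          # S265: accumulate gradient
    return w, accumulated                            # S262: value pushed to server
```

A single step on the dataset `[(1.0, 2.0)] * 4` with `w = 0` and `lr = 0.1` yields a gradient of −4.0 and an updated parameter of 0.4.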
Optionally, in step S264, the descent gradient is calculated and the local model parameters are updated from the small-batch sample set as:

$$g_i^{t,k}=\frac{1}{\beta_i^{t}}\sum_{\xi\in\mathcal{B}_i^{t,k}}\nabla_{w}f\!\left(w_i^{t,k};\xi\right),\qquad w_i^{t,k+1}=w_i^{t,k}-\eta_t\,g_i^{t,k}$$

wherein the gradient $\nabla_{w}f(w_i^{t,k};\xi)$ is the partial derivative of the neural network loss function $f$ with respect to the local model parameters $w_i^{t,k}$ after substituting the sample $\xi$, $\mathcal{B}_i^{t,k}$ is the small-batch sample set, $\beta_i^{t}$ denotes the round-$t$ small-batch sample set size, $k$ denotes the current local iteration round, and $i$ denotes the $i$-th client.
Optionally, between the step S28 and the step S29, further comprising:
updating the global model parameters according to the accumulated gradient received in the server; and
and selecting the next round of target iteration estimation time according to the calculation speed estimation parameter and the communication time estimation parameter of the client.
Optionally, the formula for updating the global model parameters according to the accumulated gradients received by the server is:

$$w_{t+1}=w_{t}-\frac{\eta_t}{N}\sum_{i=1}^{N}G_i^{t}$$

wherein $w_{t+1}$ denotes the round-$(t+1)$ global model parameters, $w_{t}$ denotes the round-$t$ global model parameters, $\eta_t$ denotes the learning rate of the neural network training, $N$ is the total number of participating clients, and $G_i^{t}$ denotes the accumulated gradient of the $i$-th client;
the formula for selecting the next round's target iteration estimation time according to the calculation speed estimation parameters and the communication time estimation parameters of the clients is:

$$T^{t+1}=\max_{1\le i\le N}\left(c_i^{t+1}\beta_{\min}+d_i^{t+1}\right)$$

wherein $T^{t+1}$ denotes the target iteration estimation time of round $t+1$, $\beta_{\min}$ denotes the preset minimum small-batch sample set size, $c_i^{t+1}$ denotes the round-$(t+1)$ calculation speed estimation parameter of the $i$-th client, and $d_i^{t+1}$ denotes the round-$(t+1)$ communication time estimation parameter of the $i$-th client; each term $c_i^{t+1}\beta_{\min}+d_i^{t+1}$ is the minimum time consumption estimate of client $i$ in the next iteration, so taking the maximum ensures every client can process at least $\beta_{\min}$ samples within the target time.
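The two server-side rules above can be sketched as follows, with plain Python lists standing in for model tensors; all names are illustrative.

```python
# Average the clients' accumulated gradients into the global parameters,
# then pick the next target time as the slowest client's minimum
# achievable iteration time under the fitted model time = c*batch + d.
def aggregate(w, grads, lr):
    """w: parameter list; grads: one accumulated-gradient list per client."""
    n = len(grads)
    return [wj - lr / n * sum(g[j] for g in grads) for j, wj in enumerate(w)]

def next_target_time(c, d, beta_min):
    """c[i], d[i]: per-client compute-speed / communication-time estimates."""
    return max(ci * beta_min + di for ci, di in zip(c, d))
```

For instance, averaging gradients `[2.0, 0.0]` and `[4.0, 0.0]` into `[1.0, 1.0]` with a learning rate of 0.5 gives `[-0.5, 1.0]`, and two clients with (c, d) of (0.25, 1.0) and (0.5, 1.0) at a minimum batch of 16 yield a target time of 9.0.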
The invention has the following beneficial effects:
according to the technical scheme, namely the federal learning training acceleration method for the heterogeneous scene, on one hand, the small-batch sample set size (mini-batch) of each client is adaptively adjusted through continuous estimation of calculation and communication resources to solve the synchronous problem of federal learning in the heterogeneous and dynamic environments; on the other hand, processing time differences between all participating clients are minimized by adaptively adjusting the hyper-parameters, thereby reducing synchronization delay and improving training efficiency.
Drawings
Fig. 1 is a flowchart of a federated learning training acceleration method for heterogeneous scenarios according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating the substeps of step S2 in FIG. 1;
FIG. 3 is a flowchart illustrating the substeps of step S24 in FIG. 2;
fig. 4 is a flowchart illustrating a substep of step S26 in fig. 2.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Examples
The technical scheme for solving the technical problems is as follows:
the invention provides a heterogeneous scene-oriented federal learning training acceleration method, which comprises the following steps of:
s1: distributing the training tasks to the server and the client;
s2: and running a client algorithm and a server algorithm according to the training task to obtain a client running result and a server running result.
The invention has the following beneficial effects:
according to the technical scheme, namely the federated learning training acceleration method for the heterogeneous scene, provided by the invention, on one hand, the synchronous problem of federated learning in heterogeneous and dynamic environments is solved by continuously estimating calculation and communication resources to adaptively adjust the small-batch sample set size (namely mini-batch, the same below) of each client; on the other hand, processing time differences between all participating clients are minimized by adaptively adjusting the hyper-parameters, thereby reducing synchronization delay and improving training efficiency.
Alternatively, referring to fig. 2, the step S2 includes the following sub-steps:
s21: initializing global iteration times and global model parameters;
s22: iterating the global model parameters to generate new global model parameters;
s23: adding one to the overall iteration times, judging whether the overall iteration times reach overall preset iteration times, and if so, ending the operation of the client algorithm and the server algorithm; otherwise, go to step S24;
s24: obtaining the size of a small batch sample set of the client according to the target iterative estimation time;
s25: sending the small batch sample set to the client and recording the sending time;
s26: carrying out local iterative operation of the client by using the small batch sample set size and the new global model parameter to obtain the cumulative gradient of the client;
s27: sending the accumulated gradient to a server;
s28: recording the receiving time of the accumulated gradient in a server, and obtaining a calculation speed estimation parameter and a communication time estimation parameter of the client according to the sending time and the receiving time;
s29: selecting the target iteration estimation time according to the calculation speed estimation parameter and the communication time estimation parameter, and returning to step S23.
Optionally, in S28, the calculation speed estimation parameter of the client is obtained from the sending time and the receiving time as:

$$c_i^{t+1}=\frac{n\sum_{s}\beta_i^{s}\tau_i^{s}-\sum_{s}\beta_i^{s}\sum_{s}\tau_i^{s}}{n\sum_{s}\left(\beta_i^{s}\right)^{2}-\left(\sum_{s}\beta_i^{s}\right)^{2}},\qquad \tau_i^{s}=recv_i^{s}-send_{s}$$

wherein $c_i^{t+1}$ denotes the round-$(t+1)$ calculation speed estimation parameter of the $i$-th client, $\beta_i^{s}$ is the small-batch sample set size of the $i$-th client in round $s$, $\tau_i^{s}$ is the actual iteration time of the $i$-th client, i.e. the uploading time minus the sending time, $recv_i^{s}$ denotes the reception time at which the server receives the $i$-th client's accumulated gradient, $send_{s}$ denotes the sending time at which the server sends the global model parameters and small-batch sample set sizes to all clients, $t$ is the current round, $s$ ranges over the $n$ most recent rounds, $n$ is a natural number, and $i$ denotes the $i$-th client.
Optionally, the communication time estimation parameter of the client is obtained from the sending time and the receiving time as:

$$d_i^{t+1}=\frac{1}{n}\left(\sum_{s}\tau_i^{s}-c_i^{t+1}\sum_{s}\beta_i^{s}\right)$$

wherein $d_i^{t+1}$ denotes the round-$(t+1)$ communication time estimation parameter of the $i$-th client, $c_i^{t+1}$ denotes the round-$(t+1)$ calculation speed estimation parameter of the $i$-th client, $\beta_i^{s}$ is the small-batch sample set size of the $i$-th client in round $s$, $\tau_i^{s}=recv_i^{s}-send_{s}$ is the actual iteration time of the $i$-th client, i.e. the uploading time minus the sending time, $recv_i^{s}$ denotes the reception time at which the server receives the $i$-th client's accumulated gradient, $send_{s}$ denotes the sending time at which the server sends the global model parameters and small-batch sample set sizes to all clients, $t$ is the current round, $s$ ranges over the $n$ most recent rounds, $n$ is a natural number, and $i$ denotes the $i$-th client.
Alternatively, referring to fig. 3, the step S24 includes the following sub-steps:
s241: acquiring target iteration estimation time;
s242: and generating the small batch sample set size of the client according to the target iteration estimation time.
Optionally, in step S242, the formula for generating the small-batch sample set size of the client according to the target iteration estimation time is:

$$\beta_i^{t}=\begin{cases}\beta_0, & t=0\\ \left\lfloor\left(T^{t}-d_i^{t}\right)/c_i^{t}\right\rfloor, & t>0\end{cases}$$

wherein $\beta_i^{t}$ denotes the small-batch sample set size of the $i$-th client in round $t$, $d_i^{t}$ denotes the communication time estimation parameter of the $i$-th client in round $t$, $c_i^{t}$ denotes the calculation speed estimation parameter of the $i$-th client in round $t$, $T^{t}$ denotes the target iteration estimation time of round $t$, and $\beta_0$ denotes the initial value of the small-batch sample set size. Here the formula is a conditional expression using the ternary operator: if $t=0$, then $\beta_i^{t}=\beta_0$; otherwise $\beta_i^{t}=\lfloor(T^{t}-d_i^{t})/c_i^{t}\rfloor$.
Alternatively, referring to fig. 4, the step S26 includes the following sub-steps:
s261: adding one to the local iteration times, judging whether the local iteration times reach local preset iteration times, and if so, entering step S262; otherwise, go to step S263;
s262: uploading the accumulated gradient of the client in the local iteration process to the server, and finishing the local iteration operation of the client;
s263: randomly selecting, from the local data set of the client, a small-batch sample set of the given small-batch sample set size;
s264: calculating a descending gradient and updating local model parameters according to the small-batch sample set;
s265: accumulating the calculated descent gradients to obtain the accumulated gradient of the client and returning to step S261.
Optionally, in step S264, calculating the descent gradient and updating the local model parameters according to the small-batch sample set comprises:

$$g_i^{t,k}=\frac{1}{\beta_i^{t}}\sum_{\xi\in\mathcal{B}_i^{t,k}}\nabla_{w}f\!\left(w_i^{t,k};\xi\right),\qquad w_i^{t,k+1}=w_i^{t,k}-\eta_t\,g_i^{t,k}$$

wherein the gradient $\nabla_{w}f(w_i^{t,k};\xi)$ is the partial derivative of the neural network loss function $f$ with respect to the local model parameters $w_i^{t,k}$ after substituting the sample $\xi$, $\mathcal{B}_i^{t,k}$ is the small-batch sample set, $\beta_i^{t}$ denotes the round-$t$ small-batch sample set size, $k$ denotes the current local iteration round, and $i$ denotes the $i$-th client.
Optionally, between the step S28 and the step S29, further comprising:
updating the global model parameters according to the accumulated gradient received in the server; and
and selecting the next round of target iteration estimation time according to the calculation speed estimation parameter and the communication time estimation parameter of the client.
Optionally, the updating of the global model parameters according to the accumulated gradients received by the server comprises:

$$w_{t+1}=w_{t}-\frac{\eta_t}{N}\sum_{i=1}^{N}G_i^{t}$$

wherein $w_{t+1}$ denotes the round-$(t+1)$ global model parameters, $w_{t}$ denotes the round-$t$ global model parameters, $\eta_t$ denotes the learning rate of the neural network training, $N$ is the total number of participating clients, and $G_i^{t}$ denotes the accumulated gradient of the $i$-th client;
the selecting of the next round's target iteration estimation time according to the calculation speed estimation parameters and the communication time estimation parameters of the clients comprises:

$$T^{t+1}=\max_{1\le i\le N}\left(c_i^{t+1}\beta_{\min}+d_i^{t+1}\right)$$

wherein $T^{t+1}$ denotes the target iteration estimation time of round $t+1$, $\beta_{\min}$ denotes the preset minimum small-batch sample set size, $c_i^{t+1}$ denotes the round-$(t+1)$ calculation speed estimation parameter of the $i$-th client, and $d_i^{t+1}$ denotes the round-$(t+1)$ communication time estimation parameter of the $i$-th client.
In the method provided by the invention, the server collects all necessary information and updates of the clients through push operation to estimate the computing power and communication power of each client. According to the estimation, the server calculates an appropriate mini-batch size for each client before sharing the new model. The results are then passed back to the client through a pull operation along with the shared model. This process is performed at each iteration in order to accommodate dynamic changes in the edge environment.
Specifically, the present invention runs two algorithms on the server and the client, respectively. After the training tasks are assigned to the servers and the available clients, they will start the respective algorithms at the same time.
In the client algorithm, client i first performs initialization with the global step t set to 0. The client then repeatedly executes the pull operation, the gradient computation and the push operation to provide local updates to the server for gradient aggregation and parameter updating, until the global iteration number exceeds the global preset iteration number T. In each iteration, the client sets the local step k and the accumulated gradient $G_i^t$ to 0. In the pull operation, besides the global parameters $w_t$, the transmitted data also include a value $\beta_i^t$ that specifies the mini-batch size client i should use in global step t. The client blocks until the data published by the server are available. The client then sets its local parameters to the value of the global parameters $w_t$. In the gradient computation, the client repeatedly accumulates local gradients until the local iteration number exceeds the local preset iteration number K. At each local iteration, the client randomly selects from its local dataset $D_i$ a mini-batch $\mathcal{B}_i^{t,k}$ of size $\beta_i^t$, and then computes the local gradient $g_i^{t,k}$ on the selected mini-batch. The computed gradient is not only added to $G_i^t$ but also applied to the local parameters. Finally, the local step k is increased by 1. After the gradient computation, the client pushes $G_i^t$ to the server and increases the global step t by 1.
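The client algorithm described above can be sketched as the following loop; the `pull`/`push` callables and `local_step` are hypothetical stand-ins for the real network transport and the mini-batch gradient computation.

```python
def client_algorithm(pull, push, local_step, T, K):
    """pull() -> (w, batch_size); push(G) uploads the accumulated gradient."""
    for t in range(T):                  # global steps
        w, beta = pull()                # blocks until server data is available
        G = 0.0                         # reset the accumulated gradient
        for k in range(K):              # local iterations
            w, g = local_step(w, beta)  # one mini-batch gradient update
            G += g                      # accumulate the computed gradient
        push(G)                         # provide the local update to the server
```

Any concrete `local_step` that returns the updated parameters and the step's gradient can be plugged in; the accumulated gradient pushed per global step is the sum of the K local gradients.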
In the server algorithm, at start-up the server performs initialization, setting the global step t to 0 and the initial global parameters $w_0$ to random values. The server then repeatedly executes the sending operation, the receiving operation, gradient aggregation and resource estimation until the global iteration number exceeds the global preset iteration number T. In the sending operation, the server sends to each available client the global parameters $w_t$ and the mini-batch size $\beta_i^t$ computed from the estimated computation and communication resource parameters $c_i^t$ and $d_i^t$; if the global step t is 0, the mini-batch size is instead set to the initial value $\beta_0$. The sending time in global step t is recorded as $send_t$. In the receiving operation, the server iteratively attempts to receive the accumulated gradient $G_i^t$ from each available client and records the reception time as $recv_i^t$. The receiving operation blocks until all updates have been received. In gradient aggregation, the server aggregates all accumulated gradients collected from the clients and updates the global parameters using the result and the corresponding learning rate $\eta_t$. In resource estimation, the server estimates the computation and communication resource parameters $c_i^{t+1}$ and $d_i^{t+1}$ for each available client; the estimation uses a least-squares fit over the mini-batch sizes used and the corresponding time consumption in past iterations. A minimum time consumption estimate for each available client in the next iteration can then be obtained. Finally, the server increases the global step t by 1.
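The least-squares resource estimation described above can be sketched as follows, assuming (as the text indicates) a linear per-client time model time = c · batch_size + d fitted over past rounds; the function name and data layout are illustrative.

```python
# Ordinary least-squares fit of time = c * batch_size + d for one client,
# over the (batch_size, recv - send) pairs observed in past rounds.
def estimate_resources(history):
    """history: list of (batch_size, elapsed_time) pairs for one client."""
    n = len(history)
    sx = sum(b for b, _ in history)
    sy = sum(tau for _, tau in history)
    sxx = sum(b * b for b, _ in history)
    sxy = sum(b * tau for b, tau in history)
    c = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # per-sample compute time
    d = (sy - c * sx) / n                           # fixed communication time
    return c, d
```

For a client that took 9.0 s at batch size 32 and 17.0 s at batch size 64, the fit recovers c = 0.25 s/sample and d = 1.0 s; the client's minimum time in the next round is then estimated as c·β_min + d.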
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (10)
1. A federated learning training acceleration method for heterogeneous scenarios, characterized by comprising the following steps:
s1: distributing the training tasks to the server and the client;
s2: and running a client algorithm and a server algorithm according to the training task to obtain a client running result and a server running result.
2. The heterogeneous scenario-oriented federated learning training acceleration method according to claim 1, wherein the step S2 includes the following substeps:
s21: initializing global iteration times and global model parameters;
s22: iterating the global model parameters to generate new global model parameters;
s23: adding one to the overall iteration times, judging whether the overall iteration times reach overall preset iteration times, and if so, ending the operation of the client algorithm and the server algorithm; otherwise, go to step S24;
s24: obtaining the size of a small batch sample set of the client according to the target iterative estimation time;
s25: sending the small-batch sample set size to the client and recording the sending time;
s26: performing local iterative operation of the client by using the small batch sample set size and the new global model parameter to obtain the cumulative gradient of the client;
s27: sending the accumulated gradient to a server;
s28: recording the receiving time of the accumulated gradient in a server, and obtaining a calculation speed estimation parameter and a communication time estimation parameter of the client according to the sending time and the receiving time;
s29: selecting the target iteration estimation time according to the calculation speed estimation parameter and the communication time estimation parameter, and returning to step S23.
3. The federated learning training acceleration method for heterogeneous scenarios according to claim 2, wherein in S28 the calculation speed estimation parameter of the client is obtained from the sending time and the receiving time as:

$$c_i^{t+1}=\frac{n\sum_{s}\beta_i^{s}\tau_i^{s}-\sum_{s}\beta_i^{s}\sum_{s}\tau_i^{s}}{n\sum_{s}\left(\beta_i^{s}\right)^{2}-\left(\sum_{s}\beta_i^{s}\right)^{2}},\qquad \tau_i^{s}=recv_i^{s}-send_{s}$$

wherein $c_i^{t+1}$ denotes the round-$(t+1)$ calculation speed estimation parameter of the $i$-th client, $\beta_i^{s}$ is the small-batch sample set size of the $i$-th client in round $s$, $\tau_i^{s}$ is the actual iteration time of the $i$-th client, i.e. the uploading time minus the sending time, $recv_i^{s}$ denotes the reception time at which the server receives the $i$-th client's accumulated gradient, $send_{s}$ denotes the sending time at which the server sends the global model parameters and small-batch sample set sizes to all clients, $t$ is the current round, $s$ ranges over the $n$ most recent rounds, $n$ is a natural number, and $i$ denotes the $i$-th client.
4. The federated learning training acceleration method for heterogeneous scenarios according to claim 2, wherein the communication time estimation parameter of the client is obtained from the sending time and the receiving time as:

$$d_i^{t+1}=\frac{1}{n}\left(\sum_{s}\tau_i^{s}-c_i^{t+1}\sum_{s}\beta_i^{s}\right)$$

wherein $d_i^{t+1}$ denotes the round-$(t+1)$ communication time estimation parameter of the $i$-th client, $c_i^{t+1}$ denotes the round-$(t+1)$ calculation speed estimation parameter of the $i$-th client, $\beta_i^{s}$ is the small-batch sample set size of the $i$-th client in round $s$, $\tau_i^{s}=recv_i^{s}-send_{s}$ is the actual iteration time of the $i$-th client, i.e. the uploading time minus the sending time, $recv_i^{s}$ denotes the reception time at which the server receives the $i$-th client's accumulated gradient, $send_{s}$ denotes the sending time at which the server sends the global model parameters and small-batch sample set sizes to all clients, $t$ is the current round, $s$ ranges over the $n$ most recent rounds, $n$ is a natural number, and $i$ denotes the $i$-th client.
5. The heterogeneous scenario-oriented federated learning training acceleration method according to claim 2, wherein the step S24 includes the following substeps:
s241: acquiring target iteration estimation time;
s242: and generating the small batch sample set size of the client according to the target iteration estimation time.
6. The federated learning training acceleration method for heterogeneous scenarios according to claim 5, wherein in step S242 the small-batch sample set size of the client is generated from the target iteration estimation time as:

$$\beta_i^{t}=\begin{cases}\beta_0, & t=0\\ \left\lfloor\left(T^{t}-d_i^{t}\right)/c_i^{t}\right\rfloor, & t>0\end{cases}$$

wherein $\beta_i^{t}$ denotes the small-batch sample set size of the $i$-th client in round $t$, $d_i^{t}$ denotes the communication time estimation parameter of the $i$-th client in round $t$, $c_i^{t}$ denotes the calculation speed estimation parameter of the $i$-th client in round $t$, $T^{t}$ denotes the target iteration estimation time of round $t$, and $\beta_0$ denotes the initial value of the small-batch sample set size.
7. The heterogeneous scenario-oriented federated learning training acceleration method according to claim 2, wherein the step S26 includes the following substeps:
s261: adding one to the local iteration times, judging whether the local iteration times reach local preset iteration times, and if so, entering step S262; otherwise, go to step S263;
s262: uploading the accumulated gradient of the client in the local iteration process to the server, and ending the local iteration operation of the client;
s263: randomly selecting, from the local data set of the client, a small-batch sample set of the given small-batch sample set size;
s264: calculating a descending gradient and updating local model parameters according to the small-batch sample set;
s265: and accumulating and calculating the descending gradient to obtain an accumulated gradient of the client and returning to the step S261.
8. The federated learning training acceleration method for heterogeneous scenarios according to claim 7, wherein in step S264 the descent gradient is calculated and the local model parameters are updated from the small-batch sample set as:

$$g_i^{t,k}=\frac{1}{\beta_i^{t}}\sum_{\xi\in\mathcal{B}_i^{t,k}}\nabla_{w}f\!\left(w_i^{t,k};\xi\right),\qquad w_i^{t,k+1}=w_i^{t,k}-\eta_t\,g_i^{t,k}$$

wherein the gradient $\nabla_{w}f(w_i^{t,k};\xi)$ is the partial derivative of the neural network loss function $f$ with respect to the local model parameters $w_i^{t,k}$ after substituting the sample $\xi$, $\mathcal{B}_i^{t,k}$ is the small-batch sample set, $\beta_i^{t}$ denotes the round-$t$ small-batch sample set size, $k$ denotes the current local iteration round, and $i$ denotes the $i$-th client.
9. The heterogeneous scenario-oriented federated learning training acceleration method according to claim 3, wherein between step S28 and step S29, the method further comprises:
updating the global model parameters according to the accumulated gradients received by the server; and
selecting the target iteration estimation time of the next round according to the computation speed estimation parameter and the communication time estimation parameter of each client.
10. The heterogeneous scenario-oriented federated learning training acceleration method according to claim 9, wherein the formula for updating the global model parameters according to the accumulated gradients received by the server is:

$$w^{t+1} = w^t - \frac{\eta_t}{N} \sum_{i=1}^{N} G_i^t$$

where $w^{t+1}$ denotes the global model parameters of the $(t+1)$-th round, $w^t$ denotes the global model parameters of the $t$-th round, $\eta_t$ denotes the learning rate of the neural network training, $N$ is the total number of participating clients, and $G_i^t$ denotes the accumulated gradient of the $i$-th client;

the formula for selecting the target iteration estimation time of the next round according to the computation speed estimation parameter and the communication time estimation parameter of each client is:

$$\Gamma^{t+1} = \max_{1 \le i \le N} \left( \mu_i^{t+1} \cdot \beta_{\min} + c_i^{t+1} \right)$$

where $\Gamma^{t+1}$ denotes the target iteration estimation time of the $(t+1)$-th round, $\beta_{\min}$ denotes the preset minimum value of the mini-batch sample set size, $\mu_i^{t+1}$ denotes the computation speed estimation parameter of the $i$-th client in the $(t+1)$-th round, and $c_i^{t+1}$ denotes the communication time estimation parameter of the $i$-th client in the $(t+1)$-th round.
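The server side of claim 10 can be sketched in a few lines. The formula images do not survive in this text, so this is a sketch under two assumptions: the global update averages the clients' accumulated gradients (standard federated SGD), and the next target time is chosen so that even the slowest client can process at least `beta_min` samples per iteration; all names are illustrative:

```python
def server_round(w, accumulated_grads, lr, speed_est, comm_est, beta_min):
    """Sketch of claim 10 (assumed form): aggregate accumulated gradients
    into the global model, then pick the next round's target iteration
    time so every client fits at least beta_min samples per iteration.

    accumulated_grads -- list of accumulated gradients, one per client
    speed_est         -- per-client computation time per sample (next round)
    comm_est          -- per-client communication time (next round)
    """
    n = len(accumulated_grads)
    w_next = w - lr * sum(accumulated_grads) / n      # averaged global update
    # The slowest client at the minimum batch size sets the pace.
    gamma_next = max(mu * beta_min + c
                     for mu, c in zip(speed_est, comm_est))
    return w_next, gamma_next
```

Choosing the maximum over clients keeps every client's batch size feasible (at least `beta_min`), while faster clients are then assigned larger batches to fill the same target time.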
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110661958.5A CN113391897B (en) | 2021-06-15 | 2021-06-15 | Heterogeneous scene-oriented federal learning training acceleration method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113391897A true CN113391897A (en) | 2021-09-14 |
CN113391897B CN113391897B (en) | 2023-04-07 |
Family
ID=77621461
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110661958.5A Active CN113391897B (en) | 2021-06-15 | 2021-06-15 | Heterogeneous scene-oriented federal learning training acceleration method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113391897B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111008709A (en) * | 2020-03-10 | 2020-04-14 | 支付宝(杭州)信息技术有限公司 | Federal learning and data risk assessment method, device and system |
US20200175365A1 (en) * | 2018-12-04 | 2020-06-04 | Google Llc | Controlled Adaptive Optimization |
CN111444021A (en) * | 2020-04-02 | 2020-07-24 | 电子科技大学 | Synchronous training method, server and system based on distributed machine learning |
CN111522669A (en) * | 2020-04-29 | 2020-08-11 | 深圳前海微众银行股份有限公司 | Method, device and equipment for optimizing horizontal federated learning system and readable storage medium |
CN111708640A (en) * | 2020-06-23 | 2020-09-25 | 苏州联电能源发展有限公司 | Edge calculation-oriented federal learning method and system |
US20210073677A1 (en) * | 2019-09-06 | 2021-03-11 | Oracle International Corporation | Privacy preserving collaborative learning with domain adaptation |
CN112532451A (en) * | 2020-11-30 | 2021-03-19 | 安徽工业大学 | Layered federal learning method and device based on asynchronous communication, terminal equipment and storage medium |
CN112734000A (en) * | 2020-11-11 | 2021-04-30 | 江西理工大学 | Intrusion detection method, system, equipment and readable storage medium |
Non-Patent Citations (2)
Title |
---|
WANG, Fengwei: "A privacy-preserving and non-interactive federated learning scheme for regression training with gradient descent" * |
LU, Xiaofeng (芦效峰): "An efficient asynchronous federated learning mechanism for edge computing" * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115496204A (en) * | 2022-10-09 | 2022-12-20 | 南京邮电大学 | Evaluation method and device for federal learning in cross-domain heterogeneous scene |
CN115496204B (en) * | 2022-10-09 | 2024-02-02 | 南京邮电大学 | Federal learning-oriented evaluation method and device under cross-domain heterogeneous scene |
Also Published As
Publication number | Publication date |
---|---|
CN113391897B (en) | 2023-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113435604B (en) | Federal learning optimization method and device | |
Liu et al. | FedCPF: An efficient-communication federated learning approach for vehicular edge computing in 6G communication networks | |
CN113010305B (en) | Federal learning system deployed in edge computing network and learning method thereof | |
WO2021017227A1 (en) | Path optimization method and device for unmanned aerial vehicle, and storage medium | |
CN111787509B (en) | Unmanned aerial vehicle task unloading method and system based on reinforcement learning in edge calculation | |
CN110968426B (en) | Edge cloud collaborative k-means clustering model optimization method based on online learning | |
CN111708640A (en) | Edge calculation-oriented federal learning method and system | |
CN113221470A (en) | Federal learning method for power grid edge computing system and related device thereof | |
CN114528304A (en) | Federal learning method, system and storage medium for updating self-adaptive client parameters | |
CN110955463B (en) | Internet of things multi-user computing unloading method supporting edge computing | |
CN113391897B (en) | Heterogeneous scene-oriented federal learning training acceleration method | |
CN114169543A (en) | Federal learning algorithm based on model obsolescence and user participation perception | |
Li et al. | Privacy-preserving communication-efficient federated multi-armed bandits | |
CN116702881A (en) | Multilayer federal learning scheme based on sampling aggregation optimization | |
CN116187429A (en) | End Bian Yun collaborative synchronization federal learning training algorithm based on segmentation learning | |
Li et al. | Model-distributed dnn training for memory-constrained edge computing devices | |
CN110929885A (en) | Smart campus-oriented distributed machine learning model parameter aggregation method | |
Wang et al. | Digital twin-enabled computation offloading in UAV-assisted MEC emergency networks | |
CN117202264A (en) | 5G network slice oriented computing and unloading method in MEC environment | |
Deng et al. | Adaptive Federated Learning With Negative Inner Product Aggregation | |
CN115115064B (en) | Semi-asynchronous federal learning method and system | |
CN115118591A (en) | Cluster federation learning method based on alliance game | |
WO2023175381A1 (en) | Iterative training of collaborative distributed coded artificial intelligence model | |
CN110768841A (en) | Acceleration distributed online optimization method based on condition gradient | |
Yoon et al. | GDFed: Dynamic Federated Learning for Heterogenous Device Using Graph Neural Network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||