CN113206887A - Method for accelerating federated learning for data and device heterogeneity under edge computing - Google Patents

Method for accelerating federated learning for data and device heterogeneity under edge computing

Info

Publication number
CN113206887A
Authority
CN
China
Prior art keywords
model
edge
training
data
terminal equipment
Prior art date
Legal status
Pending
Application number
CN202110502300.XA
Other languages
Chinese (zh)
Inventor
袁景凌
毛慧华
白立华
向尧
刘永坚
Current Assignee
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date
Filing date
Publication date
Application filed by Wuhan University of Technology (WUT)
Priority to CN202110502300.XA
Publication of CN113206887A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061: Partitioning or combining of resources
    • G06F9/5072: Grid computing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G06N20/20: Ensemble learning
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14: Network analysis or design

Abstract

The invention discloses a method for accelerating federated learning under data and device heterogeneity in edge computing. The method selects terminal devices whose data sets have a lower degree of non-independent and identical distribution (non-IID) to participate in the federated learning training, and at the same time trains part of the model with the computing power of the edge server, achieving terminal-edge collaborative computing. Compared with selecting terminal devices at random and letting the terminal devices bear all the training energy consumption and computation, the method effectively improves the efficiency of federated learning, reduces the energy consumption of the terminal devices, and improves the accuracy of the model.

Description

Method for accelerating federated learning for data and device heterogeneity under edge computing
Technical Field
The invention relates to the fields of cloud computing and edge computing, and in particular to a method for accelerating federated learning under data and device heterogeneity in edge computing.
Background Art
A Federated Learning (FL) model is trained under a Mobile Edge Computing (MEC) architecture (as shown in Fig. 1). The terminal devices participating in training are heterogeneous because their hardware differs (different CPUs, memories, network connections, power supplies, and so on). In addition, the user data generated on these terminal devices takes diverse forms, so the collected data sets are unbalanced: they differ in size and in their degree of non-independent and identical distribution (non-IID), which makes the data heterogeneous as well. These heterogeneous characteristics affect the federated learning training process to varying degrees. For example, in a federated learning framework with synchronous rounds, slower devices hold back the overall learning progress: if the battery of some participating terminal devices is low, they may disconnect during training for lack of power and drag down the whole training round; and when computing resources (CPU, GPU, etc.) are insufficient, a device needs longer to train its model, so the whole training process also takes longer. Moreover, training on data sets with a high degree of non-IID-ness biases the model and lowers the accuracy of the final model.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a method for accelerating federated learning under data and device heterogeneity in edge computing. By uploading part of the data set on each terminal device to the edge node server, the terminal device and the edge server train cooperatively, which reduces the energy consumption of the terminal device and alleviates its shortage of computing power.
To achieve the above object, the invention provides a method for accelerating federated learning under data and device heterogeneity in edge computing, characterized in that the method comprises the following steps: selecting terminal devices, randomly sampling the data set on each terminal device and uploading the sample to the edge node server, training the terminal devices and the edge node server cooperatively, and aggregating and updating the model until the model meets the requirement.
Preferably, the step of selecting the terminal device includes:
S11: for each edge node r (r ∈ V_E), n_r terminal devices are selected by random sampling to form the candidate set C_r (r ∈ V_E) of each area, where V_E is the set of edge nodes;
S12: the global model is distributed to each edge node r and then to the candidate terminal devices i (i ∈ C_r);
S13: on the edge server of each edge node r, the model is trained with an artificially set auxiliary data set D_r^aux to obtain auxiliary model parameters w_r^aux; at the same time, each terminal device i ∈ C_r in the edge node samples its local data set by random sampling and trains the model on the sample to obtain test model parameters w_i^test;
S14: the weight difference δ_i^r between the test model parameters w_i^test of terminal device i in each edge node r and the auxiliary model parameters w_r^aux is calculated, and the terminal devices are sorted in ascending order of δ_i^r to obtain the set C'_r;
S15: the first η·n_r terminal devices of C'_r are selected to form the set S_r of devices that finally participate in training, where η is a selection ratio, 0 < η ≤ 1.
Preferably, the weight difference δ_i^r in step S14 is calculated as the norm of the difference between the two parameter vectors: δ_i^r = ‖w_i^test − w_r^aux‖.
Preferably, the specific steps of the cooperative training between the terminal devices and the edge node server are as follows:
s21: for device k in each edge node r (k ∈ S)r) Owning a data set
Figure BDA00030568900300000211
Selecting data set by random sampling method and recording it as
Figure BDA00030568900300000212
Uploading to an edge server; using the remaining data set while uploading data
Figure BDA0003056890030000031
The training of the local model is started,obtaining model parameters
Figure BDA0003056890030000032
Wherein t represents the tth round of training;
s22: the edge server of the edge node r receives all the data sets from the terminal equipment
Figure BDA0003056890030000033
Usage data set
Figure BDA0003056890030000034
Model training is carried out to obtain model parameters
Figure BDA0003056890030000035
Meanwhile, the training of the auxiliary model is continued on the edge server to obtain the parameters of the auxiliary model
Figure BDA0003056890030000036
Preferably, the remaining data set D_k^local of step S21 is calculated as D_k^local = D_k \ D_k^up, where |D_k^up| = μ·|D_k| and μ is the selected scaling factor.
Preferably, the data set in step 22) is
Figure BDA00030568900300000310
The calculation formula of (a) is as follows:
Figure BDA00030568900300000311
Preferably, the specific steps of aggregating and updating the model are as follows:
S31: each terminal device k in edge node r (k ∈ S_r), where S_r is the set of devices finally participating in training, uploads its model parameters w_k(t) to the edge server r;
S32: the edge server of each edge node r receives the model parameters w_k(t) uploaded by all terminal devices; having completed its own training, it holds the model parameters w_r^edge(t) and the auxiliary model parameters w_r^aux(t), and then performs the model aggregation of the region to obtain the region model parameters w_r(t);
S33: each edge node r uploads its region model parameters w_r(t) to the cloud center for aggregation of the global model;
S34: after receiving all the region model parameters, the cloud center performs global model aggregation to obtain the global model parameters w(t);
S35: steps S11 to S34 are repeated until the model converges or a preset accuracy requirement is met.
Preferably, the region model parameters w_r(t) are calculated as

w_r(t) = β·w_r^aux(t) + (1 − β)·[ Σ_{k∈S_r} (|D_k^local| / |D_r|)·w_k(t) + (|D_r^edge| / |D_r|)·w_r^edge(t) ]

where β is an adjustable parameter with 0 ≤ β ≤ 1; D_k^local denotes the remaining data set of terminal device k; D_r^edge denotes the union of the data sets offloaded to the edge server of edge node r; D_k denotes the data set of terminal device k; and D_r denotes all terminal-device data sets participating in training in region r.
Preferably, the global model parameters w(t) are calculated as

w(t) = Σ_{r∈V_E} (|D_r| / |D|)·w_r(t)

where V_E is the set of edge nodes, D_r denotes all terminal-device data sets participating in training in edge node r, and D is the sum of all region data sets.
The invention has the beneficial effects that:
1. The method selects terminal devices whose data sets have a lower degree of non-independent and identical distribution (non-IID) to participate in the federated learning training, and at the same time trains part of the model with the computing power of the edge server, achieving terminal-edge collaborative computing.
2. By uploading part of the data set on each terminal device to the edge node server, the terminal device and the edge server train cooperatively, which reduces the energy consumption of the terminal device and alleviates its shortage of computing power.
3. Compared with selecting terminal devices at random and letting the terminal devices bear all the training energy consumption and computation, the invention effectively improves the efficiency of federated learning, reduces the energy consumption of the terminal devices, and improves the accuracy of the model.
Drawings
Fig. 1 is a schematic diagram of federated learning under the MEC architecture.
Fig. 2 is a schematic diagram of the framework of the present invention.
Fig. 3 is a graph showing the results of experiment 1.
Fig. 4 is a graph showing the results of experiment 2.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments.
As shown in Fig. 2, the method provided by the present invention for accelerating federated learning under data and device heterogeneity in edge computing proceeds as follows: terminal devices are selected; the data set on each terminal device is randomly sampled and the sample is uploaded to the edge node server; the terminal devices and the edge node server train cooperatively; and the model is aggregated and updated until it meets the requirements. The specific steps are:
Step S1: selecting the terminal devices participating in training. The specific steps are as follows:
S11: for each edge node r (r ∈ V_E), where V_E is the set of edge nodes, n_r terminal devices are selected by random sampling to form the candidate set C_r (r ∈ V_E) of each area.
S12: distributing the global model to each edge node r, and then distributing the global model to the terminal equipment i to be selected (i belongs to C)r)。
S13: at each edge node r, the model is trained with an artificially set, standard independent and identically distributed (IID) auxiliary data set D_r^aux to obtain auxiliary model parameters w_r^aux. Meanwhile, each terminal device i ∈ C_r in edge node r samples an appropriate subset of its local data set by simple random sampling and trains the model on it to obtain test model parameters w_i^test.
S14: the weight difference δ_i^r between the test model parameters w_i^test of terminal device i in each edge node r and the auxiliary model parameters w_r^aux is calculated, and the terminal devices are sorted in ascending order of δ_i^r to obtain the set C'_r. The weight difference is calculated as the norm of the difference between the two parameter vectors: δ_i^r = ‖w_i^test − w_r^aux‖.
S15: the first η·n_r terminal devices of C'_r are selected to form the set S_r of devices that finally participate in training, where η is an artificially set selection ratio, 0 < η ≤ 1.
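The selection rule of steps S13 to S15, ranking candidate devices by how far their locally trained weights drift from the weights trained on the IID auxiliary set, can be sketched in PyTorch-style Python. This is a minimal illustration rather than the patent's reference implementation; the L2 norm over all flattened parameters and the helper names are assumptions.

```python
import torch

def weight_difference(test_params, aux_params):
    # delta_i^r = || w_i^test - w_r^aux ||, taken here as the L2 norm
    # over all flattened layers (the exact norm is an assumption).
    diffs = [(test_params[k] - aux_params[k]).flatten() for k in test_params]
    return torch.norm(torch.cat(diffs)).item()

def select_devices(candidate_params, aux_params, eta):
    """candidate_params: dict mapping device id -> state_dict of its test model.
    Returns the first eta fraction of devices in ascending order of weight
    difference, i.e. the devices whose data look closest to IID."""
    ranked = sorted(candidate_params,
                    key=lambda i: weight_difference(candidate_params[i], aux_params))
    return ranked[:max(1, int(eta * len(ranked)))]
```

Devices at the head of the ranking produced weights close to those of the IID-trained auxiliary model, which is precisely the "lower degree of non-IID-ness" criterion described above.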
Step S2: the terminal devices and the edge node server train cooperatively. The specific steps are as follows:
s21: device k in each edge node r (k ∈ S)r) Owning a data set
Figure BDA00030568900300000511
Selecting a certain proportion from them by random sampling method
Figure BDA00030568900300000512
Is recorded as
Figure BDA00030568900300000513
And uploading to the edge server. Uploading dataWhile using the remaining data set
Figure BDA00030568900300000514
Starting local model training to obtain model parameters
Figure BDA00030568900300000515
Where t represents the tth round of training. Data set
Figure BDA00030568900300000516
The calculation formula of (a) is as follows:
Figure BDA00030568900300000517
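The split of step S21 can be sketched as follows, under the assumption that the μ fraction is drawn uniformly at random without replacement (the patent only specifies simple random sampling):

```python
import random

def split_for_offload(samples, mu, seed=None):
    """Split a device's samples into D_k^up (a fraction mu, to be uploaded
    to the edge server) and D_k^local (kept for local training)."""
    rng = random.Random(seed)
    indices = list(range(len(samples)))
    rng.shuffle(indices)
    cut = int(mu * len(samples))
    upload = [samples[i] for i in indices[:cut]]
    local = [samples[i] for i in indices[cut:]]
    return upload, local
```

Because the upload and the local training run concurrently, the communication time of D_k^up overlaps the computation on D_k^local.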
s22: the edge server of the edge node r receives all the data sets from the terminal equipment
Figure BDA00030568900300000518
Model training is performed using the data set to obtain model parameters
Figure BDA00030568900300000519
Meanwhile, the training of the auxiliary model is continued on the edge server r to obtain the parameters of the auxiliary model
Figure BDA00030568900300000520
Data set
Figure BDA0003056890030000061
The calculation formula of (a) is as follows:
Figure BDA0003056890030000062
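The edge-side half of the round can be sketched as below; train_fn stands in for an ordinary training loop (optimizer, epochs) that the patent does not fix, so passing it as a parameter keeps the sketch neutral about those choices.

```python
from itertools import chain

def edge_round(uploaded_sets, edge_model, aux_model, aux_data, train_fn):
    """One edge-side pass of step S22.
    uploaded_sets: the lists D_k^up received from the selected devices.
    train_fn(model, data) -> updated parameters (assumed helper)."""
    pooled = list(chain.from_iterable(uploaded_sets))  # D_r^edge = union of D_k^up
    w_edge = train_fn(edge_model, pooled)              # model parameters w_r^edge(t)
    w_aux = train_fn(aux_model, aux_data)              # auxiliary parameters w_r^aux(t)
    return w_edge, w_aux
```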
Step S3: model aggregation and update. The specific steps are as follows:
S31: each terminal device k in edge node r (k ∈ S_r) uploads its model parameters w_k(t) to the edge server r.
S32: the edge server of each edge node r is responsible for receiving the model parameters uploaded by all the terminal equipment
Figure BDA0003056890030000064
Obtaining model parameters after the edge server completes training
Figure BDA0003056890030000065
And auxiliary model parameters
Figure BDA0003056890030000066
Then, the model aggregation of the region is started to obtain region model parameters wr(t)。
S33: each edge node r uploads its region model parameters w_r(t) to the cloud center for aggregation of the global model.
S34: after receiving all the region model parameters, the cloud center performs global model aggregation to obtain the global model parameters w(t).
S35: steps S11 to S34 are repeated until the model converges or a preset accuracy requirement is met.
In step S32 above, the region model parameters w_r(t) are calculated as

w_r(t) = β·w_r^aux(t) + (1 − β)·[ Σ_{k∈S_r} (|D_k^local| / |D_r|)·w_k(t) + (|D_r^edge| / |D_r|)·w_r^edge(t) ]

where β is an adjustable parameter with 0 ≤ β ≤ 1; D_k^local denotes the remaining data set of terminal device k; D_r^edge denotes the union of the data sets offloaded to the server of edge node r; D_k denotes the data set of terminal device k; and D_r denotes all the data of the terminal devices participating in training in edge node r, calculated as |D_r| = Σ_{k∈S_r} |D_k|.
In step S34, the global model parameters w(t) are calculated as

w(t) = Σ_{r∈V_E} (|D_r| / |D|)·w_r(t)

where V_E is the set of edge nodes, D_r denotes all the data of the terminal devices participating in training in edge node r, and D is the sum of all region data sets, calculated as |D| = Σ_{r∈V_E} |D_r|.
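Both aggregation levels reduce to size-weighted averages of parameter dictionaries. The sketch below follows the reconstruction of w_r(t) given above, so the β mixing rule in aggregate_region is an assumption; aggregate_global is the plain weighted average described for w(t).

```python
import torch

def weighted_average(param_dicts, weights):
    # Element-wise weighted average of model state_dicts.
    total = float(sum(weights))
    avg = {k: torch.zeros_like(v) for k, v in param_dicts[0].items()}
    for params, w in zip(param_dicts, weights):
        for k in avg:
            avg[k] += (w / total) * params[k]
    return avg

def aggregate_region(device_params, device_sizes, w_edge, edge_size, w_aux, beta):
    """Region aggregation of step S32: a size-weighted average of the device
    models w_k(t) and the edge model w_r^edge(t), mixed with the auxiliary
    model w_r^aux(t) through beta (mixing rule assumed, not confirmed)."""
    mixed = weighted_average(device_params + [w_edge], device_sizes + [edge_size])
    return {k: beta * w_aux[k] + (1.0 - beta) * mixed[k] for k in mixed}

def aggregate_global(region_params, region_sizes):
    # w(t) = sum over r in V_E of (|D_r| / |D|) * w_r(t)
    return weighted_average(region_params, region_sizes)
```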
This example tests part of the strategy of the proposed scheme with two experiments. The experimental hardware and software environment comprises: (1) hardware: 64-bit Windows operating system, 32 GB of memory, a 32-core CPU; (2) software: PyTorch, Python development environment. The experiments use the MNIST data set, a handwritten-digit data set with 60,000 training samples and 10,000 test samples.
Experiment 1: federated averaging with non-IID data
To simulate a realistic scenario as far as possible, the experiment sets 50 communication rounds and 100 clients; the training samples are sampled non-uniformly, so the clients differ in both data distribution and data volume. In each round, 10 clients are selected at random to participate in training; the clients are simulated with multiple threads, and each client performs 5 local training rounds per communication round. The model is a simple Convolutional Neural Network (CNN). The server computes a weighted average of the model parameters uploaded by the clients, with weights positively correlated with each client's data volume. The experimental results are shown in Fig. 3. The accuracy of the final model is 65.9%.
Experiment 2: federated learning based on client selection
To simplify the experiment while highlighting the importance of client selection, the following changes are made on top of experiment 1: 20 clients are selected at random as the candidate set (again simulated with multiple threads); 1 edge node is set up to train the auxiliary model, still using multi-thread simulation; and 10 clients are then selected from the candidate set by the proposed policy to participate in training. Client data is not offloaded, and β is set to 0.5. The results are shown in Fig. 4. The accuracy of the final model is 84.6%.
The experiments show that selecting terminal devices whose data sets have a lower degree of non-IID-ness to participate in the federated learning training improves the accuracy of the model.
Finally, it should be noted that the above detailed description only illustrates the technical solution of this patent and does not limit it. Although the patent has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that the technical solution can be modified or replaced by equivalents without departing from its spirit and scope, and such modifications are covered by the claims of this patent.

Claims (9)

1. A method for accelerating federated learning for data and device heterogeneity under edge computing, characterized in that the method comprises the following steps: selecting terminal devices; randomly sampling the data set on each terminal device and uploading the sample to the edge node server; training the terminal devices and the edge node server cooperatively; and aggregating and updating the model until the model meets the requirement.
2. The method for accelerating federated learning for data and device heterogeneity under edge computing as claimed in claim 1, wherein: the step of selecting the terminal device includes:
S11: for each edge node r (r ∈ V_E), n_r terminal devices are selected by random sampling to form the candidate set C_r (r ∈ V_E) of each area, where V_E is the set of edge nodes;
S12: the global model is distributed to each edge node r and then to the candidate terminal devices i (i ∈ C_r);
S13: on the edge server of each edge node r, the model is trained with an artificially set auxiliary data set D_r^aux to obtain auxiliary model parameters w_r^aux; at the same time, each terminal device i ∈ C_r in the edge node samples its local data set by random sampling and trains the model on the sample to obtain test model parameters w_i^test;
S14: the weight difference δ_i^r between the test model parameters w_i^test of terminal device i in each edge node r and the auxiliary model parameters w_r^aux is calculated, and the terminal devices are sorted in ascending order of δ_i^r to obtain the set C'_r;
S15: the first η·n_r terminal devices of C'_r are selected to form the set S_r of devices that finally participate in training, where η is a selection ratio, 0 < η ≤ 1.
3. The method for accelerating federated learning for data and device heterogeneity under edge computing as claimed in claim 2, wherein the weight difference δ_i^r in step S14 is calculated as the norm of the difference between the two parameter vectors: δ_i^r = ‖w_i^test − w_r^aux‖.
4. The method for accelerating federated learning for data and device heterogeneity under edge computing as claimed in claim 1, wherein the specific steps of the cooperative training between the terminal devices and the edge node server are as follows:
s21: for device k in each edge node r (k ∈ S)r) Owning a data set
Figure FDA0003056890020000021
Selecting data set by random sampling method and recording it as
Figure FDA0003056890020000022
Uploading to an edge server; using the remaining data set while uploading data
Figure FDA0003056890020000023
Starting local model training to obtain model parameters
Figure FDA0003056890020000024
Wherein t represents the tth round of training;
s22: the edge server of the edge node r receives all the data sets from the terminal equipment
Figure FDA0003056890020000025
Usage data set
Figure FDA0003056890020000026
Model training is carried out to obtain model parameters
Figure FDA0003056890020000027
Meanwhile, the training of the auxiliary model is continued on the edge server to obtain the parameters of the auxiliary model
Figure FDA0003056890020000028
5. The method for accelerating federated learning for data and device heterogeneity under edge computing as claimed in claim 4, wherein the remaining data set D_k^local in step S21 is calculated as D_k^local = D_k \ D_k^up, where |D_k^up| = μ·|D_k| and μ is the selected scaling factor.
6. The method for accelerating federated learning for data and device heterogeneity under edge computing as claimed in claim 4, wherein the data set D_r^edge in step S22 is calculated as D_r^edge = ∪_{k∈S_r} D_k^up.
7. The method for accelerating federated learning for data and device heterogeneity under edge computing as claimed in claim 1, wherein the specific steps of aggregating and updating the model are as follows:
S31: each terminal device k in edge node r (k ∈ S_r), where S_r is the set of devices finally participating in training, uploads its model parameters w_k(t) to the edge server r;
S32: the edge server of each edge node r receives the model parameters w_k(t) uploaded by all terminal devices; having completed its own training, it holds the model parameters w_r^edge(t) and the auxiliary model parameters w_r^aux(t), and then performs the model aggregation of the region to obtain the region model parameters w_r(t);
S33: each edge node r uploads its region model parameters w_r(t) to the cloud center for aggregation of the global model;
S34: after receiving all the region model parameters, the cloud center performs global model aggregation to obtain the global model parameters w(t);
S35: steps S11 to S34 are repeated until the model converges or a preset accuracy requirement is met.
8. The method for accelerating federated learning for data and device heterogeneity under edge computing as claimed in claim 7, wherein the region model parameters w_r(t) are calculated as

w_r(t) = β·w_r^aux(t) + (1 − β)·[ Σ_{k∈S_r} (|D_k^local| / |D_r|)·w_k(t) + (|D_r^edge| / |D_r|)·w_r^edge(t) ]

where β is an adjustable parameter with 0 ≤ β ≤ 1; D_k^local denotes the remaining data set of terminal device k; D_r^edge denotes the union of the data sets offloaded to the edge server of edge node r; D_k denotes the data set of terminal device k; and D_r denotes all terminal-device data sets participating in training in region r.
9. The method for accelerating federated learning for data and device heterogeneity under edge computing as claimed in claim 7, wherein the global model parameters w(t) are calculated as

w(t) = Σ_{r∈V_E} (|D_r| / |D|)·w_r(t)

where V_E is the set of edge nodes, D_r denotes all terminal-device data sets participating in training in edge node r, and D is the sum of all region data sets.
CN202110502300.XA 2021-05-08 2021-05-08 Method for accelerating federated learning for data and device heterogeneity under edge computing Pending CN113206887A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110502300.XA CN113206887A (en) 2021-05-08 2021-05-08 Method for accelerating federated learning for data and device heterogeneity under edge computing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110502300.XA CN113206887A (en) 2021-05-08 2021-05-08 Method for accelerating federated learning for data and device heterogeneity under edge computing

Publications (1)

Publication Number Publication Date
CN113206887A 2021-08-03

Family

ID=77030842

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110502300.XA Pending CN113206887A (en) Method for accelerating federated learning for data and device heterogeneity under edge computing

Country Status (1)

Country Link
CN (1) CN113206887A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113312180A (en) * 2021-06-07 2021-08-27 北京大学 Resource allocation optimization method and system based on federal learning
CN113657607A (en) * 2021-08-05 2021-11-16 浙江大学 Continuous learning method for federal learning
CN114584406A (en) * 2022-05-09 2022-06-03 湖南红普创新科技发展有限公司 Industrial big data privacy protection system and method for federated learning
CN115329989A (en) * 2022-10-13 2022-11-11 合肥本源物联网科技有限公司 Synchronous federated learning acceleration method based on model segmentation under edge calculation scene
WO2023072049A1 (en) * 2021-10-28 2023-05-04 华为技术有限公司 Federated learning method and related apparatus
CN116166406A (en) * 2023-04-25 2023-05-26 合肥工业大学智能制造技术研究院 Personalized edge unloading scheduling method, model training method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020185973A1 (en) * 2019-03-11 2020-09-17 doc.ai incorporated System and method with federated learning model for medical research applications
CN111708640A (en) * 2020-06-23 2020-09-25 苏州联电能源发展有限公司 Edge calculation-oriented federal learning method and system
CN111866869A (en) * 2020-07-07 2020-10-30 兰州交通大学 Federal learning indoor positioning privacy protection method facing edge calculation
CN112367109A (en) * 2020-09-28 2021-02-12 西北工业大学 Incentive method for digital twin-driven federal learning in air-ground network
US20210089878A1 (en) * 2019-09-20 2021-03-25 International Business Machines Corporation Bayesian nonparametric learning of neural networks
CN112668128A (en) * 2020-12-21 2021-04-16 国网辽宁省电力有限公司物资分公司 Method and device for selecting terminal equipment nodes in federated learning system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020185973A1 (en) * 2019-03-11 2020-09-17 doc.ai incorporated System and method with federated learning model for medical research applications
US20210089878A1 (en) * 2019-09-20 2021-03-25 International Business Machines Corporation Bayesian nonparametric learning of neural networks
CN111708640A (en) * 2020-06-23 2020-09-25 苏州联电能源发展有限公司 Edge calculation-oriented federal learning method and system
CN111866869A (en) * 2020-07-07 2020-10-30 兰州交通大学 Federal learning indoor positioning privacy protection method facing edge calculation
CN112367109A (en) * 2020-09-28 2021-02-12 西北工业大学 Incentive method for digital twin-driven federal learning in air-ground network
CN112668128A (en) * 2020-12-21 2021-04-16 国网辽宁省电力有限公司物资分公司 Method and device for selecting terminal equipment nodes in federated learning system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WENYU ZHANG et al.: "Client Selection for Federated Learning With Non-IID Data in Mobile Edge Computing", IEEE *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113312180A (en) * 2021-06-07 2021-08-27 北京大学 Resource allocation optimization method and system based on federal learning
CN113312180B (en) * 2021-06-07 2022-02-15 北京大学 Resource allocation optimization method and system based on federal learning
CN113657607A (en) * 2021-08-05 2021-11-16 浙江大学 Continuous learning method for federal learning
CN113657607B (en) * 2021-08-05 2024-03-22 浙江大学 Continuous learning method for federal learning
WO2023072049A1 (en) * 2021-10-28 2023-05-04 华为技术有限公司 Federated learning method and related apparatus
CN114584406A (en) * 2022-05-09 2022-06-03 湖南红普创新科技发展有限公司 Industrial big data privacy protection system and method for federated learning
CN114584406B (en) * 2022-05-09 2022-08-12 湖南红普创新科技发展有限公司 Industrial big data privacy protection system and method for federated learning
CN115329989A (en) * 2022-10-13 2022-11-11 合肥本源物联网科技有限公司 Synchronous federated learning acceleration method based on model segmentation under edge calculation scene
CN115329989B (en) * 2022-10-13 2023-02-14 合肥本源物联网科技有限公司 Synchronous federated learning acceleration method based on model segmentation under edge calculation scene
CN116166406A (en) * 2023-04-25 2023-05-26 合肥工业大学智能制造技术研究院 Personalized edge unloading scheduling method, model training method and system

Similar Documents

Publication Publication Date Title
CN113206887A (en) Method for accelerating federated learning for data and device heterogeneity under edge computing
CN111353582B (en) Particle swarm algorithm-based distributed deep learning parameter updating method
CN109948029B (en) Neural network self-adaptive depth Hash image searching method
CN113112027A (en) Federal learning method based on dynamic adjustment model aggregation weight
CN109299781A (en) Distributed deep learning system based on momentum and beta pruning
Li et al. FedSAE: A novel self-adaptive federated learning framework in heterogeneous systems
CN113052334A (en) Method and system for realizing federated learning, terminal equipment and readable storage medium
CN113518007B (en) Multi-internet-of-things equipment heterogeneous model efficient mutual learning method based on federal learning
CN112990478B (en) Federal learning data processing system
CN114584581A (en) Federal learning system and federal learning training method for smart city Internet of things and letter fusion
Chen et al. Deep-broad learning system for traffic flow prediction toward 5G cellular wireless network
CN115587633A (en) Personalized federal learning method based on parameter layering
CN116523079A (en) Reinforced learning-based federal learning optimization method and system
CN113435595A (en) Two-stage optimization method for extreme learning machine network parameters based on natural evolution strategy
CN113672684A (en) Layered user training management system and method for non-independent same-distribution data
CN113435125A (en) Model training acceleration method and system for federal Internet of things system
CN117236421A (en) Large model training method based on federal knowledge distillation
Jin et al. Simulating aggregation algorithms for empirical verification of resilient and adaptive federated learning
CN112101528A (en) Terminal contribution measurement method based on back propagation
CN113743012B (en) Cloud-edge collaborative mode task unloading optimization method under multi-user scene
CN111047040A (en) Web service combination method based on IFPA algorithm
CN115695429A (en) Non-IID scene-oriented federal learning client selection method
CN115659807A (en) Method for predicting talent performance based on Bayesian optimization model fusion algorithm
CN114357865A (en) Hydropower station runoff and associated source load power year scene simulation and prediction method thereof
CN109685242B (en) Photovoltaic ultra-short term combined prediction method based on Adaboost algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210803