CN113206887A - Method for accelerating federated learning for data and device heterogeneity under edge computing
Method for accelerating federated learning for data and device heterogeneity under edge computing
- Publication number
- CN113206887A (application number CN202110502300.XA)
- Authority
- CN
- China
- Prior art keywords
- model
- edge
- training
- data
- terminal equipment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
Abstract
The invention discloses a method for accelerating federated learning under edge computing in the presence of data and device heterogeneity. The method selects terminal devices whose data sets have a lower degree of non-independent and identical distribution (non-IID) to participate in the federated learning training, and at the same time uses the computing power of the edge server to train part of the model, achieving terminal-edge collaborative computing. Compared with selecting terminal devices at random and letting the terminal devices bear all of the training energy consumption and computation, the method effectively improves the efficiency of federated learning, reduces the energy consumption of the terminal devices, and improves the accuracy of the model.
Description
Technical Field
The invention relates to the fields of cloud computing and edge computing, and in particular to a method for accelerating federated learning under edge computing in the presence of data and device heterogeneity.
Background Art
A federated learning (FL) model is trained under an edge computing (MEC) architecture (as shown in fig. 1). The terminal devices participating in the training are heterogeneous because their hardware differs (different CPUs, memories, network connections, power supplies, and the like). In addition, the user data generated by these terminal devices takes diverse forms, so the collected user data sets are imbalanced: they differ in size and in their degree of non-independent and identical distribution (non-IID), which makes the data heterogeneous as well. These heterogeneous characteristics affect the federated learning training process to varying degrees. For example, in a federated learning framework with synchronous rounds, slower training devices constrain overall learning progress: a terminal device with a low battery may disconnect mid-training for lack of power, dragging down the overall training progress, and a device short of computing resources (CPU, GPU, etc.) takes longer to train its model, so the entire training process also takes longer. Moreover, training on data sets with a high degree of non-IID can bias the model training and reduce the accuracy of the final model.
Disclosure of Invention
The invention aims to overcome the shortcomings of the prior art by providing a method for accelerating federated learning under edge computing that addresses data and device heterogeneity. By uploading part of each terminal device's data set to the edge node server, the terminal device and the edge server train cooperatively, which reduces the energy consumption of the terminal device and relieves its shortage of computing power.
To achieve the above object, the invention provides a method for accelerating federated learning for data and device heterogeneity under edge computing, characterized in that the method comprises the following steps: selecting terminal devices; randomly sampling the data sets on the terminal devices and uploading the samples to the edge node server; training the terminal devices and the edge node server cooperatively; and aggregating and updating the model until it meets the requirements.
Preferably, the step of selecting the terminal devices comprises:
S11: for each edge node r (r ∈ V_E), where V_E is the set of edge nodes, selecting n_r terminal devices by random sampling to form the candidate set C_r of each area;
S12: distributing the global model to each edge node r, which then distributes it to the candidate terminal devices i (i ∈ C_r);
S13: on the edge server of each edge node r, training the model with an artificially constructed auxiliary data set to obtain the auxiliary model parameters; at the same time, each terminal device i ∈ C_r in the edge node samples its local data set by random sampling and trains the model to obtain its test model parameters;
S14: for each edge node r, computing the weight difference between the test model parameters of each terminal device i and the auxiliary model parameters, and sorting the terminal devices by this difference in ascending order to obtain a ranked candidate set;
S15: selecting the first η·n_r terminal devices of the ranked set to form the set S_r of devices that finally participate in the training, where η is the selection ratio, 0 < η ≤ 1.
Preferably, the specific steps of the collaborative training of the terminal devices and the edge node server are:
S21: each device k ∈ S_r in edge node r owns a data set; it selects a random sample of this data set and uploads it to the edge server; while the data is uploading, it starts local model training on the remaining data and obtains its model parameters, where t denotes the t-th round of training;
S22: the edge server of edge node r receives the data sets uploaded by all the terminal devices, merges them, and trains the model on the merged set to obtain the edge model parameters; at the same time, the training of the auxiliary model continues on the edge server to obtain the auxiliary model parameters.
Preferably, the remaining data set of step S21 is the device's full data set minus the sample uploaded to the edge server, so its size equals the size of the full data set minus the size of the uploaded sample.
Preferably, the specific steps of aggregating and updating the model are:
S31: each terminal device k ∈ S_r in edge node r, where S_r is the set of devices finally participating in the training, uploads its model parameters to the edge server r;
S32: the edge server of each edge node r receives the model parameters uploaded by all the terminal devices; after finishing its own training, the edge server holds its edge model parameters and the auxiliary model parameters, and then performs the model aggregation of the region to obtain the region model parameters w_r(t);
S33: each edge node r uploads the region model parameters w_r(t) to the cloud center for aggregation of the global model;
S34: after receiving all region model parameters, the cloud center performs global model aggregation to obtain the global model parameters w(t);
S35: the above steps are repeated until the model converges or the preset accuracy requirement is met.
Preferably, the region model parameters w_r(t) are calculated from: β, an adjustable parameter with 0 ≤ β ≤ 1; the remaining data set of each terminal device k; the sum of the data sets offloaded to the edge server of edge node r; the data set of terminal device k in edge node r; and D_r, the sum of all terminal-device data sets participating in the training in region r.
Preferably, the global model parameters w(t) are calculated from: V_E, the set of edge nodes; D_r, the sum of all terminal-device data sets participating in the training in edge node r; and D, the sum of all region data sets.
The invention has the following beneficial effects:
1. The method selects the terminal devices whose data sets have a lower degree of non-independent and identical distribution (non-IID) to participate in the federated learning training, and at the same time uses the computing power of the edge server to train part of the model, achieving terminal-edge collaborative computing.
2. By uploading part of each terminal device's data set to the edge node server, the terminal devices and the edge server train cooperatively, which reduces the energy consumption of the terminal devices and relieves their shortage of computing power.
3. Compared with selecting terminal devices at random and letting the terminal devices bear all of the training energy consumption and computation, the method effectively improves the efficiency of federated learning, reduces the energy consumption of the terminal devices, and improves the accuracy of the model.
Drawings
Fig. 1 is a schematic diagram of federal learning under MEC architecture.
Fig. 2 is a schematic diagram of the framework of the present invention.
FIG. 3 is a graph showing the results of experiment 1.
FIG. 4 is a graph showing the results of experiment 2.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments.
As shown in fig. 2, the method provided by the present invention for accelerating federated learning under data and device heterogeneity in edge computing proceeds as follows: terminal devices are selected; the data sets on the terminal devices are randomly sampled and the samples are uploaded to the edge node server; the terminal devices and the edge node server train cooperatively; and the model is aggregated and updated until it meets the requirements. The specific steps are as follows:
Step S1, selecting the terminal devices that participate in the training; the specific steps are:
S11: for each edge node r (r ∈ V_E), where V_E is the set of edge nodes, select n_r terminal devices by random sampling to form the candidate set C_r of each area.
S12: distribute the global model to each edge node r, which then distributes it to the candidate terminal devices i (i ∈ C_r).
S13: at each edge node r, train the model with a manually constructed auxiliary data set that is standard independent and identically distributed (IID), obtaining the auxiliary model parameters; meanwhile, each terminal device i ∈ C_r in edge node r samples a suitable subset of its local data by simple random sampling and trains the model to obtain its test model parameters.
S14: for each edge node r, compute the weight difference between the test model parameters of each terminal device i and the auxiliary model parameters, and sort the terminal devices by this difference in ascending order to obtain a ranked candidate set.
S15: select the first η·n_r terminal devices of the ranked set to form the set S_r of devices that finally participate in the training, where η is a manually set selection ratio with 0 < η ≤ 1.
Step S2, the terminal devices and the edge node server train cooperatively; the specific steps are:
S21: each device k ∈ S_r in edge node r owns a data set; it selects a certain proportion of this data set by random sampling and uploads the sample to the edge server. While the data is uploading, it starts local model training on the remaining data and obtains its model parameters, where t denotes the t-th round of training. The remaining data set is the full data set minus the uploaded sample.
S22: the edge server of edge node r receives the data sets uploaded by all the terminal devices, merges them, and trains the model on the merged set to obtain the edge model parameters; meanwhile, the training of the auxiliary model continues on the edge server r to obtain the auxiliary model parameters. The merged data set is the union of the samples uploaded by the participating devices.
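The random split in step S21 can be sketched as follows. The `offload_ratio` knob is hypothetical; the text leaves the sampling proportion to the implementer.

```python
import random

def split_for_offload(local_data, offload_ratio=0.3, seed=0):
    """Step S21 sketch: randomly sample a fraction of the local data set to
    offload to the edge server; the rest stays on the device for local
    training. offload_ratio and seed are illustrative assumptions.
    """
    rng = random.Random(seed)
    data = list(local_data)
    rng.shuffle(data)                          # random sampling without replacement
    k = int(offload_ratio * len(data))
    offloaded, remaining = data[:k], data[k:]  # |remaining| = |D_k| - |offloaded|
    return offloaded, remaining
```

Because the device trains on `remaining` while `offloaded` is still in transit, the upload and the local training of step S21 can overlap in time.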
Step S3, model aggregation and update; the specific steps are:
S31: each terminal device k ∈ S_r in edge node r uploads its model parameters to the edge server r.
S32: the edge server of each edge node r receives the model parameters uploaded by all the terminal devices; after finishing its own training, the edge server holds its edge model parameters and the auxiliary model parameters, and then performs the model aggregation of the region to obtain the region model parameters w_r(t).
S33: each edge node r uploads the region model parameters w_r(t) to the cloud center for aggregation of the global model.
S34: after receiving all region model parameters, the cloud center performs global model aggregation to obtain the global model parameters w(t).
S35: steps S1 to S34 are repeated until the model converges or the preset accuracy requirement is met.
In the above step S32, the region model parameters w_r(t) are computed from the uploaded device model parameters, the edge model parameters, and the auxiliary model parameters, where β is an adjustable parameter with 0 ≤ β ≤ 1 and D_r denotes the sum of all data sets of the terminal devices participating in the training in edge node r.
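Only the term definitions of this aggregation survive in the text above. A reconstruction consistent with those definitions — a data-size-weighted average of the device models and the edge model, mixed with the auxiliary model through β — might read as follows; the exact form, and every symbol name here, is an assumption, not the patent's verbatim formula:

```latex
w_r(t) \;=\; \beta \left[\, \sum_{k \in S_r} \frac{|\bar{D}^{r}_{k}|}{D_r}\, w^{r}_{k}(t)
\;+\; \frac{|\tilde{D}_r|}{D_r}\, w^{\mathrm{edge}}_{r}(t) \right]
\;+\; (1-\beta)\, w^{\mathrm{aux}}_{r}(t),
\qquad
D_r \;=\; \sum_{k \in S_r} |D^{r}_{k}|
```

Here $\bar{D}^{r}_{k}$ stands for device $k$'s remaining data set, $\tilde{D}_r$ for the union of the data offloaded to the edge server, $w^{\mathrm{edge}}_{r}(t)$ for the edge-trained model, and $w^{\mathrm{aux}}_{r}(t)$ for the auxiliary model.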
In step S34, the global model parameters w(t) are the data-size-weighted average of the region model parameters, where V_E is the set of edge nodes, D_r denotes the sum of all data sets of the terminal devices participating in the training in edge node r, and D is the sum of all region data sets, i.e. the sum of D_r over all r ∈ V_E.
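The two-level aggregation of steps S32 and S34 can be sketched numerically. The region-level mixing with β follows the hedged reading above (a plausible interpretation, not the patent's verbatim formula); the global step is a plain data-size-weighted average.

```python
import numpy as np

def aggregate_region(device_params, device_sizes, edge_params, edge_size,
                     aux_params, beta=0.5):
    """Region aggregation (step S32), sketched under assumptions: the
    devices' remaining-data models and the edge model trained on offloaded
    data are averaged by data size, then mixed with the auxiliary model via
    the adjustable parameter beta (0 <= beta <= 1).
    """
    d_r = sum(device_sizes) + edge_size                  # D_r for region r
    mixed = sum((n / d_r) * w for w, n in zip(device_params, device_sizes))
    mixed = mixed + (edge_size / d_r) * edge_params
    return beta * mixed + (1.0 - beta) * aux_params

def aggregate_global(region_params, region_sizes):
    """Cloud aggregation (step S34): data-size-weighted average w(t)."""
    d = sum(region_sizes)                                # D = sum of all D_r
    return sum((n / d) * w for w, n in zip(region_params, region_sizes))
```

With β = 1 the auxiliary model drops out and the region step reduces to ordinary federated averaging over the region's data.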
this example tests part of the strategy of the proposed protocol by two experiments. The experimental software and hardware environment comprises: (1) hardware: an operating system Windows 64 bit, a memory 32GB, a CPU32 core; (2) software: pytrch, Python development environment. Experimental data the MNIST dataset, which is a handwriting dataset, was used for 6 ten thousand training samples and 1 ten thousand test samples.
Experiment 1: federated averaging with non-IID data
To approximate a realistic scenario, the experiment set the number of communication rounds to 50 and the number of clients to 100; training samples were drawn non-uniformly, so the clients differed in both data distribution and data volume. In each round, 10 clients were chosen at random to participate; the clients were simulated with multiple threads, and each client ran 5 local training epochs per round. The model was a simple convolutional neural network (CNN). The server computed a weighted average of the model parameters uploaded by the clients, with weights positively correlated with each client's data volume. The results over the training period are shown in fig. 3. The accuracy of the final model was 65.9%.
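The non-uniform sampling described above can be simulated in several ways; a Dirichlet-based label split is a common choice and is used here purely as an illustrative assumption — the text does not name the sampling scheme actually used.

```python
import numpy as np

def partition_non_iid(labels, n_clients=100, alpha=0.5, seed=0):
    """Split sample indices across clients so that clients differ in both
    label distribution and data volume. alpha controls skew: smaller alpha
    means more non-IID. This Dirichlet scheme is an assumption, not the
    patent's method.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        shares = rng.dirichlet(alpha * np.ones(n_clients))  # per-client share of class c
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for cid, part in enumerate(np.split(idx, cuts)):
            client_idx[cid].extend(part.tolist())
    return client_idx
```

Every sample is assigned to exactly one client, so the partition can feed a weighted-averaging server loop directly.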
Experiment 2: federated learning with client selection
To simplify the experiment while highlighting the importance of client selection, the following changes were made on top of experiment 1: 20 clients were randomly chosen as the candidate set (simulated with multiple threads); one edge node was set up to train the auxiliary model, also simulated with multiple threads; and 10 clients were then selected from the candidate set by the proposed policy to participate in the training. Client data was not offloaded, and β was set to 0.5. The results are shown in fig. 4. The accuracy of the final model was 84.6%.
The experiments show that selecting terminal devices whose data sets have a lower degree of non-independent and identical distribution to participate in the federated learning training improves the accuracy of the model.
Finally, it should be noted that the above detailed description only illustrates the technical solution of this patent and does not limit it. Although the patent has been described in detail with reference to preferred embodiments, those skilled in the art should understand that the technical solution may be modified or replaced by equivalents without departing from its spirit and scope, and such modifications and replacements are covered by the claims of this patent.
Claims (9)
1. A method for accelerating federated learning for data and device heterogeneity under edge computing, characterized in that the method comprises the following steps: selecting terminal devices; randomly sampling the data sets on the terminal devices and uploading the samples to the edge node server; training the terminal devices and the edge node server cooperatively; and aggregating and updating the model until it meets the requirements.
2. The method for accelerating federated learning for data and device heterogeneity under edge computing according to claim 1, characterized in that the step of selecting the terminal devices comprises:
S11: for each edge node r (r ∈ V_E), where V_E is the set of edge nodes, selecting n_r terminal devices by random sampling to form the candidate set C_r of each area;
S12: distributing the global model to each edge node r, which then distributes it to the candidate terminal devices i (i ∈ C_r);
S13: on the edge server of each edge node r, training the model with an artificially constructed auxiliary data set to obtain the auxiliary model parameters; at the same time, each terminal device i ∈ C_r in the edge node samples its local data set by random sampling and trains the model to obtain its test model parameters;
S14: for each edge node r, computing the weight difference between the test model parameters of each terminal device i and the auxiliary model parameters, and sorting the terminal devices by this difference in ascending order to obtain a ranked candidate set
4. The method for accelerating federated learning for data and device heterogeneity under edge computing according to claim 1, characterized in that the specific steps of the collaborative training of the terminal devices and the edge node server are:
S21: each device k ∈ S_r in edge node r owns a data set; it selects a random sample of this data set and uploads it to the edge server; while the data is uploading, it starts local model training on the remaining data and obtains its model parameters, where t denotes the t-th round of training;
7. The method for accelerating federated learning for data and device heterogeneity under edge computing according to claim 1, characterized in that the specific steps of aggregating and updating the model are:
S31: each terminal device k ∈ S_r in edge node r, where S_r is the set of devices finally participating in the training, uploads its model parameters to the edge server r;
S32: the edge server of each edge node r receives the model parameters uploaded by all the terminal devices; after finishing its own training, the edge server holds its edge model parameters and the auxiliary model parameters, and then performs the model aggregation of the region to obtain the region model parameters w_r(t);
S33: each edge node r uploads the region model parameters w_r(t) to the cloud center for aggregation of the global model;
S34: after receiving all region model parameters, the cloud center performs global model aggregation to obtain the global model parameters w(t);
S35: steps S1 to S34 are repeated until the model converges or the preset accuracy requirement is met.
8. The method for accelerating federated learning for data and device heterogeneity under edge computing according to claim 7, characterized in that the region model parameters w_r(t) are calculated from: β, an adjustable parameter with 0 ≤ β ≤ 1; the remaining data set of each terminal device k; the sum of the data sets offloaded to the edge server of edge node r; the data set of terminal device k in edge node r; and D_r, the sum of all terminal-device data sets participating in the training in region r.
9. The method for accelerating federated learning for data and device heterogeneity under edge computing according to claim 7, characterized in that the global model parameters w(t) are calculated from: V_E, the set of edge nodes; D_r, the sum of all terminal-device data sets participating in the training in edge node r; and D, the sum of all region data sets.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110502300.XA CN113206887A (en) | 2021-05-08 | 2021-05-08 | Method for accelerating federal learning aiming at data and equipment isomerism under edge calculation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113206887A true CN113206887A (en) | 2021-08-03 |
Family
ID=77030842
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110502300.XA Pending CN113206887A (en) | 2021-05-08 | 2021-05-08 | Method for accelerating federal learning aiming at data and equipment isomerism under edge calculation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113206887A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020185973A1 (en) * | 2019-03-11 | 2020-09-17 | doc.ai incorporated | System and method with federated learning model for medical research applications |
CN111708640A (en) * | 2020-06-23 | 2020-09-25 | 苏州联电能源发展有限公司 | Edge calculation-oriented federal learning method and system |
CN111866869A (en) * | 2020-07-07 | 2020-10-30 | 兰州交通大学 | Federal learning indoor positioning privacy protection method facing edge calculation |
CN112367109A (en) * | 2020-09-28 | 2021-02-12 | 西北工业大学 | Incentive method for digital twin-driven federal learning in air-ground network |
US20210089878A1 (en) * | 2019-09-20 | 2021-03-25 | International Business Machines Corporation | Bayesian nonparametric learning of neural networks |
CN112668128A (en) * | 2020-12-21 | 2021-04-16 | 国网辽宁省电力有限公司物资分公司 | Method and device for selecting terminal equipment nodes in federated learning system |
Non-Patent Citations (1)
Title |
---|
WENYU ZHANG等: "Client Selection for Federated Learning With Non-IID Data in Mobile Edge Computing", 《IEEE》 * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113312180A (en) * | 2021-06-07 | 2021-08-27 | 北京大学 | Resource allocation optimization method and system based on federal learning |
CN113312180B (en) * | 2021-06-07 | 2022-02-15 | 北京大学 | Resource allocation optimization method and system based on federal learning |
CN113657607A (en) * | 2021-08-05 | 2021-11-16 | 浙江大学 | Continuous learning method for federal learning |
CN113657607B (en) * | 2021-08-05 | 2024-03-22 | 浙江大学 | Continuous learning method for federal learning |
WO2023072049A1 (en) * | 2021-10-28 | 2023-05-04 | 华为技术有限公司 | Federated learning method and related apparatus |
CN114584406A (en) * | 2022-05-09 | 2022-06-03 | 湖南红普创新科技发展有限公司 | Industrial big data privacy protection system and method for federated learning |
CN114584406B (en) * | 2022-05-09 | 2022-08-12 | 湖南红普创新科技发展有限公司 | Industrial big data privacy protection system and method for federated learning |
CN115329989A (en) * | 2022-10-13 | 2022-11-11 | 合肥本源物联网科技有限公司 | Synchronous federated learning acceleration method based on model segmentation under edge calculation scene |
CN115329989B (en) * | 2022-10-13 | 2023-02-14 | 合肥本源物联网科技有限公司 | Synchronous federated learning acceleration method based on model segmentation under edge calculation scene |
CN116166406A (en) * | 2023-04-25 | 2023-05-26 | 合肥工业大学智能制造技术研究院 | Personalized edge unloading scheduling method, model training method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210803 |