CN112235062A - Federated learning method and system for resisting communication noise - Google Patents

Federated learning method and system for resisting communication noise

Info

Publication number
CN112235062A
CN112235062A (application CN202011078479.2A)
Authority
CN
China
Prior art keywords
wireless channel
model
training model
node
loss function
Prior art date
2020-10-10
Legal status
Pending
Application number
CN202011078479.2A
Other languages
Chinese (zh)
Inventor
昂凡 (ANG Fan)
陈力 (CHEN Li)
陈晓辉 (CHEN Xiaohui)
王卫东 (WANG Weidong)
Current Assignee
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date: 2020-10-10
Filing date: 2020-10-10
Publication date: 2021-01-15
Application filed by University of Science and Technology of China (USTC)
Priority to CN202011078479.2A
Publication of CN112235062A


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B17/00 Monitoring; Testing
    • H04B17/30 Monitoring; Testing of propagation channels
    • H04B17/391 Modelling the propagation channel
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • H04B1/00 Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B1/06 Receivers
    • H04B1/10 Means associated with receiver for limiting or suppressing noise or interference
    • H04B1/1027 Means associated with receiver for limiting or suppressing noise or interference assessing signal quality or detecting noise/interference for the received signal

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Electromagnetism (AREA)
  • Cable Transmission Systems, Equalization Of Radio And Reduction Of Echo (AREA)

Abstract

The invention discloses a federated learning method and system for resisting communication noise. The method comprises the following steps: each computing node transmits a corresponding local training model through a wireless channel; the central node performs a weighted average operation on the local training models transmitted through the wireless channel to obtain a global training model, and broadcasts the global training model to each computing node through the wireless channel; and each computing node performs gradient-descent computation based on the received model and the loss function until convergence. By expressing the influence of noise on the model parameters through the design of the loss function, performing local training with a gradient-descent algorithm, and completing the weighted average at the central node to realize federated learning, the method solves for the optimal model, improves the accuracy of model estimation, and reduces the loss-function value of model training.

Description

Federated learning method and system for resisting communication noise
Technical Field
The invention relates to the technical field of distributed federated learning, and in particular to a federated learning method and system for resisting communication noise.
Background
Federated learning is a network-model training technique used in artificial intelligence. As the amount of data collected by terminal devices grows day by day, the traditional approach of transmitting all data to a central server and learning there imposes a huge burden on the network load and delays individual data computation. In addition, considering individual data privacy, delegating the central server's learning process to the individual devices has become an important topic for future research. Federated learning therefore emerged: it performs efficient machine learning among multiple computing nodes on the premise of guaranteeing information security during data exchange and protecting terminal-data and personal-data privacy. The main idea of federated learning is to iterate between two steps, local training on the edge computing devices and weighted averaging of the local models at the central server, so as to obtain the optimal global training model. Its main advantage is that the training process is completed locally without uploading raw data, which reduces the network transmission load and guarantees the privacy of individual data. Federated learning is expected to become the basis of next-generation cooperative artificial-intelligence algorithms and cooperative networks.
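To make this two-step iteration concrete, the following minimal Python sketch shows one noise-free federated-averaging round. The helper local_train and the (data, D_j) node tuples are illustrative assumptions, not names from the patent.

```python
import numpy as np

def fedavg_round(global_model, nodes, local_train):
    """One federated-averaging round: each node trains locally, then the
    central server takes a weighted average of the local models.

    nodes: list of (local_data, D_j) tuples, where D_j is the sample size.
    local_train(w, data) -> locally updated model w_j (assumed helper).
    """
    D = sum(D_j for _, D_j in nodes)  # total data size across all nodes
    local_models = [local_train(global_model.copy(), data) for data, _ in nodes]
    # Weighted average with weights D_j / D, as in standard FedAvg.
    return sum((D_j / D) * w_j
               for w_j, (_, D_j) in zip(local_models, nodes))
```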
Communication noise is interference to signals in communication. It mainly arises from imperfect channel estimation, feedback quantization errors, signal acquisition delays, and the like. Noise is inevitable in the wireless communication process, and how to reduce its interference with the system is an important issue in wireless communication research.
At present, with the rise of 5G and the rapid development of the Internet of Things, federated learning is widely applied thanks to its low data-transmission volume and good privacy protection. However, the communication noise introduced during the iterative updating process increases the number of model iterations and reduces the accuracy of model estimation. It is therefore important to address the influence of noise.
Disclosure of Invention
In view of this, the invention provides a federated learning method for resisting communication noise, which expresses the influence of noise on the model parameters through the design of the loss function, performs local training with a gradient-descent algorithm, and completes the weighted average at the central node to realize federated learning, thereby solving for the optimal model, improving the accuracy of model estimation, and reducing the loss-function value of model training.
The invention provides a federated learning method for resisting communication noise, which comprises the following steps:
each computing node transmits a corresponding local training model through a wireless channel;
the central node performs weighted average operation on each local training model transmitted through the wireless channel to obtain a global training model, and broadcasts the global training model to each computing node through the wireless channel;
and each computing node performs gradient descent computation based on the received model and the loss function until convergence.
Preferably, the transmitting, by each computing node, of the corresponding local training model through the wireless channel includes:
N computing nodes respectively transmitting local training models w_1, w_2, ..., w_N through wireless channels.
Preferably, the central node performs weighted average operation on each local training model transmitted through the wireless channel to obtain a global training model, including:
for the t-th transmission process, the central node obtains the global training model based on the formula

w^t = Σ_{j=1}^{N} (D_j / D) · w_j^t + Δw^t,

wherein w_j^t is the local training model of the j-th node, D_j is the training-data sample size of the j-th node, D is the total data size, and Δw^t is the noise caused by the wireless channel transmission, which satisfies ‖Δw^t‖² ≤ σ², σ² being a constant.
Preferably, for the t-th transmission process, the model received by the j-th node is:

w̃_j^t = w^t + Δw_j^t,

wherein Δw_j^t is the noise caused by the wireless channel transmission, which satisfies ‖Δw_j^t‖² ≤ σ_j², σ_j² being a constant.
Preferably, the loss function is:

[loss-function formula; rendered only as an image in the published text]

where ρ_t ∈ (0,1] and λ are set parameters, F_j(·) is a function related to the samples in computing node j, δ_j^t is any value satisfying ‖δ_j^t‖² ≤ σ_j², and g_j^t is a gradient accumulation function whose expression is likewise rendered only as an image in the published text.
A federated learning system for resisting communication noise, comprising: a plurality of computing nodes and a central node; wherein:
the computing node is used for transmitting the corresponding local training model through a wireless channel;
the central node is used for carrying out weighted average operation on each local training model transmitted through the wireless channel to obtain a global training model, and broadcasting the global training model to each computing node through the wireless channel;
and the computing node is also used for performing gradient descent computation based on the received model and the loss function until convergence.
Preferably, the N computing nodes are specifically configured to respectively transmit local training models w_1, w_2, ..., w_N through wireless channels.
Preferably, the central node is specifically configured to:
for the t-th transmission process, obtain the global training model based on the formula

w^t = Σ_{j=1}^{N} (D_j / D) · w_j^t + Δw^t,

wherein w_j^t is the local training model of the j-th node, D_j is the training-data sample size of the j-th node, D is the total data size, and Δw^t is the noise caused by the wireless channel transmission, which satisfies ‖Δw^t‖² ≤ σ², σ² being a constant.
Preferably, for the t-th transmission process, the model received by the j-th node is:

w̃_j^t = w^t + Δw_j^t,

wherein Δw_j^t is the noise caused by the wireless channel transmission, which satisfies ‖Δw_j^t‖² ≤ σ_j², σ_j² being a constant.
Preferably, the loss function is:

[loss-function formula; rendered only as an image in the published text]

where ρ_t ∈ (0,1] and λ are set parameters, F_j(·) is a function related to the samples in computing node j, δ_j^t is any value satisfying ‖δ_j^t‖² ≤ σ_j², and g_j^t is a gradient accumulation function whose expression is likewise rendered only as an image in the published text.
In summary, the invention discloses a federated learning method for resisting communication noise: each computing node transmits a corresponding local training model through a wireless channel; the central node performs a weighted average operation on the received local training models to obtain a global training model and broadcasts it to each computing node through the wireless channel; and each computing node performs gradient-descent computation based on the received model and the loss function until convergence. By designing the loss function to express the influence of noise on the model parameters, performing local training with a gradient-descent algorithm, and completing the weighted average at the central node, the method realizes federated learning that solves for the optimal model, improves the accuracy of model estimation, and reduces the loss-function value of model training.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flow chart of the federated learning method for resisting communication noise disclosed in the present invention;
FIG. 2 is a communication flow diagram between the computing nodes and the central node disclosed in the present invention;
FIG. 3 is a comparison chart of a federated learning performance test disclosed in the present invention;
FIG. 4 is a comparison chart of another federated learning performance test disclosed in the present invention;
FIG. 5 is a schematic structural diagram of the federated learning system for resisting communication noise disclosed in the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
As shown in FIG. 1 and FIG. 2, an embodiment of the federated learning method for resisting communication noise disclosed in the present invention may include the following steps:
s101, each computing node transmits a corresponding local training model through a wireless channel;
in the initial state, at the computing nodes, the training data sample size of each computing node is D1,D2,...,DNThe local training models are w1,w2,...,wNAnd N is the number of the calculation nodes. The total dataset size is D. N computing nodes respectively transmit corresponding local training models w through wireless channels1,w2,...,wN
S102, the central node performs weighted average operation on each local training model transmitted through the wireless channel to obtain a global training model, and broadcasts the global training model to each computing node through the wireless channel;
For the t-th transmission process, at the central node, the global training model after the wireless channel transmission and the weighted average operation is:

w^t = Σ_{j=1}^{N} (D_j / D) · w_j^t + Δw^t,

where Δw^t is the noise caused by the wireless channel transmission, which satisfies ‖Δw^t‖² ≤ σ², σ² being a constant.
The central node then broadcasts the computed global training model to the various compute nodes over a wireless channel.
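Under the reconstruction above, step S102 can be sketched as follows. The uniform-in-ball noise model is an assumption: the patent constrains only the noise norm (‖Δw^t‖² ≤ σ²), so any draw respecting that bound would do.

```python
import numpy as np

def bounded_noise(dim, sigma2, rng):
    """Draw a random vector whose squared norm is at most sigma2: a random
    direction with a radius sampled inside the ball of radius sqrt(sigma2)."""
    direction = rng.standard_normal(dim)
    direction /= np.linalg.norm(direction)
    radius = np.sqrt(sigma2) * rng.uniform() ** (1.0 / dim)
    return radius * direction

def aggregate_with_noise(local_models, sample_sizes, sigma2, rng):
    """Step S102: weighted average of the uploaded local models, perturbed by
    the uplink channel noise Δw^t with ||Δw^t||^2 <= σ^2."""
    D = float(sum(sample_sizes))
    w = sum((D_j / D) * w_j for w_j, D_j in zip(local_models, sample_sizes))
    return w + bounded_noise(w.shape[0], sigma2, rng)
```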
S103, each computing node performs gradient descent computation based on the received model and the loss function until convergence.
Each computing node performs gradient-descent computation using the received model and the loss function. For the t-th transmission process, the model received by the j-th computing node is:

w̃_j^t = w^t + Δw_j^t,

where Δw_j^t is the noise caused by the wireless channel transmission, which satisfies ‖Δw_j^t‖² ≤ σ_j², σ_j² being a constant.
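The downlink can be modeled the same way. Reusing bounded_noise from the previous sketch, the model node j actually receives is the broadcast global model plus a per-node bounded perturbation:

```python
def receive_model(global_model, sigma_j2, rng):
    # w̃_j^t = w^t + Δw_j^t, with ||Δw_j^t||^2 <= σ_j^2 (downlink noise).
    return global_model + bounded_noise(global_model.shape[0], sigma_j2, rng)
```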
The loss function of the node designed in the embodiment of the invention is:

[loss-function formula; rendered only as an image in the published text]

where ρ_t ∈ (0,1] and λ are set parameters; F_j(·) is a function related to the samples in computing node j, whose form can be set arbitrarily by the user without departing from the scope of the design of the embodiments of the present invention; δ_j^t is any value satisfying ‖δ_j^t‖² ≤ σ_j²; and g_j^t is the gradient accumulation function designed for the present invention, whose expression is likewise rendered only as an image in the published text.
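The closed form of this loss survives only as an image, so it cannot be reproduced here. Purely as an illustration of the ingredients the text names (a sample loss F_j, a worst-case perturbation δ_j^t with ‖δ_j^t‖² ≤ σ_j², a mixing weight ρ_t, a parameter λ, and a gradient-accumulation term g_j^t), the sketch below assembles one plausible surrogate objective; it is not the patented formula.

```python
import numpy as np

def worst_case_delta(grad_at_w, sigma_j2):
    """For a locally linearized F_j, the δ maximizing F_j(w + δ) subject to
    ||δ||^2 <= σ_j^2 is the gradient direction scaled to norm sqrt(σ_j^2)."""
    norm = np.linalg.norm(grad_at_w)
    if norm == 0.0:
        return np.zeros_like(grad_at_w)
    return np.sqrt(sigma_j2) * grad_at_w / norm

def surrogate_loss(w, F_j, grad_F_j, g_accum, rho_t, lam, sigma_j2):
    """Illustrative surrogate only (NOT the patented closed form):
    ρ_t * F_j(w + δ*) + λ * <g_accum, w>, where δ* is the worst-case bounded
    perturbation and g_accum stands in for the accumulation function g_j^t."""
    delta = worst_case_delta(grad_F_j(w), sigma_j2)
    return rho_t * F_j(w + delta) + lam * float(np.dot(g_accum, w))
```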
Combining the received model and the loss function, the gradient-descent operation is:

w_j^{t+1} = w̃_j^t − γ_{t+1} · d_j^{t+1},

where γ_{t+1} is the step-size parameter set for this iteration and d_j^{t+1} denotes the gradient designed in the embodiment of the invention, whose closed form is rendered only as an image in the published text.
The above process is repeated until convergence.
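Tying the sketches together, the repeat-until-convergence loop might look as follows. The stopping rule on the change of the global model and the local_step signature are assumptions, since the text does not specify them; receive_model and aggregate_with_noise come from the earlier sketches.

```python
import numpy as np

def federated_training(init_model, nodes, local_step, sigma2, sigma_j2s,
                       tol=1e-4, max_rounds=500, seed=0):
    """nodes: list of (local_data, D_j); sigma_j2s: per-node downlink bounds.
    local_step(w_received, data, t) performs the local gradient-descent
    computation of step S103 and returns w_j^{t+1} (assumed helper)."""
    rng = np.random.default_rng(seed)
    w = init_model
    for t in range(1, max_rounds + 1):
        # S103 at the nodes: train on the noisy copy of the broadcast model.
        local_models = [local_step(receive_model(w, s2, rng), data, t)
                        for (data, _), s2 in zip(nodes, sigma_j2s)]
        # S101-S102: upload, then noisy weighted aggregation at the center.
        w_next = aggregate_with_noise(local_models,
                                      [D_j for _, D_j in nodes], sigma2, rng)
        if np.linalg.norm(w_next - w) < tol:  # repeat until convergence
            return w_next
        w = w_next
    return w
```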
In order to illustrate the beneficial effects of the solutions provided by the embodiments of the present invention, the following description is made with reference to the performance test charts shown in FIG. 3 and FIG. 4.
Testing parameters: the noise parameters are set to fixed constants whose specific values are rendered only as an image in the published text; γ_t = 1/t^α and ρ_t = 1/t^β with 0.5 < β < α < 1; and λ = 0.5. The training sample set is the MNIST image data set, which contains 70000 pictures (60000 for training and 10000 for testing). For picture j in the training samples, the input value is x_j and the output value is y_j. The settable sample-related function F_j(·) is set to a support vector machine (SVM) in the experiment.
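These schedules and the SVM choice translate directly into code. The hinge loss below is a standard binary (±1 label) linear-SVM instantiation of F_j(·); the patent does not spell out its multiclass handling of MNIST, so the binary form and the concrete α, β values are assumptions within the stated range 0.5 < β < α < 1.

```python
import numpy as np

alpha, beta, lam = 0.9, 0.6, 0.5   # assumed values with 0.5 < beta < alpha < 1

def gamma(t):
    return t ** (-alpha)           # step size gamma_t = 1 / t^alpha

def rho(t):
    return t ** (-beta)            # mixing weight rho_t = 1 / t^beta

def hinge_loss(w, X, y):
    """Linear-SVM sample loss: mean(max(0, 1 - y_j * <w, x_j>)), y_j in {-1, +1}."""
    return np.maximum(0.0, 1.0 - y * (X @ w)).mean()

def hinge_grad(w, X, y):
    """Subgradient of the mean hinge loss with respect to w."""
    violators = ((1.0 - y * (X @ w)) > 0.0).astype(float)  # margin violators
    return -((violators * y)[:, None] * X).mean(axis=0)
```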
In order to illustrate the performance of the invention more intuitively, classical federated learning is used as the comparison item. Under the same simulated computing-network deployment structure, communication-noise input, and training process, the loss function of each computing node in the comparison item is the plain sample loss F_j(w), and for the t-th transmission the gradient-descent step used by computing node j is:

w_j^{t+1} = w̃_j^t − γ_{t+1} ∇F_j(w̃_j^t).
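For reference, the comparison item's local step is then a one-liner, reusing gamma and hinge_grad from the sketch above:

```python
def classical_local_step(w_received, data, t):
    # Baseline: w_j^{t+1} = w_received - gamma(t+1) * grad F_j(w_received),
    # with no robust terms.
    X, y = data
    return w_received - gamma(t + 1) * hinge_grad(w_received, X, y)
```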
in fig. 3, the abscissa represents the number of iterations, and the ordinate represents the test accuracy, and the higher the test accuracy, the better the design of the federal learning training process is. The test accuracy of the federal learning training result can be effectively improved, the effect of more accurately predicting data is achieved, and the advantage of the method is more obvious compared with the traditional federal learning along with the increase of the iteration times.
In FIG. 4, the abscissa represents the number of iterations and the ordinate represents the loss-function value; a smaller loss-function value indicates a better design of the federated-learning training process. As can be seen from FIG. 4, the invention effectively reduces the loss-function value during federated-learning training, so that the trained federated-learning model is more stable.
In conclusion, the invention expresses the influence of noise on the model parameters by designing the loss function, then performs local training with a gradient-descent algorithm, and completes the weighted average at the central node to realize federated learning, thereby solving for the optimal model, improving the accuracy of model estimation, and reducing the loss-function value of model training.
As shown in FIG. 5, which is a schematic structural diagram of an embodiment of the federated learning system for resisting communication noise disclosed in the present invention, the system may include a plurality of computing nodes and a central node, wherein:
The computing node is used for transmitting the corresponding local training model through a wireless channel;
In the initial state, at the computing nodes, the training-data sample sizes are D_1, D_2, ..., D_N, the local training models are w_1, w_2, ..., w_N, and N is the number of computing nodes. The total data-set size is D. The N computing nodes respectively transmit their corresponding local training models w_1, w_2, ..., w_N through wireless channels.
The central node is used for carrying out weighted average operation on each local training model transmitted through the wireless channel to obtain a global training model, and broadcasting the global training model to each computing node through the wireless channel;
For the t-th transmission process, at the central node, the global training model after the wireless channel transmission and the weighted average operation is:

w^t = Σ_{j=1}^{N} (D_j / D) · w_j^t + Δw^t,

where Δw^t is the noise caused by the wireless channel transmission, which satisfies ‖Δw^t‖² ≤ σ², σ² being a constant.
The central node then broadcasts the computed global training model to the various compute nodes over a wireless channel.
And the computing node is also used for performing gradient descent computation based on the received model and the loss function until convergence.
Each computing node performs gradient-descent computation using the received model and the loss function. For the t-th transmission process, the model received by the j-th computing node is:

w̃_j^t = w^t + Δw_j^t,

where Δw_j^t is the noise caused by the wireless channel transmission, which satisfies ‖Δw_j^t‖² ≤ σ_j², σ_j² being a constant.
The loss function of the node designed in the embodiment of the invention is:

[loss-function formula; rendered only as an image in the published text]

where ρ_t ∈ (0,1] and λ are set parameters; F_j(·) is a function related to the samples in computing node j, whose form can be set arbitrarily by the user without departing from the scope of the design of the embodiments of the present invention; δ_j^t is any value satisfying ‖δ_j^t‖² ≤ σ_j²; and g_j^t is the gradient accumulation function designed for the present invention, whose expression is likewise rendered only as an image in the published text.
Combining the received model and the loss function, the gradient-descent operation is:

w_j^{t+1} = w̃_j^t − γ_{t+1} · d_j^{t+1},

where γ_{t+1} is the step-size parameter set for this iteration and d_j^{t+1} denotes the gradient designed in the embodiment of the invention, whose closed form is rendered only as an image in the published text.
The above process is repeated until convergence.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A federated learning method for resisting communication noise, comprising:
each computing node transmits a corresponding local training model through a wireless channel;
the central node performs weighted average operation on each local training model transmitted through the wireless channel to obtain a global training model, and broadcasts the global training model to each computing node through the wireless channel;
and each computing node performs gradient descent computation based on the received model and the loss function until convergence.
2. The method of claim 1, wherein transmitting the corresponding local training model by each computing node via a wireless channel comprises:
N computing nodes respectively transmitting local training models w_1, w_2, ..., w_N through wireless channels.
3. The method of claim 2, wherein the central node performs weighted average operation on each local training model transmitted through the wireless channel to obtain a global training model, and the method comprises:
for the t-th transmission process, the central node obtains the global training model based on the formula

w^t = Σ_{j=1}^{N} (D_j / D) · w_j^t + Δw^t,

wherein w_j^t is the local training model of the j-th node, D_j is the training-data sample size of the j-th node, D is the total data size, and Δw^t is the noise caused by the wireless channel transmission, which satisfies ‖Δw^t‖² ≤ σ², σ² being a constant.
4. The method of claim 3, wherein for the t-th transmission process, the model received by the j-th node is:

w̃_j^t = w^t + Δw_j^t,

wherein Δw_j^t is the noise caused by the wireless channel transmission, which satisfies ‖Δw_j^t‖² ≤ σ_j², σ_j² being a constant.
5. The method of claim 4, wherein the loss function is:

[loss-function formula; rendered only as an image in the published text]

wherein ρ_t ∈ (0,1] and λ are set parameters, F_j(·) is a function related to the samples in computing node j, δ_j^t is any value satisfying ‖δ_j^t‖² ≤ σ_j², and g_j^t is a gradient accumulation function whose expression is likewise rendered only as an image in the published text.
6. A federated learning system for resisting communication noise, comprising: a plurality of computing nodes and a central node; wherein:
the computing node is used for transmitting the corresponding local training model through a wireless channel;
the central node is used for carrying out weighted average operation on each local training model transmitted through the wireless channel to obtain a global training model, and broadcasting the global training model to each computing node through the wireless channel;
and the computing node is also used for performing gradient descent computation based on the received model and the loss function until convergence.
7. The system of claim 6, wherein the N computing nodes are further configured to respectively transmit local training models w_1, w_2, ..., w_N through wireless channels.
8. The system of claim 7, wherein the central node is specifically configured to:
for the t-th transmission process, obtain the global training model based on the formula

w^t = Σ_{j=1}^{N} (D_j / D) · w_j^t + Δw^t,

wherein w_j^t is the local training model of the j-th node, D_j is the training-data sample size of the j-th node, D is the total data size, and Δw^t is the noise caused by the wireless channel transmission, which satisfies ‖Δw^t‖² ≤ σ², σ² being a constant.
9. The system of claim 8, wherein for the t-th transmission process, the model received by the j-th node is:

w̃_j^t = w^t + Δw_j^t,

wherein Δw_j^t is the noise caused by the wireless channel transmission, which satisfies ‖Δw_j^t‖² ≤ σ_j², σ_j² being a constant.
10. The system of claim 9, wherein the loss function is:

[loss-function formula; rendered only as an image in the published text]

wherein ρ_t ∈ (0,1] and λ are set parameters, F_j(·) is a function related to the samples in computing node j, δ_j^t is any value satisfying ‖δ_j^t‖² ≤ σ_j², and g_j^t is a gradient accumulation function whose expression is likewise rendered only as an image in the published text.
CN202011078479.2A 2020-10-10 2020-10-10 Federated learning method and system for resisting communication noise Pending CN112235062A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011078479.2A CN112235062A (en) Federated learning method and system for resisting communication noise

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011078479.2A CN112235062A (en) Federated learning method and system for resisting communication noise

Publications (1)

Publication Number Publication Date
CN112235062A (en) 2021-01-15

Family

ID=74113187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011078479.2A Pending CN112235062A (en) 2020-10-10 2020-10-10 Federal learning method and system for resisting communication noise

Country Status (1)

Country Link
CN (1) CN112235062A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190340534A1 (en) * 2016-09-26 2019-11-07 Google Llc Communication Efficient Federated Learning
CN111582504A (en) * 2020-05-14 2020-08-25 深圳前海微众银行股份有限公司 Federal modeling method, device, equipment and computer readable storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FAN ANG et al.: "Robust Federated Learning Under Worst-Case Model", 2020 IEEE Wireless Communications and Networking Conference (WCNC) *
FAN ANG et al.: "Robust Federated Learning With Noisy Communication", IEEE Transactions on Communications *
曹晓雯 (CAO Xiaowen) et al.: "Over-the-Air Computation for Edge Intelligence" (面向边缘智能的空中计算), ZTE Technology Journal (中兴通讯技术) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022188790A1 (en) * 2021-03-11 2022-09-15 华为技术有限公司 Communication method and device
CN113504999A (en) * 2021-08-05 2021-10-15 重庆大学 Scheduling and resource allocation method for high-performance hierarchical federated edge learning
CN113504999B (en) * 2021-08-05 2023-07-04 重庆大学 Scheduling and resource allocation method for high-performance hierarchical federal edge learning
CN114065863A (en) * 2021-11-18 2022-02-18 北京百度网讯科技有限公司 Method, device and system for federal learning, electronic equipment and storage medium
CN114065863B (en) * 2021-11-18 2023-08-29 北京百度网讯科技有限公司 Federal learning method, apparatus, system, electronic device and storage medium

Similar Documents

Publication Publication Date Title
Zhang et al. Gradient statistics aware power control for over-the-air federated learning
CN112668128B (en) Method and device for selecting terminal equipment nodes in federal learning system
US11023561B2 (en) Systems and methods of distributed optimization
CN112235062A (en) Federated learning method and system for resisting communication noise
WO2024027164A1 (en) Adaptive personalized federated learning method supporting heterogeneous model
US10984319B2 (en) Neural architecture search
Zhang et al. Federated learning with adaptive communication compression under dynamic bandwidth and unreliable networks
CN111628946B (en) Channel estimation method and receiving equipment
Li et al. Quantized event-triggered communication based multi-agent system for distributed resource allocation optimization
CN104506378A (en) Data flow prediction device and method
CN112948885B (en) Method, device and system for realizing privacy protection of multiparty collaborative update model
CN115358487A (en) Federal learning aggregation optimization system and method for power data sharing
CN116187483A (en) Model training method, device, apparatus, medium and program product
CN115829055B (en) Federal learning model training method, federal learning model training device, federal learning model training computer equipment and federal learning model storage medium
CN117349672B (en) Model training method, device and equipment based on differential privacy federal learning
CN113723620A (en) Terminal scheduling method and device in wireless federal learning
CN112836822A (en) Federal learning strategy optimization method and device based on width learning
CN117009053A (en) Task processing method of edge computing system and related equipment
Behmandpoor et al. Federated learning based resource allocation for wireless communication networks
Fan et al. Cb-dsl: Communication-efficient and byzantine-robust distributed swarm learning on non-iid data
CN114595815A (en) Transmission-friendly cloud-end cooperation training neural network model method
CN116887205A (en) Wireless federal segmentation learning algorithm for cooperative intelligence of Internet of things
CN117746172A (en) Heterogeneous model polymerization method and system based on domain difference perception distillation
US12015507B2 (en) Training in communication systems
CN117278367B (en) Distributed compressed sensing sparse time-varying channel estimation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20210115)