CN116384513A - Cloud-edge-end collaborative learning system and method - Google Patents

Cloud-edge-end collaborative learning system and method

Info

Publication number
CN116384513A
CN116384513A (application CN202310620160.5A)
Authority
CN
China
Prior art keywords
local
model
global
edge
models
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310620160.5A
Other languages
Chinese (zh)
Inventor
郭永安
王国成
王宇翱
孙洪波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Edge Intelligence Research Institute Nanjing Co ltd
Nanjing University of Posts and Telecommunications
Original Assignee
Edge Intelligence Research Institute Nanjing Co ltd
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Edge Intelligence Research Institute Nanjing Co ltd, Nanjing University of Posts and Telecommunications filed Critical Edge Intelligence Research Institute Nanjing Co ltd
Priority to CN202310620160.5A priority Critical patent/CN116384513A/en
Publication of CN116384513A publication Critical patent/CN116384513A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/502Proximity
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/50Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a cloud-edge-end collaborative learning system comprising a user equipment layer, an edge server layer and a cloud server layer. The cloud server layer is used for controlling the cloud server to aggregate the local models from the edge servers into a global model and to broadcast the global model according to the result of a global accuracy judgment. The edge server layer is used for controlling the edge server to receive the global model from the cloud server and broadcast it to the user equipment as a local model. The user equipment layer is used for controlling the user equipment to train the received local model to obtain an updated local model. The edge server layer is also used for aggregating the received local models and taking the aggregated model as its local model; the local accuracy of this model is judged, and if it meets the requirement the local model is uploaded to the cloud server for aggregation, otherwise it is returned for continued training until the requirement is met.

Description

Cloud-edge-end collaborative learning system and method
Technical Field
The invention belongs to the technical field of machine learning, and particularly relates to a cloud edge end collaborative learning system and method.
Background
With the remarkable increase in the number of Internet of Things devices, the volume of data generated by the edge network is also growing rapidly. Much of this data is privacy-sensitive by nature, and processing and analyzing it requires machine learning algorithms. Conventional machine learning algorithms require a central processor to collect the data for model training; however, for reasons of data privacy and security, user devices may be unwilling to share their local data. To solve this problem, a distributed machine learning algorithm, federated learning (FL), was developed: by moving the data storage and model training stages of machine learning to local users, who interact with a central server only to update the model, it effectively protects users' privacy.
Existing federated learning aggregates and updates models at a cloud server, which has several drawbacks. First, when FL is implemented over a wireless network, the performance of user devices is poor compared with that of a cloud server; when the learning task is complex and the local model is large, training with limited computing resources increases training delay and reduces learning performance. Second, because wireless resources are limited and transmission distances are long, communication with the cloud server is unpredictable and unreliable, which reduces training efficiency and model accuracy; moreover, when the number of user devices is huge and no client scheduling is adopted, all user devices participate in every training round and the balance between exploration and exploitation is difficult to achieve. Finally, in existing model training processes it is difficult to determine the numbers of local and global training rounds dynamically; the iteration counts are set in advance, which reduces model training efficiency and wastes computing resources.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a cloud-edge collaborative learning system and a cloud-edge collaborative learning method, which can improve learning efficiency through cloud-edge collaborative learning.
The technical problems to be solved by the invention are realized by the following technical scheme:
in a first aspect, a cloud edge collaborative learning system is provided, including:
a user equipment layer, an edge server layer and a cloud server layer;
the cloud server layer is used for controlling the cloud server to aggregate the local model from the edge server into a global model, judging the global precision of the global model, and determining whether to broadcast the global model to the edge server according to a judging result;
the edge server layer is used for controlling the edge server to receive the global model from the cloud server and broadcasting the global model as a local model to the user equipment;
the user equipment layer is used for controlling the user equipment to train the received local model to obtain an updated local model, and to upload the updated local model to the edge server;
the edge server layer is also used for controlling the edge server to aggregate the received local models and take the aggregated model as its local model; the local accuracy of this model is then judged: if the accuracy meets the requirement, the local model is uploaded to the cloud server for aggregation; otherwise the local model is broadcast back to the user equipment for continued training until the requirement is met.
With reference to the first aspect, further, the cloud server layer includes a cloud receiving module, a global aggregation module, a global precision judging module and a cloud sending module;
the cloud receiving module is used for receiving and storing the local model sent by the edge server layer;
the global aggregation module is used for carrying out global aggregation on the received local models and updating the global models;
the global precision judging module is used for judging whether the aggregated global model meets the global precision requirement;
the cloud sending module is used for sending the global model to the edge server layer.
With reference to the first aspect, further, the edge server layer includes: the device comprises a user equipment selection module, an edge receiving module, a local aggregation module, a local precision judging module and an edge sending module;
the user equipment selecting module is used for selecting corresponding user equipment from the user equipment layers to form a user equipment subset;
the edge receiving module is used for receiving and storing the global model broadcast by the cloud server, the edge server's own local model, and the local models sent by the user equipment in the selected user equipment subset;
the local aggregation module is used for locally aggregating the received local model and updating the previous local model;
the local precision judging module is used for judging whether the local model meets the local precision requirement or not;
the edge sending module is used for sending the local model to the cloud server layer and the user equipment layer.
With reference to the first aspect, further, the user equipment layer includes: a local receiving module, a local training module and a local transmitting module;
the local receiving module is used for receiving a local model broadcast by the edge server;
the local training module is used for training the local model according to the user equipment data and taking the trained model as the updated local model;
the local sending module is used for sending the local model to the edge server layer for local aggregation.
In a second aspect, a cloud edge end collaborative learning method is provided, including:
broadcasting a global model to each edge server by the cloud server;
the edge server broadcasts the received global model as a local model to each user device;
the user equipment trains the received local model based on its own data to obtain an updated local model and uploads it to the edge server;
the edge server aggregates the received local models and takes the aggregated model as its local model; it then judges the local accuracy of this model: if the accuracy meets the requirement, the local model is uploaded to the cloud server for aggregation; if not, the local model is broadcast back to the user equipment for continued training until the requirement is met;
the cloud server aggregates the received local models to obtain a new global model and judges its global accuracy: if the accuracy meets the requirement, model training ends; if not, the cloud server broadcasts the new global model to each edge server, which broadcasts it as a local model to each user equipment for retraining, until the global accuracy of the new global model meets the requirement.
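The five steps above form a nested training loop: device-side training, edge-side aggregation and accuracy check, then cloud-side aggregation and accuracy check. The following is a minimal runnable sketch under simplifying assumptions (scalar model parameters, a quadratic per-device loss, and fixed round counts in place of the accuracy-driven stopping rules); all names are illustrative and not taken from the patent:

```python
def local_train(w, data, lr=0.1, steps=5):
    """One user device: a few gradient steps on F_n(w) = mean((w - x)^2)."""
    for _ in range(steps):
        grad = 2 * sum(w - x for x in data) / len(data)
        w -= lr * grad
    return w

def aggregate(models, sizes):
    """Data-size-weighted average of model parameters (FedAvg-style)."""
    total = sum(sizes)
    return sum(w * s for w, s in zip(models, sizes)) / total

def hierarchical_fl(edge_data, global_rounds=20, local_rounds=3):
    """edge_data: one list of per-device datasets for each edge server."""
    w_global = 0.0                           # cloud broadcasts initial model
    for _ in range(global_rounds):           # global (cloud) iterations
        edge_models, edge_sizes = [], []
        for devices in edge_data:            # each edge server
            w_edge = w_global
            for _ in range(local_rounds):    # local (edge) iterations
                trained = [local_train(w_edge, d) for d in devices]
                w_edge = aggregate(trained, [len(d) for d in devices])
            edge_models.append(w_edge)
            edge_sizes.append(sum(len(d) for d in devices))
        w_global = aggregate(edge_models, edge_sizes)   # global aggregation
    return w_global
```

In this toy setting the loop converges to the data-size-weighted mean of all samples, which is the minimizer of the global loss.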
With reference to the second aspect, further, before aggregating the received local models, the edge server selects a part of them for subsequent aggregation using a multi-armed bandit client scheduling scheme.
With reference to the second aspect, further, the local accuracy requirement is as follows:

$$\left\|\nabla F_m\big(w_n^{(l)}\big)\right\| \le \eta \left\|\nabla F_m\big(w_n^{(0)}\big)\right\|$$

where $F_m$ denotes the loss function of the local model of the $m$-th edge server, $w_n^{(l)}$ denotes the model parameters of the $n$-th user equipment in the $l$-th round of training, and $\eta$ is the local accuracy standard.
With reference to the second aspect, further, the global accuracy requirement is as follows:

$$\left\|\nabla F\big(w_m^{(k)}\big)\right\| \le \varepsilon_0 \left\|\nabla F\big(w_m^{(0)}\big)\right\|$$

where $F$ denotes the loss function of the global model, $w_m^{(k)}$ denotes the parameters of the local model of the $m$-th edge server in the $k$-th round of global training, and $\varepsilon_0$ is the global accuracy criterion.
The invention has the beneficial effects that:
according to the invention, a cloud side-end collaborative FL layered architecture is constructed, and the low-delay model training is realized by utilizing the high-performance communication and calculation advantages of the edge server and the cloud server compared with those of the user equipment.
The invention adopts a multi-armed bandit client scheduling scheme, which reduces the number of training periods and the duration of each period, minimizing the training latency of the wireless hierarchical FL system.
The invention sets the local accuracy and the global accuracy separately to determine the numbers of local and global training rounds, making the iteration counts dynamic and improving both the accuracy and the efficiency of model training.
Drawings
FIG. 1 is a schematic diagram of a cloud edge collaborative learning system according to the present invention;
FIG. 2 is a schematic diagram of a hierarchical architecture of a cloud-edge collaborative learning system according to the present invention;
fig. 3 is a flowchart of a cloud edge collaborative learning method in the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order to better understand the present invention, the following describes related technologies in the technical solution of the present invention.
Example 1:
As shown in fig. 1 and fig. 2, in this embodiment the present invention provides a cloud-edge-end collaborative learning system comprising a user equipment layer, an edge server layer and a cloud server layer, all connected through wireless network communication. We assume that the whole system has one cloud server, $M$ edge servers and $N$ user equipments; during learning and training, each edge server serves the set $\mathcal{N}_m$ of user equipments within its range.
The user equipment layer, the edge server layer and the cloud server layer are respectively used for controlling the user equipment, the edge server and the cloud server.
The cloud server layer includes:
the system comprises a cloud receiving module, a global aggregation module, a global precision judging module and a cloud sending module;
the cloud receiving module is used for receiving and storing the local model sent by the edge server layer;
the global aggregation module is used for carrying out global aggregation on the received local models and updating the global models;
the global precision judging module is used for judging whether the aggregated global model meets the global precision requirement;
the cloud sending module is used for sending the global model to the edge server layer.
The edge server layer includes:
the device comprises a user equipment selection module, an edge receiving module, a local aggregation module, a local precision judging module and an edge sending module;
The user equipment selecting module is used for selecting corresponding user equipment from the user equipment layer to form a user equipment subset. During local aggregation it is unnecessary to aggregate the models of all user equipment, because nearby user equipment are similar to a large extent; selecting only some typical models through the user equipment selection module greatly reduces the computational load of the system.
The edge receiving module is used for receiving and storing a global model, a local model and a local model, wherein the global model is broadcasted by the cloud server, the local model is self, and the local model is sent by user equipment in the selected user equipment subset;
the local aggregation module is used for locally aggregating the received local model and updating the previous local model;
the local precision judging module is used for judging whether the local model meets the local precision requirement;
the edge sending module is used for sending the local model to the cloud server layer and the user equipment layer.
The user equipment layer includes:
a local receiving module, a local training module and a local transmitting module;
the local receiving module is used for receiving the local model broadcast by the edge server;
the local training module is used for training the local model according to the user equipment data and updating the trained local model into a local model;
the local sending module is used for sending the local model to the edge server layer for local aggregation.
Example 2:
as shown in fig. 3, the invention also provides a cloud edge end collaborative learning method, which comprises the following steps:
Firstly, the cloud server broadcasts the global model (the model and its loss function) through the cloud sending module to all $M$ edge servers, and each edge server broadcasts the global model as a local model to the $N_m$ user equipments within its range. Each user equipment receives and stores the model through the local receiving module and, based on its own data set and computing capacity, trains it through the local training module to obtain a local model. We define the local model (the loss function over the data set of user equipment $n$) as follows:

$$F_n(w) = \frac{1}{D_n} \sum_{i \in \mathcal{D}_n} f(w; x_i, y_i) \qquad (1)$$
where $D_n$ denotes the size of the local data set stored by the $n$-th user equipment. The given set of data samples is $\{x_i, y_i\} \in \mathcal{D}_n$, where $x_i$ is an input sample vector with $d$ features and $y_i$ is the labeled output value of sample $x_i$. In a typical learning problem, for sample data with input $x_i$, the task is to find the model parameters $w$ that characterize the output $y_i$ under the loss function $f(w; x_i, y_i)$.
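Equation (1) is the average of a per-sample loss over the device's data set. A small sketch, taking squared error as the per-sample loss $f$ purely for illustration:

```python
def local_loss(w, samples, f=lambda w, x, y: (w * x - y) ** 2):
    """F_n(w): average of the per-sample loss f(w; x_i, y_i) over the
    device's data set. The squared-error f is an illustrative choice."""
    return sum(f(w, x, y) for x, y in samples) / len(samples)
```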
The user equipment then uploads the model parameters $w_n$ and the loss function $F_n(w_n)$ to the edge server through the local sending module. The edge server selects a subset of user equipments through the user equipment selection module based on a client scheduling scheme; this example uses a multi-armed bandit (MAB) scheme that treats the edge server as a player and defines the instantaneous reward of pulling an arm as the reduction in training loss. The average reward of the training loss of a user equipment is then defined as:

$$\bar{r}_n(t) = \frac{1}{z_n(t)} \sum_{\tau < t:\, n \in S(\tau)} r_n(\tau) \qquad (2)$$

where $z_n(t)$ denotes the number of times user equipment $n$ has been selected before the current round $t$, $S(\tau)$ is the subset selected in round $\tau$, $r_n(\tau)$ is the instantaneous reward, and $\bar{r}_n(t)$ denotes the average reward of the training loss of user equipment $n$ in round $t$.
The implementation steps of the MAB-based client scheduling scheme are as follows:
Step 1: in the first rounds, select user equipments from the set $\mathcal{N}_m$ in turn to obtain an initial estimate of the training delay of each client.
Step 2: in the subsequent rounds, adopt a greedy approach that either exploits the known information or explores unknown information.
Step 3: the edge server selects the subset of $N$ user equipments with the highest average reward with probability $1-\epsilon$, where $\epsilon$ is the greedy constant.
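Steps 1-3 describe an epsilon-greedy bandit over clients. Below is a sketch of the selection rule and of the incremental update of the average reward from equation (2); the round-robin initialization and the tie-breaking are assumptions for illustration, not details from the patent:

```python
import random

def select_clients(avg_reward, n_select, t, t_init, eps=0.1, rng=random):
    """MAB client scheduling: round-robin during the first t_init rounds to
    estimate each client, then epsilon-greedy on the average reward."""
    clients = list(avg_reward)
    if t < t_init:                      # Step 1: initial estimation phase
        return [clients[(t * n_select + i) % len(clients)]
                for i in range(n_select)]
    if rng.random() < eps:              # Step 2: explore with probability eps
        return rng.sample(clients, n_select)
    # Step 3: exploit with probability 1 - eps: highest average reward first
    return sorted(clients, key=lambda c: avg_reward[c], reverse=True)[:n_select]

def update_reward(avg_reward, counts, client, reward):
    """Incremental mean update of the bandit statistics after a round."""
    counts[client] += 1
    avg_reward[client] += (reward - avg_reward[client]) / counts[client]
```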
The edge server receives and stores, through the edge receiving module, the model parameters and loss functions $\{w_n, F_n(w_n)\}$ uploaded by the $N$ user equipments in the subset, and performs local aggregation through the local aggregation module. The local model, i.e., the local loss function of the edge server, is defined as:

$$F_m(w) = \frac{1}{D_m} \sum_{n \in \mathcal{N}_m} D_n F_n(w) \qquad (3)$$

where $F_m$ denotes the loss function of the local model of the $m$-th edge server and $D_m = \sum_{n \in \mathcal{N}_m} D_n$ denotes the size of the edge server's data set.
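Equation (3) weights each device's loss by its data-set size. A minimal sketch, again with an illustrative squared-error per-sample loss:

```python
def edge_loss(w, device_datasets, f=lambda w, x, y: (w * x - y) ** 2):
    """F_m(w): data-size-weighted average of the per-device losses F_n(w),
    as in equation (3). f is an illustrative per-sample loss."""
    def device_loss(data):
        return sum(f(w, x, y) for x, y in data) / len(data)
    total = sum(len(d) for d in device_datasets)
    return sum(len(d) * device_loss(d) for d in device_datasets) / total
```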
After obtaining the local model, its local precision is judged. According to the given local accuracy standard $\eta$, the aggregated local model is checked: if the local accuracy standard $\eta$ is not satisfied, the edge server broadcasts the updated local model to the user equipments and local iterative training continues until the standard is met; if it is satisfied, the edge server uploads the updated local model to the cloud server for global aggregation. The accuracy requirement on $\eta$ is:

$$\left\|\nabla F_m\big(w_n^{(l)}\big)\right\| \le \eta \left\|\nabla F_m\big(w_n^{(0)}\big)\right\| \qquad (4)$$

where $w_n^{(l)}$ denotes the parameters of the local model of the $n$-th user equipment in the $l$-th round of training and $\nabla$ denotes the gradient of the loss function.
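The stopping rule in equation (4) (and its global counterpart in equation (6)) compares the current gradient norm with the gradient norm at the start of training. A small sketch for gradients stored as plain lists:

```python
def meets_accuracy(grad_now, grad_init, threshold):
    """Gradient-norm criterion ||grad_now|| <= threshold * ||grad_init||,
    usable for both the local (eta) and the global stopping rules."""
    norm = lambda g: sum(x * x for x in g) ** 0.5
    return norm(grad_now) <= threshold * norm(grad_init)
```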
Finally, once every edge server $m = 1, \dots, M$ has trained its local model to the local accuracy $\eta$, each edge server uploads the model parameters $w_m$ and loss function $F_m(w_m)$ to the cloud server through the edge sending module. The cloud server receives and stores the uploaded local models through the cloud receiving module and performs global aggregation through the global aggregation module to obtain the global model. Minimizing the loss function of the global model is expressed as:

$$\min_{w} F(w) = \frac{1}{D} \sum_{m=1}^{M} D_m F_m(w) \qquad (5)$$

where $F$ denotes the loss function of the global model and $D = \sum_{m=1}^{M} D_m$ denotes the size of the total data set.
Through the global precision judging module, the cloud server checks the aggregated global model against the given global accuracy standard $\varepsilon_0$: if the global accuracy is not met, the cloud server broadcasts the updated global model to the edge servers and global iteration continues until the standard is met; if it is met, model training ends. The given global accuracy standard $\varepsilon_0$ satisfies:

$$\left\|\nabla F\big(w_m^{(k)}\big)\right\| \le \varepsilon_0 \left\|\nabla F\big(w_m^{(0)}\big)\right\| \qquad (6)$$

where $w_m^{(k)}$ denotes the parameters of the local model of the $m$-th edge server in the $k$-th round of global training.
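The number of training rounds here is determined dynamically by the accuracy criterion rather than fixed in advance. A one-dimensional sketch of such accuracy-driven stopping (the quadratic loss and learning rate are illustrative assumptions):

```python
def train_until_accuracy(data, threshold=0.01, lr=0.2, max_rounds=10_000):
    """Gradient descent on F(w) = mean((w - x)^2), stopping as soon as
    ||grad|| <= threshold * ||grad at w0||; the round count is dynamic."""
    w = 0.0
    grad = lambda w: 2 * sum(w - x for x in data) / len(data)
    g0 = abs(grad(w))
    rounds = 0
    while abs(grad(w)) > threshold * g0 and rounds < max_rounds:
        w -= lr * grad(w)
        rounds += 1
    return w, rounds
```

On `data=[1.0, 3.0]` this stops after roughly ten rounds with `w` close to the minimizer 2.0, instead of running a preset iteration budget.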
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as being within the scope of the invention.

Claims (8)

1. A cloud-edge-end collaborative learning system, characterized by comprising: a user equipment layer, an edge server layer and a cloud server layer;
the cloud server layer is used for controlling the cloud server to aggregate the local model from the edge server into a global model, judging the global precision of the global model, and determining whether to broadcast the global model to the edge server according to a judging result;
the edge server layer is used for controlling the edge server to receive the global model from the cloud server and broadcasting the global model as a local model to the user equipment;
the user equipment layer is used for controlling the user equipment to train the received local model to obtain an updated local model, and to upload the updated local model to the edge server;
the edge server layer is also used for controlling the edge server to aggregate the received local models and take the aggregated model as its local model; the local accuracy of this model is then judged: if the accuracy meets the requirement, the local model is uploaded to the cloud server for aggregation; otherwise the local model is broadcast back to the user equipment for continued training until the requirement is met.
2. The cloud edge collaborative learning system according to claim 1, wherein the cloud server layer comprises a cloud receiving module, a global aggregation module, a global precision judging module and a cloud sending module;
the cloud receiving module is used for receiving and storing the local model sent by the edge server layer;
the global aggregation module is used for carrying out global aggregation on the received local models and updating the global models;
the global precision judging module is used for judging whether the aggregated global model meets the global precision requirement;
the cloud sending module is used for sending the global model to the edge server layer.
3. The cloud-edge collaborative learning system according to claim 1, wherein the edge server layer comprises: the device comprises a user equipment selection module, an edge receiving module, a local aggregation module, a local precision judging module and an edge sending module;
the user equipment selecting module is used for selecting corresponding user equipment from the user equipment layers to form a user equipment subset;
the edge receiving module is used for receiving and storing the global model broadcast by the cloud server, the edge server's own local model, and the local models sent by the user equipment in the selected user equipment subset;
the local aggregation module is used for locally aggregating the received local model and updating the previous local model;
the local precision judging module is used for judging whether the local model meets the local precision requirement or not;
the edge sending module is used for sending the local model to the cloud server layer and the user equipment layer.
4. The cloud-edge collaborative learning system according to claim 1, wherein the user equipment layer comprises: a local receiving module, a local training module and a local transmitting module;
the local receiving module is used for receiving a local model broadcast by the edge server;
the local training module is used for training the local model according to the user equipment data and taking the trained model as the updated local model;
the local sending module is used for sending the local model to the edge server layer for local aggregation.
5. A cloud-edge-end collaborative learning method, characterized by comprising the following steps:
broadcasting a global model to each edge server by the cloud server;
the edge server broadcasts the received global model as a local model to each user device;
the user equipment trains the received local model based on its own data to obtain an updated local model and uploads it to the edge server;
the edge server aggregates the received local models and takes the aggregated model as its local model; it then judges the local accuracy of this model: if the accuracy meets the requirement, the local model is uploaded to the cloud server for aggregation; if not, the local model is broadcast back to the user equipment for continued training until the requirement is met;
the cloud server aggregates the received local models to obtain a new global model and judges its global accuracy: if the accuracy meets the requirement, model training ends; if not, the cloud server broadcasts the new global model to each edge server, which broadcasts it as a local model to each user equipment for retraining, until the global accuracy of the new global model meets the requirement.
6. The cloud-edge-end collaborative learning method according to claim 5, wherein, before aggregating the received local models, the edge server selects a subset of the received local models for subsequent aggregation using a multi-armed-bandit client scheduling scheme.
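Claim 6 does not name a particular bandit algorithm; UCB1 is one standard multi-armed-bandit choice and is sketched below with each user equipment as an arm. The class name, the Bernoulli "usefulness" reward, and the `true_quality` values are all illustrative assumptions, not part of the claim.

```python
import math
import random

class UCB1Scheduler:
    """UCB1 bandit: each arm is a user equipment (client)."""

    def __init__(self, n_clients):
        self.counts = [0] * n_clients    # times each client was selected
        self.values = [0.0] * n_clients  # running mean reward per client
        self.t = 0

    def select(self):
        self.t += 1
        # Play every arm once before using the confidence bound.
        for i, c in enumerate(self.counts):
            if c == 0:
                return i
        ucb = [v + math.sqrt(2 * math.log(self.t) / c)
               for v, c in zip(self.values, self.counts)]
        return ucb.index(max(ucb))

    def update(self, i, reward):
        self.counts[i] += 1
        n = self.counts[i]
        self.values[i] += (reward - self.values[i]) / n  # incremental mean

random.seed(0)
true_quality = [0.2, 0.5, 0.9]        # hidden per-client usefulness (made up)
sched = UCB1Scheduler(len(true_quality))
for _ in range(500):
    i = sched.select()
    reward = random.random() < true_quality[i]  # Bernoulli reward
    sched.update(i, float(reward))
# After enough rounds the most useful client dominates the selection counts.
```

In the claimed setting the reward would be some measure of how much a client's upload improves the aggregated local model; the scheduler then concentrates the limited aggregation budget on the clients that have historically contributed most, while the confidence term keeps occasionally probing the others.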
7. The cloud-edge-end collaborative learning method according to claim 5, wherein the local accuracy requirement is:

$$F_l\big(w_i^{(t)}\big) \le \varepsilon_1$$

where $F_l$ denotes the loss function of the local model of the $l$-th edge server, $w_i^{(t)}$ denotes the model parameters of the $i$-th user equipment at the $t$-th training round, and $\varepsilon_1$ is the local accuracy standard.
8. The cloud-edge-end collaborative learning method according to claim 5, wherein the global accuracy requirement is:

$$F\big(w_l^{(t)}\big) \le \varepsilon_0$$

where $F$ denotes the loss function of the global model, $w_l^{(t)}$ denotes the parameters of the local model of the $l$-th edge server at the $t$-th training round, and $\varepsilon_0$ is the global accuracy standard.
CN202310620160.5A 2023-05-30 2023-05-30 Cloud-edge-end collaborative learning system and method Pending CN116384513A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310620160.5A CN116384513A (en) 2023-05-30 2023-05-30 Cloud-edge-end collaborative learning system and method

Publications (1)

Publication Number Publication Date
CN116384513A true CN116384513A (en) 2023-07-04

Family

ID=86971351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310620160.5A Pending CN116384513A (en) 2023-05-30 2023-05-30 Cloud-edge-end collaborative learning system and method

Country Status (1)

Country Link
CN (1) CN116384513A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113419857A * 2021-06-24 2021-09-21 Guangdong University of Technology Federated learning method and system based on edge digital twin association
US20220351860A1 (en) * 2020-02-11 2022-11-03 Ventana Medical Systems, Inc. Federated learning system for training machine learning algorithms and maintaining patient privacy
CN115408151A * 2022-08-23 2022-11-29 Harbin Institute of Technology Method for accelerating federated learning training

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117010485A * 2023-10-08 2023-11-07 Zhejiang Lab Distributed model training system and gradient reduction method in edge scenarios
CN117010485B * 2023-10-08 2024-01-26 Zhejiang Lab Distributed model training system and gradient reduction method in edge scenarios

Similar Documents

Publication Publication Date Title
CN113242568B (en) Task unloading and resource allocation method in uncertain network environment
Chen et al. Intelligent resource allocation management for vehicles network: An A3C learning approach
US11948075B2 (en) Generating discrete latent representations of input data items
US11954879B2 (en) Methods, systems and apparatus to optimize pipeline execution
TW202131661A (en) Device and method for network optimization and non-transitory computer-readable medium
US20240135191A1 (en) Method, apparatus, and system for generating neural network model, device, medium, and program product
CN114125785A (en) Low-delay high-reliability transmission method, device, equipment and medium for digital twin network
WO2012106885A1 (en) Latent dirichlet allocation-based parameter inference method, calculation device and system
US9712612B2 (en) Method for improving mobile network performance via ad-hoc peer-to-peer request partitioning
Zhang et al. Federated learning with adaptive communication compression under dynamic bandwidth and unreliable networks
US20200118007A1 (en) Prediction model training management system, method of the same, master apparatus and slave apparatus for the same
CN103974097A (en) Personalized user-generated video prefetching method and system based on popularity and social networks
US10592578B1 (en) Predictive content push-enabled content delivery network
CN113469325A (en) Layered federated learning method, computer equipment and storage medium for edge aggregation interval adaptive control
CN116384513A (en) 2023-07-04 Cloud-edge-end collaborative learning system and method
WO2023116138A1 (en) Modeling method for multi-task model, promotional content processing method, and related apparatuses
CN108376099B (en) Mobile terminal calculation migration method for optimizing time delay and energy efficiency
WO2022228390A1 (en) Media content processing method, apparatus and device, and storage medium
Saputra et al. Federated learning framework with straggling mitigation and privacy-awareness for AI-based mobile application services
Xu et al. Joint foundation model caching and inference of generative AI services for edge intelligence
CN112218114B (en) Video cache control method, device and computer readable storage medium
Wu et al. Deep reinforcement learning based vehicle selection for asynchronous federated learning enabled vehicular edge computing
CN115210717A (en) Hardware optimized neural architecture search
CN112655005B (en) Dynamic small batch size
Atan et al. Ai-empowered fast task execution decision for delay-sensitive iot applications in edge computing networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230704
