CN116384513A - Cloud-edge-end collaborative learning system and method - Google Patents
Cloud-edge-end collaborative learning system and method
- Publication number
- CN116384513A CN116384513A CN202310620160.5A CN202310620160A CN116384513A CN 116384513 A CN116384513 A CN 116384513A CN 202310620160 A CN202310620160 A CN 202310620160A CN 116384513 A CN116384513 A CN 116384513A
- Authority
- CN
- China
- Prior art keywords
- local
- model
- global
- edge
- models
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/502—Proximity
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/50—Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate
Abstract
The invention discloses a cloud-edge-end collaborative learning system comprising a user equipment layer, an edge server layer and a cloud server layer. The cloud server layer controls the cloud server to aggregate the local models from the edge servers into a global model and, according to a global-accuracy judgment, broadcast the global model back to the edge servers. The edge server layer controls each edge server to receive the global model from the cloud server and broadcast it to the user equipment as a local model. The user equipment layer controls each user equipment to train the received local model on its own data and upload the trained local model. The edge server layer further aggregates the received local models into an updated local model and judges its local accuracy: if the accuracy meets the requirement, the local model is uploaded to the cloud server for aggregation; otherwise it is broadcast back to the user equipment for continued training until the requirement is met.
Description
Technical Field
The invention belongs to the technical field of machine learning, and particularly relates to a cloud-edge-end collaborative learning system and method.
Background
With the rapid growth in the number of Internet-of-Things devices, the volume of data generated at the network edge has also increased sharply. Much of this data is privacy-sensitive, and processing and analyzing it requires machine learning algorithms. Conventional machine learning requires a central processor to collect the data for model training; however, because of data-privacy concerns, user devices may be unwilling to share their local data. To solve this problem, a distributed machine learning paradigm, federated learning (FL), was developed: the data storage and model training stages are moved to the local users, which interact with a central server only to exchange model updates, effectively protecting user privacy.
Existing federated learning aggregates and updates models at a cloud server, which raises three problems. First, when FL is deployed over a wireless network, user devices have far less computing power than the cloud server; when the learning task is complex and the local model is large, training with limited computing resources increases training delay and degrades learning performance. Second, because wireless resources are limited and the transmission distance is long, communication with the cloud server can be unpredictable and unreliable, which reduces training efficiency and model accuracy; moreover, when the number of user devices is huge and no client scheduling is used, all devices participate in every training round, making it difficult to balance exploration and exploitation. Finally, in existing model training it is hard to determine the numbers of local and global training rounds dynamically; the iteration counts are fixed in advance, which reduces training efficiency and wastes computing resources.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a cloud-edge-end collaborative learning system and method, which improve learning efficiency through cloud-edge-end collaboration.
The technical problems to be solved by the invention are realized by the following technical scheme:
In a first aspect, a cloud-edge-end collaborative learning system is provided, including:
a user equipment layer, an edge server layer and a cloud server layer;
the cloud server layer is used for controlling the cloud server to aggregate the local model from the edge server into a global model, judging the global precision of the global model, and determining whether to broadcast the global model to the edge server according to a judging result;
the edge server layer is used for controlling the edge server to receive the global model from the cloud server and broadcasting the global model as a local model to the user equipment;
the user equipment layer is used for controlling the user equipment to train the received local model to obtain a local model, and uploading the local model to the edge server;
the edge server layer is also used for controlling the edge server to aggregate the received local models, take the aggregated model as the local model, and judge the local accuracy of the local model: if the local accuracy meets the requirement, the local model is uploaded to the cloud server for aggregation; otherwise it is broadcast back to the user equipment for continued training until the requirement is met.
With reference to the first aspect, further, the cloud server layer includes a cloud receiving module, a global aggregation module, a global precision judging module and a cloud sending module;
the cloud receiving module is used for receiving and storing the local model sent by the edge server layer;
the global aggregation module is used for carrying out global aggregation on the received local models and updating the global models;
the global precision judging module is used for judging whether the aggregated global model meets the global precision requirement;
the cloud sending module is used for sending the global model to the edge server layer.
With reference to the first aspect, further, the edge server layer includes: the device comprises a user equipment selection module, an edge receiving module, a local aggregation module, a local precision judging module and an edge sending module;
the user equipment selecting module is used for selecting corresponding user equipment from the user equipment layers to form a user equipment subset;
the edge receiving module is used for receiving and storing the global model broadcast by the cloud server, the edge server's own local model, and the local models sent by the user equipment in the selected user equipment subset;
the local aggregation module is used for locally aggregating the received local model and updating the previous local model;
the local precision judging module is used for judging whether the local model meets the local precision requirement or not;
the edge sending module is used for sending the local model to the cloud server layer and the user equipment layer.
With reference to the first aspect, further, the user equipment layer includes: a local receiving module, a local training module and a local transmitting module;
the local receiving module is used for receiving a local model broadcast by the edge server;
the local training module is used for training the local model according to the user equipment data and updating the trained local model into a local model;
the local sending module is used for sending the local model to the edge server layer for local aggregation.
In a second aspect, a cloud-edge-end collaborative learning method is provided, including:
broadcasting a global model to each edge server by the cloud server;
the edge server broadcasts the received global model as a local model to each user device;
the user equipment trains the received local model based on its own data to obtain a local model and uploads the local model to the edge server;
the edge server aggregates the received local models and takes the aggregated model as the local model; it then judges the local accuracy of the local model: if the local accuracy meets the requirement, the local model is uploaded to the cloud server for aggregation; if not, the local model is broadcast back to the user equipment for continued training until the requirement is met;
the cloud server aggregates the received local models to obtain a new global model and judges its global accuracy: if the requirement is met, model training ends; if not, the cloud server broadcasts the new global model to all edge servers, which broadcast it as the local model to all user equipment for retraining, until the global accuracy of the new global model meets the requirement.
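The steps above can be sketched end-to-end as follows. This is a minimal runnable sketch under simplifying assumptions, not the patent's implementation: the model is a single scalar fitted by gradient descent on squared error, a fixed number of global rounds replaces the accuracy tests for brevity, and all function names are illustrative.

```python
# Toy cloud-edge-end loop: model is a scalar w minimizing mean((w - x)^2) per device.

def grad(w, data):
    # gradient of mean((w - x)^2) with respect to w
    return sum(2.0 * (w - x) for x in data) / len(data)

def local_train(w, data, steps=10, lr=0.1):
    # user-equipment-side training: a few gradient-descent steps on local data
    for _ in range(steps):
        w -= lr * grad(w, data)
    return w

def aggregate(models, sizes):
    # data-size-weighted averaging, used at both the edge and the cloud level
    total = sum(sizes)
    return sum(w * d for w, d in zip(models, sizes)) / total

# Two edge servers, each serving two user devices with tiny local data sets.
edges = [[[1.0, 2.0], [3.0]], [[5.0, 6.0, 7.0], [9.0]]]
w_global = 0.0
for _ in range(20):                         # global rounds (fixed here for brevity)
    edge_models, edge_sizes = [], []
    for devices in edges:
        # edge broadcasts w_global; each device trains it on its own data
        local_models = [local_train(w_global, d) for d in devices]
        sizes = [len(d) for d in devices]
        edge_models.append(aggregate(local_models, sizes))   # edge aggregation
        edge_sizes.append(sum(sizes))
    w_global = aggregate(edge_models, edge_sizes)            # cloud aggregation

all_data = [x for devs in edges for d in devs for x in d]
```

With enough rounds the hierarchical weighted averages drive `w_global` toward the minimizer of the loss over the union of all device data, i.e. the overall data mean.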
With reference to the second aspect, further, before aggregating the received local models, the edge server selects a part of them for subsequent aggregation using a multi-armed bandit client scheduling scheme.
With reference to the second aspect, further, the local accuracy requirement is expressed by the following formula:

$$\left\|\nabla F_e\!\left(w_j^{(t)}\right)\right\|\le\eta\left\|\nabla F_e\!\left(w_j^{(0)}\right)\right\|$$

where $F_e$ denotes the loss function of the local model of the $e$-th edge server, $w_j^{(t)}$ denotes the model parameters of the $j$-th user equipment after the $t$-th round of training, and $\eta$ is the local accuracy standard.
With reference to the second aspect, further, the global accuracy requirement is expressed by the following formula:

$$\left\|\nabla F\!\left(w_e^{(k)}\right)\right\|\le\epsilon_0\left\|\nabla F\!\left(w_e^{(0)}\right)\right\|$$

where $F$ denotes the loss function of the global model, $w_e^{(k)}$ denotes the parameters of the local model of the $e$-th edge server after the $k$-th round of global training, and $\epsilon_0$ denotes the global accuracy standard.
The invention has the beneficial effects that:
according to the invention, a cloud side-end collaborative FL layered architecture is constructed, and the low-delay model training is realized by utilizing the high-performance communication and calculation advantages of the edge server and the cloud server compared with those of the user equipment.
The invention adopts a multi-armed-bandit client scheduling scheme, which reduces the number of training rounds and the duration of each round, minimizing the training latency of the wireless hierarchical FL system.
The invention sets the local accuracy and the global accuracy separately to determine the numbers of local and global training rounds, so that the iteration counts are dynamic, improving both the accuracy and the efficiency of model training.
Drawings
FIG. 1 is a schematic diagram of the cloud-edge-end collaborative learning system of the present invention;
FIG. 2 is a schematic diagram of the hierarchical architecture of the cloud-edge-end collaborative learning system of the present invention;
fig. 3 is a flowchart of the cloud-edge-end collaborative learning method of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order to better understand the present invention, the following describes related technologies in the technical solution of the present invention.
Example 1:
As shown in fig. 1 and fig. 2, in this embodiment the invention provides a cloud-edge-end collaborative learning system, which includes a user equipment layer, an edge server layer and a cloud server layer, the three layers all being connected through wireless network communication. We assume that the whole system has one cloud server, multiple edge servers and multiple user equipments; during learning and training, each edge server serves the user equipment within its corresponding coverage range. The user equipment layer, the edge server layer and the cloud server layer are used to control the user equipment, the edge servers and the cloud server, respectively.
The cloud server layer includes:
the system comprises a cloud receiving module, a global aggregation module, a global precision judging module and a cloud sending module;
the cloud receiving module is used for receiving and storing the local model sent by the edge server layer;
the global aggregation module is used for carrying out global aggregation on the received local models and updating the global models;
the global precision judging module is used for judging whether the aggregated global model meets the global precision requirement;
the cloud sending module is used for sending the global model to the edge server layer.
The edge server layer includes:
the device comprises a user equipment selection module, an edge receiving module, a local aggregation module, a local precision judging module and an edge sending module;
The user equipment selection module is used for selecting corresponding user equipment from the user equipment layer to form a user equipment subset. In the local aggregation process, nearby user equipments are similar to a great extent, so there is no need to aggregate the models of all user equipments; selecting only some representative models through the user equipment selection module greatly reduces the computational load of the system.
The edge receiving module is used for receiving and storing the global model broadcast by the cloud server, the edge server's own local model, and the local models sent by the user equipment in the selected user equipment subset;
the local aggregation module is used for locally aggregating the received local model and updating the previous local model;
the local precision judging module is used for judging whether the local model meets the local precision requirement;
the edge sending module is used for sending the local model to the cloud server layer and the user equipment layer.
The user equipment layer includes:
a local receiving module, a local training module and a local transmitting module;
the local receiving module is used for receiving the local model broadcast by the edge server;
the local training module is used for training the local model according to the user equipment data and updating the trained local model into a local model;
the local sending module is used for sending the local model to the edge server layer for local aggregation.
Example 2:
as shown in fig. 3, the invention also provides a cloud edge end collaborative learning method, which comprises the following steps:
First, the cloud server broadcasts the global model (the model and its loss function) through the cloud sending module to all $E$ edge servers, and each edge server broadcasts the global model as a local model to the user devices within its range. Each user device receives and stores the model through the local receiving module and, based on its own data set and computing capacity, trains the model through the local training module to obtain its local model. We define the local model (the loss function of the model on the data set of user equipment $j$) as follows:

$$F_j(w_j)=\frac{1}{D_j}\sum_{i\in\mathcal{D}_j} f(w_j;x_i,y_i)$$

where $D_j$ denotes the size of the local data set stored by the $j$-th user device and $\mathcal{D}_j=\{(x_i,y_i)\}_{i=1}^{D_j}$ denotes the given set of input/output data samples, in which $x_i$ is an input sample vector with $d$ features and $y_i$ is the labeled output value of sample $x_i$. In a typical learning problem, for sample data with input $x_i$ and label $y_i$, the task is to find the model parameters $w_j$ that characterize the output $y_i$, with loss function $f(w_j;x_i,y_i)$.
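A sketch of the local empirical loss $F_j(w_j)=\frac{1}{D_j}\sum_i f(w_j;x_i,y_i)$ follows, using squared error on a linear model as an illustrative per-sample loss; the patent does not fix a particular $f$, so this choice is an assumption.

```python
# Local empirical loss of user device j (squared error chosen for illustration).
def per_sample_loss(w, x, y):
    pred = sum(wi * xi for wi, xi in zip(w, x))   # linear model: prediction w.x
    return (pred - y) ** 2

def local_loss(w, dataset):
    # dataset: list of (x, y) pairs held privately on the device
    return sum(per_sample_loss(w, x, y) for x, y in dataset) / len(dataset)

# Toy data set D_j and parameters that fit it exactly (loss is then zero).
dataset_j = [([1.0, 0.0], 1.0), ([0.0, 1.0], 2.0)]
w = [1.0, 2.0]
```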
The user equipment then uploads the model parameters $w_j$ and loss function $F_j$ to the edge server through the local sending module. The edge server selects a subset of user equipments through the user equipment selection module according to a client scheduling scheme; this example uses a multi-armed bandit (MAB) client scheduling scheme, which treats the edge server as a player and defines the instantaneous reward of pulling an arm as the reduction in training loss. The average reward of the training-loss reduction of a user equipment is then defined as:

$$\bar{r}_j(t)=\frac{1}{T_j(t)}\sum_{\tau<t:\,j\in S(\tau)} r_j(\tau)$$

where $T_j(t)$ denotes the number of times user equipment $j$ was selected before the current round $t$ and $\bar{r}_j(t)$ denotes the average reward (training-loss reduction) of user equipment $j$.
The implementation steps of the MAB-based client scheduling scheme are as follows:
Step 2: adopt an $\epsilon$-greedy approach in each round, either exploiting the known reward information or exploring unknown information.
Step 3: the edge server exploits with probability $1-\epsilon$, selecting the subset of $N$ user equipments with the highest average rewards, and explores with probability $\epsilon$, where $\epsilon$ is the greedy constant.
The edge server receives and stores, through the edge receiving module, the model parameters $w_j$ and loss functions $F_j$ uploaded by the $N$ user equipments in the subset $S_e$, and performs local aggregation on them through the local aggregation module. The local model (the edge server's local loss function) is defined as:

$$F_e(w_e)=\sum_{j\in S_e}\frac{D_j}{D_e}F_j(w_j)$$

where $F_e$ denotes the loss function of the local model of the $e$-th edge server and $D_e=\sum_{j\in S_e} D_j$ denotes the size of the edge server's data set.
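The $D_j/D_e$-weighted edge aggregation can be sketched as follows; the models are plain parameter vectors and the function name is illustrative.

```python
# Edge-level local aggregation: weight each device model by its data share D_j / D_e.
def edge_aggregate(device_params, device_sizes):
    d_e = sum(device_sizes)                    # D_e = sum of D_j over the subset
    dim = len(device_params[0])
    w_e = [0.0] * dim
    for w_j, d_j in zip(device_params, device_sizes):
        for k in range(dim):
            w_e[k] += (d_j / d_e) * w_j[k]     # (D_j / D_e)-weighted contribution
    return w_e

# Two device models with data-set sizes 1 and 3: the larger set dominates.
w_edge = edge_aggregate([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```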
After the local model is obtained, its local accuracy must be judged: the aggregated local model is checked against a given local accuracy standard $\eta$. If the local accuracy standard $\eta$ is not satisfied, the edge server broadcasts the updated local model to the user equipment for further local iterative training until the standard is met; if it is satisfied, the edge server uploads the updated local model to the cloud server for global aggregation. The accuracy requirement is met when the following formula is satisfied:

$$\left\|\nabla F_e\!\left(w_j^{(t)}\right)\right\|\le\eta\left\|\nabla F_e\!\left(w_j^{(0)}\right)\right\|$$

where $w_j^{(t)}$ denotes the parameters of the local model of the $j$-th user device after the $t$-th round of training and $\nabla F_e$ denotes the gradient of the loss function.
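The local accuracy test is a gradient-norm comparison, $\|\nabla F_e(w^{(t)})\|\le\eta\,\|\nabla F_e(w^{(0)})\|$ (the form suggested by the surrounding description; the exact formula is rendered as an image in the original). A sketch with illustrative names:

```python
def meets_local_accuracy(grad_now, grad_init, eta):
    """True when the current gradient norm has shrunk to eta times the initial one."""
    norm = lambda g: sum(x * x for x in g) ** 0.5
    return norm(grad_now) <= eta * norm(grad_init)

# Gradient norm shrank from 1.0 to 0.1: passes at eta = 0.2; 0.5 does not.
ok = meets_local_accuracy([0.1, 0.0], [1.0, 0.0], eta=0.2)
not_ok = meets_local_accuracy([0.5, 0.0], [1.0, 0.0], eta=0.2)
```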
Finally, once every edge server has trained its model to meet the local accuracy, each of the $E$ edge servers uploads its model parameters $w_e$ and loss function $F_e$ to the cloud server through the edge sending module. The cloud server receives and stores the uploaded local models through the cloud receiving module and performs global aggregation through the global aggregation module to obtain the global model. The training objective is to minimize the loss function of the global model:

$$\min_{w}\; F(w)=\sum_{e=1}^{E}\frac{D_e}{D}F_e(w_e)$$

where $F(w)$ denotes the loss function of the global model and $D=\sum_{e=1}^{E} D_e$ denotes the size of the total data set.
Based on the global precision judging module and a given global accuracy standard $\epsilon_0$, the cloud server judges the accuracy of the aggregated global model. If the global accuracy is not met, the cloud server broadcasts the updated global model to the edge servers and global iteration continues until the standard is met; if it is met, model training ends. The given global accuracy standard $\epsilon_0$ is satisfied when:

$$\left\|\nabla F\!\left(w_e^{(k)}\right)\right\|\le\epsilon_0\left\|\nabla F\!\left(w_e^{(0)}\right)\right\|$$

where $w_e^{(k)}$ denotes the parameters of the $e$-th edge server's local model after the $k$-th round of global training.
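The cloud-side counterpart, aggregation weighted by $D_e/D$ followed by the $\epsilon_0$ gradient-norm stopping test, can be sketched as follows (illustrative names; the stopping criterion's exact form is inferred from the description, since the formula is an image in the original):

```python
# Cloud-level global aggregation (D_e / D weights) and the global stopping test.
def cloud_aggregate(edge_params, edge_sizes):
    d = sum(edge_sizes)                            # D = sum of D_e
    dim = len(edge_params[0])
    return [sum((d_e / d) * w[k] for w, d_e in zip(edge_params, edge_sizes))
            for k in range(dim)]

def training_finished(grad_now, grad_init, eps0):
    # ||grad_now|| <= eps0 * ||grad_init||  ->  global accuracy reached, stop
    norm = lambda g: sum(x * x for x in g) ** 0.5
    return norm(grad_now) <= eps0 * norm(grad_init)

# Two edge models with data-set sizes 1 and 3.
w_global = cloud_aggregate([[2.0], [6.0]], [1, 3])
```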
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as being within the scope of the invention.
Claims (8)
1. A cloud-edge-end collaborative learning system, characterized by comprising: a user equipment layer, an edge server layer and a cloud server layer;
the cloud server layer is used for controlling the cloud server to aggregate the local model from the edge server into a global model, judging the global precision of the global model, and determining whether to broadcast the global model to the edge server according to a judging result;
the edge server layer is used for controlling the edge server to receive the global model from the cloud server and broadcasting the global model as a local model to the user equipment;
the user equipment layer is used for controlling the user equipment to train the received local model to obtain a local model, and uploading the local model to the edge server;
the edge server layer is also used for controlling the edge server to aggregate the received local models, take the aggregated model as the local model, and judge the local accuracy of the local model: if the local accuracy meets the requirement, the local model is uploaded to the cloud server for aggregation; otherwise it is broadcast back to the user equipment for continued training until the requirement is met.
2. The cloud edge collaborative learning system according to claim 1, wherein the cloud server layer comprises a cloud receiving module, a global aggregation module, a global precision judging module and a cloud sending module;
the cloud receiving module is used for receiving and storing the local model sent by the edge server layer;
the global aggregation module is used for carrying out global aggregation on the received local models and updating the global models;
the global precision judging module is used for judging whether the aggregated global model meets the global precision requirement;
the cloud sending module is used for sending the global model to the edge server layer.
3. The cloud-edge collaborative learning system according to claim 1, wherein the edge server layer comprises: the device comprises a user equipment selection module, an edge receiving module, a local aggregation module, a local precision judging module and an edge sending module;
the user equipment selecting module is used for selecting corresponding user equipment from the user equipment layers to form a user equipment subset;
the edge receiving module is used for receiving and storing the global model broadcast by the cloud server, the edge server's own local model, and the local models sent by the user equipment in the selected user equipment subset;
the local aggregation module is used for locally aggregating the received local model and updating the previous local model;
the local precision judging module is used for judging whether the local model meets the local precision requirement or not;
the edge sending module is used for sending the local model to the cloud server layer and the user equipment layer.
4. The cloud-edge collaborative learning system according to claim 1, wherein the user equipment layer comprises: a local receiving module, a local training module and a local transmitting module;
the local receiving module is used for receiving a local model broadcast by the edge server;
the local training module is used for training the local model according to the user equipment data and updating the trained local model into a local model;
the local sending module is used for sending the local model to the edge server layer for local aggregation.
5. A cloud-edge-end collaborative learning method, characterized by comprising the following steps:
broadcasting a global model to each edge server by the cloud server;
the edge server broadcasts the received global model as a local model to each user device;
the user equipment trains the received local model based on the self data to obtain a local model and uploads the local model to the edge server;
the edge server aggregates the received local models and takes the aggregated model as the local model; it then judges the local accuracy of the local model: if the local accuracy meets the requirement, the local model is uploaded to the cloud server for aggregation; if not, the local model is broadcast back to the user equipment for continued training until the requirement is met;
the cloud server aggregates the received local models to obtain a new global model and judges its global accuracy: if the requirement is met, model training ends; if not, the cloud server broadcasts the new global model to all edge servers, which broadcast it as the local model to all user equipment for retraining, until the global accuracy of the new global model meets the requirement.
6. The cloud-edge-end collaborative learning method according to claim 5, characterized in that, before aggregating the received local models, the edge server selects a part of them for subsequent aggregation using a multi-armed bandit client scheduling scheme.
7. The cloud-edge-end collaborative learning method according to claim 5, characterized in that the local accuracy requirement is expressed by the following formula:

$$\left\|\nabla F_e\!\left(w_j^{(t)}\right)\right\|\le\eta\left\|\nabla F_e\!\left(w_j^{(0)}\right)\right\|$$
8. The cloud-edge-end collaborative learning method according to claim 5, characterized in that the global accuracy requirement is expressed by the following formula:

$$\left\|\nabla F\!\left(w_e^{(k)}\right)\right\|\le\epsilon_0\left\|\nabla F\!\left(w_e^{(0)}\right)\right\|$$
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310620160.5A CN116384513A (en) | 2023-05-30 | 2023-05-30 | Cloud-edge-end collaborative learning system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116384513A (en) | 2023-07-04 |
Family
ID=86971351
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310620160.5A Pending CN116384513A (en) | 2023-05-30 | 2023-05-30 | Cloud-edge-end collaborative learning system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116384513A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117010485A (en) * | 2023-10-08 | 2023-11-07 | Zhejiang Lab | Distributed model training system and gradient reduction method in an edge scenario |
CN117010485B (en) * | 2023-10-08 | 2024-01-26 | Zhejiang Lab | Distributed model training system and gradient reduction method in an edge scenario |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113419857A (en) * | 2021-06-24 | 2021-09-21 | Guangdong University of Technology | Federated learning method and system based on edge digital twin association |
US20220351860A1 * | 2020-02-11 | 2022-11-03 | Ventana Medical Systems, Inc. | Federated learning system for training machine learning algorithms and maintaining patient privacy |
CN115408151A (en) * | 2022-08-23 | 2022-11-29 | Harbin Institute of Technology | Federated learning training acceleration method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113242568B (en) | Task offloading and resource allocation method in an uncertain network environment | |
Chen et al. | Intelligent resource allocation management for vehicles network: An A3C learning approach | |
US11948075B2 (en) | Generating discrete latent representations of input data items | |
US11954879B2 (en) | Methods, systems and apparatus to optimize pipeline execution | |
TW202131661A (en) | Device and method for network optimization and non-transitory computer-readable medium | |
US20240135191A1 (en) | Method, apparatus, and system for generating neural network model, device, medium, and program product | |
CN114125785A (en) | Low-delay high-reliability transmission method, device, equipment and medium for digital twin network | |
WO2012106885A1 (en) | Latent dirichlet allocation-based parameter inference method, calculation device and system | |
US9712612B2 (en) | Method for improving mobile network performance via ad-hoc peer-to-peer request partitioning | |
Zhang et al. | Federated learning with adaptive communication compression under dynamic bandwidth and unreliable networks | |
US20200118007A1 (en) | Prediction model training management system, method of the same, master apparatus and slave apparatus for the same | |
CN103974097A (en) | Personalized user-generated video prefetching method and system based on popularity and social networks | |
US10592578B1 (en) | Predictive content push-enabled content delivery network | |
CN113469325A (en) | Layered federated learning method, computer equipment and storage medium for edge aggregation interval adaptive control | |
CN116384513A (en) | Cloud-edge-end collaborative learning system and method | |
WO2023116138A1 (en) | Modeling method for multi-task model, promotional content processing method, and related apparatuses | |
CN108376099B (en) | Mobile terminal calculation migration method for optimizing time delay and energy efficiency | |
WO2022228390A1 (en) | Media content processing method, apparatus and device, and storage medium | |
Saputra et al. | Federated learning framework with straggling mitigation and privacy-awareness for AI-based mobile application services | |
Xu et al. | Joint foundation model caching and inference of generative AI services for edge intelligence | |
CN112218114B (en) | Video cache control method, device and computer readable storage medium | |
Wu et al. | Deep reinforcement learning based vehicle selection for asynchronous federated learning enabled vehicular edge computing | |
CN115210717A (en) | Hardware optimized neural architecture search | |
CN112655005B (en) | Dynamic small batch size | |
Atan et al. | Ai-empowered fast task execution decision for delay-sensitive iot applications in edge computing networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20230704 |