CN112817653A - Cloud-edge-end based federated learning computation offloading system and method - Google Patents


Info

Publication number
CN112817653A
Authority
CN
China
Prior art keywords: local, edge, cloud, data, parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110089708.9A
Other languages
Chinese (zh)
Inventor
伍卫国
张祥俊
柴玉香
杨诗园
王雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University
Priority to CN202110089708.9A
Publication of CN112817653A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/445 Program loading or initiating
    • G06F9/44594 Unloading
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/485 Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources to service a request
    • G06F9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/54 Interprogram communication
    • G06F9/542 Event management; Broadcasting; Multicasting; Notifications
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/509 Offload


Abstract

The invention discloses a cloud-edge-end based resource allocation system and method for federated learning computation offloading, which can make accurate decisions for computation task offloading and resource allocation, eliminate the need to solve a combinatorial optimization problem, and greatly reduce computational complexity. The 3-layer cloud-edge-end federated learning comprehensively exploits the proximity of edge nodes to the terminals while also utilizing the powerful computing resources of the cloud computing center, solving the problem of insufficient computing resources at edge nodes: a local model for predicting offloading tasks is trained at each of a plurality of clients, a global model is formed by periodically aggregating parameters at the edge, and the cloud performs one parameter aggregation after each cycle of edge aggregations, thus forming a global Bi-LSTM model until convergence.

Description

Cloud-edge-end based federated learning computation offloading system and method
Technical Field
The invention relates to computation offloading and resource allocation in mobile edge computing networks driven by 5G, and in particular to a cloud-edge-end federated learning based computation offloading system and method.
Background
In recent years, driven by the popularization of the Internet of Things, data generated at the network edge has grown explosively. The inability to guarantee low latency and location awareness undermines traditional cloud computing solutions. According to IDC predictions, over 50 billion terminals and devices were networked by the end of 2020, with over 50% of the data needing to be analyzed, processed and stored at the network edge. The traditional two-body "terminal-cloud" cooperative computing mode cannot meet the requirements of low latency and high bandwidth. Mobile Edge Computing (MEC) has therefore become a new and compelling computing paradigm that moves cloud computing power closer to end users, supporting a variety of computation-intensive but delay-sensitive applications such as face recognition, natural language processing, and interactive games. One of the key functions of MEC is task offloading (also known as computation offloading), which offloads compute-intensive tasks of a mobile application from the User Equipment (UE) to an MEC host at the network edge, thereby breaking through the resource limitations of the mobile device and expanding its computing power, battery capacity, storage capacity, and so on. Although the edge server can provide cloud functionality for the terminals, it may not be able to serve all terminals due to its inherently limited wireless and computing capabilities. On the one hand, the uncertain size of offloaded task data and time-varying channel conditions make accurate computation offloading decisions difficult. On the other hand, the user's sensitive personal information in the offloading process over a distributed heterogeneous edge infrastructure risks being intercepted and leaked.
Disclosure of Invention
The invention aims to provide a cloud-edge-end based federated learning computation offloading system and method so as to overcome the defects of the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme:
A cloud-edge-end based resource allocation method for federated learning computation offloading comprises the following steps:
S1, constructing a global model, and broadcasting the initialized global model and the tasks to the local devices selected for the tasks;
S2, based on the initialized global model, each local device updates its local model parameters using its local data and local device parameters; when the set number of iterations is reached, edge parameter aggregation is performed on the local model parameters corresponding to the important gradients whose gradient data, as computed by each local device, exceed the threshold; the global model parameters are updated according to the edge aggregation result, and the updated global model is fed back to each local device;
S3, when the edge parameter aggregation reaches the set number of aggregations, one cloud parameter aggregation is performed;
S4, repeating steps S2-S3 until the global loss function converges or the set training precision is reached, completing the global model training;
S5, predicting the information amount of each computation offloading task using the trained global model to obtain the computation offloading data volume, and performing minimum-cost resource allocation according to that data volume.
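The cloud-edge-end training loop above can be sketched end to end. The following is a minimal illustration only: the NumPy toy least-squares loss, the function names, and the concrete k1/k2 schedule values are assumptions of this sketch, not the patent's Bi-LSTM training.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=1):
    """A few local SGD steps on a toy least-squares loss (stand-in for Bi-LSTM training)."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of (1/2n) * ||Xw - y||^2
        w = w - lr * grad
    return w

def weighted_average(params, sizes):
    """FedAvg-style aggregation: sum_i (|D_i|/|D|) * w_i."""
    total = sum(sizes)
    return sum(p * (n / total) for p, n in zip(params, sizes))

def cloud_edge_train(edge_groups, w0, k1=5, k2=3, rounds=2):
    """edge_groups: one list of (X, y) client datasets per edge server.
    Each cloud round = k2 edge aggregations; each edge aggregation = k1 local steps."""
    w = w0.copy()
    for _ in range(rounds):
        edge_params, edge_sizes = [], []
        for clients in edge_groups:
            w_edge = w.copy()
            for _ in range(k2):                       # k2 rounds of edge aggregation
                locals_, sizes = [], []
                for X, y in clients:
                    locals_.append(local_update(w_edge.copy(), X, y, steps=k1))
                    sizes.append(len(y))
                w_edge = weighted_average(locals_, sizes)
            edge_params.append(w_edge)
            edge_sizes.append(sum(len(y) for _, y in clients))
        w = weighted_average(edge_params, edge_sizes)  # one cloud aggregation
    return w
```

In this sketch the raw client data never leaves `local_update`; only model parameters flow upward, matching the privacy motivation of the scheme.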
Further, each local device updates its local model parameters using its own local data and local device parameters:

w_i^t = arg min_w F_i(w)

where t is the current iteration index and i indexes the i-th local device; the goal of local device i at iteration t is to find the optimal parameter w_i^t minimizing the local loss function F_i(w).
Further, after the set iteration number is reached, uploading important gradients of which the gradient data calculated by all local devices exceed a threshold value to the edge server.
Further, the specific process of cloud parameter aggregation is:

w = Σ_{ζ=1}^{L} (|D_ζ| / |D|) · w_ζ

where {D_ζ} represents the aggregated data set under edge server ζ; the data set of offloading tasks D = D_1 ∪ D_2 ∪ ... ∪ D_N is distributed over N clients, and |D_i| and |D| denote the number of local training samples at client i and the total number of training samples, respectively.
Further, parameter sparsification is performed during the iterations using a standard distributed stochastic gradient descent (DSGD) method; the local device updates the local parameters over its local data set D_i as

w_{t+1} = w_t − η · Σ_{k=1}^{N} sparse(g_{k,t})

where t denotes the current iteration; w_t denotes the value of the parameter w at iteration t; f(x, w_t) denotes the loss computed from input data x and the current parameters w_t; g_{k,t} denotes the gradient of w_t at node k at iteration t; sparse(g_{k,t}) denotes the sparsified g_{k,t}; η is the learning rate; and g_{k,t} is the estimate of the gradient ∇F_i(w) obtained from a mini-batch B_{k,t} of data samples from client i, namely:

g_{k,t} = (1/|B_{k,t}|) · Σ_{x∈B_{k,t}} ∇f(x, w_t)
Further, at the beginning of each iteration t, on working node k, the loss f(x, w_t) is computed from the current parameters w_t and the data sampled from the local data block B_{k,t} (the local data block of working node k, of size b); the gradient ∇f(x, w_t) can then be determined, and we let g_{k,t} = (1/b) · Σ ∇f(x, w_t). The gradient elements of each parameter are sorted by absolute value, and the important gradients whose computed gradient data exceed the threshold are uploaded to the edge server.
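The threshold selection with local accumulation of the unsent remainder can be sketched as follows; the function name, the top-fraction threshold computation, and the flat-array interface are assumptions of this illustration.

```python
import numpy as np

def sparsify_topk(grad, residual, s=0.01):
    """Keep only the top s-fraction of gradient elements by absolute value;
    accumulate the rest into a local residual to be sent in later rounds."""
    acc = grad + residual                       # fold in previously unsent gradient mass
    k = max(1, int(np.ceil(s * acc.size)))
    thresh = np.sort(np.abs(acc).ravel())[-k]   # magnitude of the k-th largest element
    mask = np.abs(acc) >= thresh
    sparse = np.where(mask, acc, 0.0)           # important gradients: uploaded to the edge server
    new_residual = acc - sparse                 # small gradients: kept locally
    return sparse, new_residual
```

Note that `sparse + new_residual` always equals the accumulated gradient, so no gradient information is discarded, only delayed.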
Further, the resource allocation is realized by a one-dimensional bisection search over the dual variable associated with the resource allocation constraint.
A computing offload resource allocation system comprises a local device, an edge server and a cloud server;
the edge server is used for broadcasting the initialized global model and the tasks to the local equipment selected by the tasks; the local equipment updates local model parameters according to local data and local equipment parameters of the local equipment based on the initialized global model, and feeds the updated local model back to the edge server;
the edge server performs parameter aggregation on the local models fed back by the different local devices, updates the global model parameters according to the edge aggregation result, and feeds the updated global model back to each local device; when the edge parameter aggregation reaches the set number of aggregations, one cloud parameter aggregation is performed on the edge-aggregated global model; the local devices input their tasks to the edge server, the edge server predicts the information amount of each computation offloading task using the final global model to obtain the computation offloading data volume, and minimum-cost resource allocation is performed according to that data volume.
Further, based on the initialized global model w^0, the local device updates the local model parameters w_i^t using its own local data and local device parameters, where t is the current iteration index and i is the i-th local device; the goal of local device i at iteration t is to find the optimal parameter minimizing the local loss function F_i(w), namely:

w_i^t = arg min_w F_i(w)
when proceeding with k1After the iteration learning is performed for a round (namely, after the set iteration times are reached), the important gradient of which the calculated gradient data exceeds the threshold value is uploaded to the edge server.
Furthermore, one cloud server is connected to a plurality of edge servers, and each edge server is connected to a plurality of local devices; each edge server aggregates the local model parameters of the local devices connected to it, and the cloud server's cloud parameter aggregation operates on the global model parameters produced by the edge parameter aggregation of the edge servers connected to it.
Compared with the prior art, the invention has the following beneficial technical effects:
the resource allocation method based on the cloud edge federal learning calculation unloading can meet the practical requirement of training and learning on multi-party data on the premise of not sharing private data, so that accurate decision is made on calculation task unloading and resource allocation, the requirement of solving a combined optimization problem is eliminated, and the calculation complexity is greatly reduced; the method is characterized in that the proximity advantage of an edge node from a terminal is comprehensively utilized based on 3-layer federal learning of a cloud edge, powerful computing resources of a cloud computing center are also utilized, the problem of insufficient computing resources of the edge node is solved, a local model is trained at each of a plurality of clients for predicting unloading tasks, a global model is formed by periodically performing parameter aggregation at the edge end, the cloud end performs parameter aggregation once after the edge performs periodic aggregation, and thus a global BilTM model is formed until convergence.
Furthermore, to optimize the communication traffic of federated learning, each client learns on the data set it owns locally; after multiple rounds of training, only the top s% of the sparsified gradient is compressed and uploaded to the edge parameter server, and after its rounds of aggregation the edge parameter server uploads the parameters to the cloud server for aggregation, until convergence.
Furthermore, the uploaded gradients are sparsified: each time, only the important gradients are compressed and uploaded to the central server to optimize and combine into the global model, which greatly reduces the communication overhead between the federated learning clients and the server, makes model aggregation more efficient, and accelerates model convergence.
The computation offloading resource allocation system of the invention adopts a cloud-edge-end 3-layer federated learning framework; it utilizes the natural proximity and real-time computing advantages of the edge servers to the terminal nodes while overcoming the limited computing resources of the edge server. Using a Bi-LSTM-based federated learning mechanism, each terminal device participating in computation offloading locally trains a Bi-LSTM model to predict tasks, and parameter aggregation is then executed periodically at the cloud and at the edge respectively, thereby eliminating the need to solve a combinatorial optimization problem and greatly reducing computational complexity.
Drawings
Fig. 1 is a diagram of a Bi-LSTM based cloud-edge-end federal learning framework in an embodiment of the present invention.
FIG. 2 is a diagram of Bi-LSTM prediction task prediction in an embodiment of the present invention.
Fig. 3 is a cloud-edge-end federal learning sequence diagram in an embodiment of the present invention.
FIG. 4 is a diagram of a two-stage solution optimization in an embodiment of the invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings:
A cloud-edge-end based resource allocation method for federated learning computation offloading comprises the following steps:
S1, constructing a global model, initializing it as w^0, and broadcasting the initialized global model and the tasks to the local devices selected for the tasks;
the global model is used to compute the target application and corresponding data requirements.
S2, initializing based global model
Figure BDA0002911954130000062
Each local device uses its local data and local device parameters to update the global model received by the local device, i.e. update the local model parameters
Figure BDA0002911954130000063
Where t is the current iteration index, i is the ith local device, and the goal of the local device i in the current iteration index t is to find the loss-causing function
Figure BDA0002911954130000064
The minimum optimal parameters, namely:
Figure BDA0002911954130000065
After k_1 rounds of iterative learning, the important gradients whose computed gradient data exceed the threshold are uploaded to the edge server; during learning, the local device transmits only the important gradients whose absolute values exceed the gradient threshold and accumulates the remaining gradients (those whose absolute values are below the threshold) locally.
S3, when the local device executes k1After iterative learning, the edge server collects the uploaded local model parameters from each local device i
Figure BDA0002911954130000066
And the local model parameters of each local device are combined
Figure BDA0002911954130000067
Summarizing to form parameter sets, and then summarizing the parameter sets Wn=[w1,w2,...wM]Performing edge parameter aggregation to update global model parameters
Figure BDA0002911954130000071
And feeding back the uploaded local model parameters to the local devices (i.e. each local device);
S4, after the edge server executes k_2 rounds of edge parameter aggregation, one cloud parameter aggregation is performed by the cloud server; i.e., for every k_1 rounds of local iterative learning at the clients and k_2 rounds of edge parameter aggregation, the cloud server executes one parameter aggregation. The per-round cloud parameter update w is computed as shown in formula (1):

w = Σ_{ζ=1}^{L} (|D_ζ| / |D|) · w_ζ      (1)

where {D_ζ} represents the aggregated data set under each edge server ζ; the data set of offloading tasks D = D_1 ∪ D_2 ∪ ... ∪ D_N is distributed over N clients, and |D_i| and |D| denote the number of local training samples at client i and the total number of training samples, respectively. These distributed data sets are not directly accessible to the parameter server. F(w) is the global loss, computed as a weighted average of the local loss functions F_i(w) over the local data sets D_i:

F(w) = Σ_{i=1}^{N} (|D_i| / |D|) · F_i(w)

F_i(w) = (1/|D_i|) · Σ_{x∈D_i} f(x, w)
S5, repeating the steps S2 to S4 until the global loss function converges or reaches the set training precision, and finishing the global model
Figure BDA0002911954130000078
Training;
S6, using the trained global model w to predict the information amount of each computation offloading task to obtain the computation offloading data volume, and making more accurate computation offloading and resource allocation decisions according to that data volume.
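The weighted-average global loss F(w) defined above can be computed as in this small sketch; the squared-error form of f(x, w) and the function names are illustrative assumptions.

```python
import numpy as np

def local_loss(w, D):
    """F_i(w) = (1/|D_i|) * sum over D_i of f(x, w), with squared error as the toy f."""
    X, y = D
    return float(np.mean((X @ w - y) ** 2) / 2)

def global_loss(w, datasets):
    """F(w) = sum_i (|D_i|/|D|) * F_i(w): weighted average over client data sets."""
    total = sum(len(y) for _, y in datasets)
    return sum(len(y) / total * local_loss(w, (X, y)) for X, y in datasets)
```

Because the weights are |D_i|/|D|, the federated global loss coincides with the loss that would be computed on the pooled data, even though no client data is pooled in practice.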
Parameter sparsification is performed in the iterative process using a standard distributed stochastic gradient descent (DSGD) method; the local device updates the local parameters over its local data set D_i (which includes the local data and the local device parameters) as:

w_{t+1} = w_t − η · Σ_{k=1}^{N} sparse(g_{k,t})

where t denotes the current iteration; w_t denotes the value of the parameter w at iteration t; f(x, w_t) denotes the loss computed from input data x and the current parameters w_t; g_{k,t} denotes the gradient of w_t at node k at iteration t; sparse(g_{k,t}) denotes the sparsified g_{k,t}; η is the learning rate; and g_{k,t} is the estimate of the gradient ∇F_i(w) obtained from a mini-batch B_{k,t} of data samples from client i, namely:

g_{k,t} = (1/|B_{k,t}|) · Σ_{x∈B_{k,t}} ∇f(x, w_t)
As shown in FIG. 1, at the beginning of each iteration t, the mobile devices are regarded as N distributed working nodes, and working node k (1 ≤ k ≤ N) has its local data block B_{k,t} of size b. On working node k, the loss f(x, w_t) is computed from the current parameters w_t and the data sampled from the local data block B_{k,t}; the gradient ∇f(x, w_t) can then be determined. Let g_{k,t} = (1/b) · Σ ∇f(x, w_t). The complete gradient g_{k,t} is not transmitted: the compression rate s% is determined first, then the gradient elements of each parameter are sorted by absolute value, and only the elements ranked in the top s% of all elements of the gradient are exchanged among nodes, s being the gradient threshold; i.e., the important gradients whose computed gradient data exceed the threshold are uploaded to the edge server. Here, sparse(g_{k,t}) denotes the sparsified gradient; the residual gradient g_{k,t} − sparse(g_{k,t}) is accumulated locally and waits until it grows large enough to be exchanged.
The global model is based on federal learning, and the data input size of a given task is based on the task predicted by the global model
Figure BDA0002911954130000087
The original optimization problem (P1) can then be reduced to the resource allocation problem of the convex problem (P2), and the optimal time allocation { a, P } of the convex problem (P2) can be effectively solved, for example, by performing a one-dimensional dual-section search with dual variables associated with resource allocation constraints at o (n) complexity.
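A one-dimensional bisection over the dual variable of a coupling resource constraint can be sketched generically. Here the concrete objective (a weighted-log utility under a total-capacity constraint, so the KKT conditions give a_i = w_i/λ) is an illustrative stand-in for (P2), not the patent's actual problem.

```python
def allocate_by_bisection(weights, capacity, hi=1e6, tol=1e-10):
    """Solve max sum_i w_i * log(a_i) s.t. sum_i a_i <= capacity via the dual:
    the KKT conditions give a_i(lam) = w_i / lam; bisect on the dual price lam
    until the capacity constraint binds."""
    def used(lam):
        return sum(w / lam for w in weights)   # total resource demanded at price lam

    lo = tol                                   # lam must stay strictly positive
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if used(mid) > capacity:
            lo = mid                           # demand too large -> raise the price
        else:
            hi = mid                           # demand fits -> lower the price
    lam = (lo + hi) / 2
    return [w / lam for w in weights]
```

Since `used(lam)` is monotonically decreasing in the dual price, the bisection needs only O(log(1/tol)) iterations, each costing O(N), which matches the low-complexity claim made above.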
As shown in fig. 1, a cloud-edge-based federated learning computation offload computing system includes a local device, an edge server and a cloud server,
the edge server broadcasts the initialized global model w^0 and the tasks to the local devices selected for the tasks; based on the initialized global model w^0, each local device updates its local model parameters according to its own local data and local device parameters, and feeds the updated local model back to the edge server;
the edge server performs parameter aggregation on the local models fed back by the different local devices, updates the global model parameters according to the edge aggregation result, and feeds the updated global model back to each local device; when the edge parameter aggregation reaches the set number of aggregations, one cloud parameter aggregation is performed on the edge-aggregated global model; the local devices input their tasks to the edge server, the edge server predicts the information amount of each computation offloading task using the final global model to obtain the computation offloading data volume, and minimum-cost resource allocation is performed according to that data volume.
Based on the initialized global model w^0, the local device updates the local model parameters using its own local data and local device parameters, where t is the current iteration index and i is the i-th local device; the goal of local device i at iteration t is to find the optimal parameter minimizing the local loss function F_i(w), namely:

w_i^t = arg min_w F_i(w)

After k_1 rounds of iterative learning (i.e., after the set number of iterations is reached), the important gradients whose computed gradient data exceed the threshold are uploaded to the edge server; during learning, the local device transmits only the important gradients whose absolute values exceed the gradient threshold and accumulates the remaining gradients (those whose absolute values are below the threshold) locally.
After the local devices execute k_1 rounds of iterative learning, the edge server collects the uploaded local model parameters w_i^t from each local device i, gathers the local model parameters of the local devices into the parameter set W_n = [w_1, w_2, ..., w_M], performs edge parameter aggregation to update the global model parameters w^t, and feeds the aggregated parameters back to the local devices (i.e., to each local device);
when the edge server executes k2After the wheel edge parameters are aggregated, cloud parameter aggregation is performed once through a cloud server; i.e. local client per k1Iterative learning round, edge server execution k2After the wheel edge parameters are aggregated, the cloud server pairPerforming primary parameter aggregation on the data subjected to the edge server parameter aggregation; parameter updating calculation of each round of cloud server
Figure BDA0002911954130000102
The process of (2) is shown in formula (1):
Figure BDA0002911954130000103
until the global loss function is converged or reaches the set training precision, the global model is completed
Figure BDA0002911954130000104
And (5) training.
One cloud server is connected to L edge servers; each edge server is denoted by ζ and has an associated set of clients. {D_ζ} denotes the aggregated data set under each edge server ζ, and each edge server aggregates the local model parameters from its clients.
The trained global model w is used to predict the information amount of each computation offloading task to obtain the computation offloading data volume, and more accurate computation offloading and resource allocation decisions are made according to that data volume.
User input (i.e., input via the local device): the input of the federated learning algorithm is a sample set X_m of local data and local device parameters, which includes the data sizes of the tasks that each user has historically requested in different time periods; let X_m = {x_m^1, ..., x_m^{K_m}}, where K_m is the number of data samples of user m, and each data sample x_m^k includes, among other information, the location of the user at the current time.
User output (i.e., output via the local device): the local model after the device's update iterations, i.e., the trained local Bi-LSTM model, outputs a vector w_m of Bi-LSTM model parameters carrying the information for determining the task data volume of user m's local device;
Edge server input: the Bi-LSTM-based federated learning algorithm takes the matrix W_n = [w_1, ..., w_m] as input, where w_m is the model parameter received from user m;
Edge server output: the received gradient data of each client are gathered and parameter aggregation is executed to form the global model; the aggregated update parameters are sent to each client, and after receiving them each client overwrites its local model parameters and proceeds to the next iteration;
i.e., each client MU i executes k_1 local model updates, and each edge server aggregates the models of the clients connected to it; after every k_2 edge model aggregations, the cloud server aggregates the models of all edge servers, which means the cloud is communicated with once every k_1·k_2 local updates. As shown in the timing diagram, the proposed cloud-edge-end federated learning algorithm mainly comprises the following steps:
FIG. 2 illustrates Bi-LSTM task prediction, whose purpose is to predict the computation offloading task:
the computation tasks of the N mobile users need to be offloaded to the edge server associated with their cellular network for processing, and a Bi-LSTM-based deep learning algorithm is adopted to predict the computation task. As shown in FIG. 2, given the input x_t at time step t, the hidden-layer output h_t of the Bi-LSTM unit can be calculated as follows:
g_t = tanh(W_xg · x_t + W_hg · h_{t−1} + b_g)
i_t = sigmoid(W_xi · x_t + W_hi · h_{t−1} + b_i)
f_t = sigmoid(W_xf · x_t + W_hf · h_{t−1} + b_f)
o_t = sigmoid(W_xo · x_t + W_ho · h_{t−1} + b_o)
C_t = f_t ⊙ C_{t−1} + i_t ⊙ g_t
h_t = o_t ⊙ tanh(C_t)
where i, f, o, C represent the input gate, forget gate, output gate, and cell state vector, respectively; W denotes each weight matrix (e.g., W_xi is the weight matrix from the input to the input gate); x_t represents the model input at each time step; and b denotes the bias term vector. Since the output of the sigmoid function lies in [0, 1], it can serve as an index of the degree to which information is forgotten or remembered, so it is used as the activation function of all the gate units;
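The gate equations above can be written out directly; below is a minimal NumPy forward step for a single LSTM cell (the bidirectional wrapper, the fully connected output layer, and the weight layout are assumptions of this sketch).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b hold per-gate parameters for the
    g (candidate), i (input), f (forget) and o (output) gates."""
    g = np.tanh(W["g"] @ x_t + U["g"] @ h_prev + b["g"])   # candidate cell state
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])   # input gate
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])   # forget gate
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])   # output gate
    c_t = f * c_prev + i * g                               # C_t = f_t ⊙ C_{t-1} + i_t ⊙ g_t
    h_t = o * np.tanh(c_t)                                 # h_t = o_t ⊙ tanh(C_t)
    return h_t, c_t
```

A bidirectional model would run this step forward and backward over the sequence and concatenate the two hidden states before the final fully connected layer.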
Finally, the last fully connected layer integrates the previously extracted features to obtain the output sequence {d̂_1, ..., d̂_L}, where d̂_l represents the predicted data size of computation task l. The predicted data sizes obtained here are used for the subsequent offloading policy computation. The optimization goal of the algorithm is therefore to make the predicted input data size d̂_l of each task as accurate as possible.
Fig. 3 is a sequence diagram of cloud-edge-end federal learning, which can visually describe the interaction process of the whole cloud-edge-end federal learning method, as can be seen from fig. 3, that is, each client MU i executes k1Secondary local model updates, each edge server aggregating models of its customers; after each k2Secondary edge model aggregation, the cloud server aggregates the models of all edge servers, which means every k1k2The local update is performed once to communicate with the cloud.
Finally, fig. 4 is a flow chart. Unlike many existing deep learning methods that optimize all system parameters simultaneously and may generate infeasible solutions, the present application proposes a two-stage optimization scheme based on intelligent task prediction and resource allocation: the complex optimization problem is decomposed into intelligent task prediction, followed by accurate calculation unloading decisions and resource allocation based on the predicted task information. Therefore, it completely eliminates the need to solve the complex MIP (Mixed Integer Programming) problem, and the computational complexity does not explode as the network size increases.
Example (b):
Firstly, a BiLSTM model is trained at each local device (client) using its historical unloading tasks, and a global model is formed by aggregation at the edge servers and the cloud server. When the next unloading task arrives, the aggregated global model is used to predict the task, and the predicted output serves as guidance for the calculation unloading decision and resource allocation. During training, the gradient data of each round are compressed by a data sparsification method before being uploaded, which greatly reduces the communication overhead and accelerates both model convergence and the computation of unloading decisions and resource allocation.
According to the invention, by establishing a complete model-training-prediction communication optimization method, calculation unloading and resource allocation can be rapidly solved. The framework considered here corresponds to a quasi-static Internet of Things network under the current 5G-driven MEC network, with the transmitters and receivers of the network fixed in certain locations. Taking an MEC network with N = 30 as an example, the convergence time of the designed BiFOS algorithm is 0.061 seconds on average, which is an acceptable overhead for field deployment. The BiFOS algorithm therefore makes real-time unloading and resource allocation of the wireless MEC network feasible in a channel fading environment.
The invention discloses a cloud-edge-end based federated learning calculation unloading and resource allocation method. It first provides a BiLSTM-based federated learning intelligent task prediction mechanism in which each device participating in the calculation trains a model independently and locally, without uploading its data to an edge server. Parameters are then aggregated periodically at the edge and in the cloud; the purpose is to jointly train a general global Bi-directional Long Short-Term Memory (BiLSTM) model to predict information such as the data volume of calculation tasks, so as to guide calculation unloading decisions and resource allocation more accurately. The mechanism eliminates the need to solve a combinatorial optimization problem, greatly reducing computational complexity, especially in large networks, and it ensures that personal sensitive information of the users participating in the offloading process in the distributed heterogeneous edge infrastructure is neither intercepted nor revealed. To further reduce the network communication overhead of federated learning during model optimization, the FAVG (federated averaging) algorithm is improved: a 3-layer cloud-edge-end federated learning framework is designed, the uploaded gradients are sparsified, and only the important gradients are compressed and uploaded to the parameter server each time. The framework comprehensively utilizes the proximity of the edge servers to the terminal devices and the powerful computing resources of the cloud computing center, overcoming the shortage of computing resources at the edge servers. Finally, experimental results show that, without collecting users' private data, the prediction accuracy of the algorithm is superior to other learning-based unloading algorithms, and the energy consumption can be reduced by 30%.
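The "important gradient" selection described above — uploading only gradient elements whose magnitude exceeds a threshold — is essentially top-k magnitude sparsification. A minimal sketch follows; the `ratio` parameter, the function name, and the residual bookkeeping are assumptions for illustration, not details taken from the patent.

```python
import numpy as np

def sparsify_topk(grad, ratio=0.01):
    """Keep only the largest-magnitude fraction of gradient entries
    (the 'important gradients' above a threshold). The zeroed-out
    remainder is returned as a residual, which a full implementation
    would accumulate locally and add to the next round's gradient."""
    k = max(1, int(grad.size * ratio))
    flat = np.abs(grad).ravel()
    thresh = np.partition(flat, -k)[-k]       # k-th largest magnitude
    mask = np.abs(grad) >= thresh
    sparse = np.where(mask, grad, 0.0)        # uploaded to the edge server
    residual = grad - sparse                  # kept on the device
    return sparse, residual
```

Only the nonzero entries (index-value pairs) of `sparse` need to be transmitted, which is where the communication saving comes from.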

Claims (10)

1. A cloud-edge-end based federated learning calculation unloading resource allocation method, characterized by comprising the following steps:
S1, constructing a global model, and broadcasting the initialized global model and the tasks to the local devices selected for the tasks;
S2, based on the initialized global model, each local device updates its local model parameters using its own local data and local device parameters; when the set number of iterations is reached, edge parameter aggregation is performed on the local model parameters corresponding to the important gradients whose gradient data calculated by all the local devices exceed a threshold, the global model parameters are updated according to the edge parameter aggregation result, and the updated global model is fed back to each local device;
S3, when the edge parameter aggregation reaches the set number of aggregations, performing one cloud parameter aggregation;
S4, repeating steps S2-S3 until the global loss function converges or the set training precision is reached, completing the global model training;
S5, predicting the information quantity of each calculation unloading task using the trained global model to obtain the size of the calculation unloading data volume, and performing resource allocation at minimum cost according to the size of the calculation unloading data volume.
2. The cloud-edge-end based federated learning calculation unloading resource allocation method according to claim 1, wherein each local device updates its local model parameters $w_i^t$ using its own local data and local device parameters, where $t$ is the current iteration index and $i$ is the $i$-th local device; the goal of local device $i$ at the current iteration $t$ is to find the optimal parameters $w_i^t$ that minimize the loss function $F_i(w)$.
3. The cloud-edge-end based federated learning calculation unloading resource allocation method according to claim 1, wherein, after the set number of iterations is reached, the important gradients whose gradient data calculated by each local device exceed a threshold are uploaded to the edge server.
4. The cloud-edge-end based federated learning calculation unloading resource allocation method according to claim 1, wherein the specific process of cloud parameter aggregation is:
$$w^t = \sum_{i=1}^{N} \frac{|D_i|}{|D|} w_i^t$$
where $\{D_\zeta\}$ represents the aggregated data set under edge server $\zeta$, the data set of the unloading tasks is distributed over the $N$ clients, and $|D_i|$ and $|D|$ respectively denote the number of local training samples of client $i$ and the total number of training samples.
5. The cloud-edge-end based federated learning calculation unloading resource allocation method according to claim 1, wherein parameter sparsification is performed in the iterative process using the standard distributed stochastic gradient descent method, and each local device updates the local parameters through its local data set $D_i$ as
$$w_{t+1} = w_t - \eta \frac{1}{N} \sum_{k=1}^{N} \mathrm{sparse}(g_{k,t})$$
where $t$ denotes the current iteration and $w_t$ the value of the parameter $w$ at iteration $t$; $f(x, w_t)$ denotes the loss calculated from the input data $x$ and the current parameters $w_t$; $g_{k,t}$ denotes the gradient of node $k$ over $w_t$ at iteration $t$; $\mathrm{sparse}(g_{k,t})$ denotes the sparsified gradient of $g_{k,t}$; $\eta$ is the learning rate; and $g_{k,t}$ is an estimate of the gradient $\nabla f_i(w)$ obtained from a small batch of data samples $B_{k,t}$ of client $i$, namely:
$$g_{k,t} = \frac{1}{b} \sum_{x \in B_{k,t}} \nabla f(x, w_t)$$
6. The cloud-edge-end based federated learning calculation unloading resource allocation method according to claim 5, wherein at the beginning of each iteration $t$, on working node $k$, the loss $f(x, w_t)$ is calculated from the current parameters $w_t$ and the data sampled from the local data block $B_{k,t}$, and the gradient $\nabla f(x, w_t)$ can then be determined; let
$$g_{k,t} = \frac{1}{b} \sum_{x \in B_{k,t}} \nabla f(x, w_t)$$
where $B_{k,t}$ is the local data block of working node $k$ and $b$ is its size; the gradient elements of each parameter are sorted by absolute value, and the important gradients whose gradient data exceed the threshold are uploaded to the edge server.
7. The cloud-edge-end based federated learning calculation unloading resource allocation method according to claim 1, wherein resource allocation is implemented by performing a one-dimensional bisection search over the dual variable associated with the resource allocation constraint.
8. A computing offload resource allocation system based on the cloud-edge-based federated learning computing offload resource allocation method of claim 1, comprising a local device, an edge server, and a cloud server;
the edge server is used for broadcasting the initialized global model and the tasks to the local equipment selected by the tasks; the local equipment updates local model parameters according to local data and local equipment parameters of the local equipment based on the initialized global model, and feeds the updated local model back to the edge server;
the edge server performs parameter aggregation on the local models fed back by the different local devices, updates the global model parameters according to the edge parameter aggregation result, and feeds the updated global model back to each local device; when the edge parameter aggregation reaches the set number of aggregations, one cloud parameter aggregation is performed on the global models aggregated by the edge servers; the local devices input their tasks to the edge server, the edge server predicts the information quantity of each calculation unloading task using the final global model to obtain the size of the calculation unloading data volume, and resource allocation is performed at minimum cost according to the size of the calculation unloading data volume.
9. The calculation unloading resource allocation system according to claim 8, wherein, based on the initialized global model, each local device updates its local model parameters $w_i^t$ using its own local data and local device parameters, where $t$ is the current iteration index and $i$ is the $i$-th local device; the goal of local device $i$ at the current iteration $t$ is to find the optimal parameters that minimize the loss function $F_i(w)$, namely:
$$w_i^t = \arg\min_{w} F_i(w)$$
After $k_1$ rounds of iterative learning (namely, after the set number of iterations is reached), the important gradients whose calculated gradient data exceed the threshold are uploaded to the edge server.
10. The system according to claim 8, wherein one cloud server is connected to a plurality of edge servers and each edge server is connected to a plurality of local devices; each edge server aggregates the local model parameters of the local devices connected to it, and the cloud server performs cloud parameter aggregation on the global model parameters aggregated by the plurality of edge servers connected to it.
CN202110089708.9A 2021-01-22 2021-01-22 Cloud-side-based federated learning calculation unloading computing system and method Pending CN112817653A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110089708.9A CN112817653A (en) 2021-01-22 2021-01-22 Cloud-side-based federated learning calculation unloading computing system and method


Publications (1)

Publication Number Publication Date
CN112817653A true CN112817653A (en) 2021-05-18

Family

ID=75858849

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110089708.9A Pending CN112817653A (en) 2021-01-22 2021-01-22 Cloud-side-based federated learning calculation unloading computing system and method

Country Status (1)

Country Link
CN (1) CN112817653A (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112181971A (en) * 2020-10-27 2021-01-05 华侨大学 Edge-based federated learning model cleaning and equipment clustering method, system, equipment and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU Qi et al.: "Edge Learning: Key Technologies, Applications and Challenges", Radio Communications Technology *

Cited By (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191504A (en) * 2021-05-21 2021-07-30 电子科技大学 Federated learning training acceleration method for computing resource heterogeneity
CN113191504B (en) * 2021-05-21 2022-06-28 电子科技大学 Federated learning training acceleration method for computing resource isomerism
CN113469367B (en) * 2021-05-25 2024-05-10 华为技术有限公司 Federal learning method, device and system
WO2022247683A1 (en) * 2021-05-25 2022-12-01 华为技术有限公司 Federated learning method, apparatus, and system
CN113469367A (en) * 2021-05-25 2021-10-01 华为技术有限公司 Method, device and system for federated learning
CN113312180A (en) * 2021-06-07 2021-08-27 北京大学 Resource allocation optimization method and system based on federal learning
CN113361694B (en) * 2021-06-30 2022-03-15 哈尔滨工业大学 Layered federated learning method and system applying differential privacy protection
CN113361694A (en) * 2021-06-30 2021-09-07 哈尔滨工业大学 Layered federated learning method and system applying differential privacy protection
CN113642700A (en) * 2021-07-05 2021-11-12 湖南师范大学 Cross-platform multi-modal public opinion analysis method based on federal learning and edge calculation
CN113504999A (en) * 2021-08-05 2021-10-15 重庆大学 Scheduling and resource allocation method for high-performance hierarchical federated edge learning
CN113504999B (en) * 2021-08-05 2023-07-04 重庆大学 Scheduling and resource allocation method for high-performance hierarchical federal edge learning
CN113610303A (en) * 2021-08-09 2021-11-05 北京邮电大学 Load prediction method and system
CN113610303B (en) * 2021-08-09 2024-03-19 北京邮电大学 Load prediction method and system
CN113408675A (en) * 2021-08-20 2021-09-17 深圳市沃易科技有限公司 Intelligent unloading optimization method and system based on federal learning
CN113761525A (en) * 2021-09-07 2021-12-07 广东电网有限责任公司江门供电局 Intelligent intrusion detection method and system based on federal learning
CN113852692A (en) * 2021-09-24 2021-12-28 中国移动通信集团陕西有限公司 Service determination method, device, equipment and computer storage medium
CN113852692B (en) * 2021-09-24 2024-01-30 中国移动通信集团陕西有限公司 Service determination method, device, equipment and computer storage medium
CN114118437A (en) * 2021-09-30 2022-03-01 电子科技大学 Model updating synchronization method for distributed machine learning in micro cloud
CN114118437B (en) * 2021-09-30 2023-04-18 电子科技大学 Model updating synchronization method for distributed machine learning in micro cloud
WO2023061500A1 (en) * 2021-10-15 2023-04-20 Huawei Technologies Co., Ltd. Methods and systems for updating parameters of a parameterized optimization algorithm in federated learning
CN113839838B (en) * 2021-10-20 2023-10-20 西安电子科技大学 Business type identification method based on cloud edge cooperation and federal learning
CN113839838A (en) * 2021-10-20 2021-12-24 西安电子科技大学 Business type identification method for federal learning based on cloud edge cooperation
CN114116198A (en) * 2021-10-21 2022-03-01 西安电子科技大学 Asynchronous federal learning method, system, equipment and terminal for mobile vehicle
CN113971090A (en) * 2021-10-21 2022-01-25 中国人民解放军国防科技大学 Layered federal learning method and device of distributed deep neural network
CN113971090B (en) * 2021-10-21 2022-09-13 中国人民解放军国防科技大学 Layered federal learning method and device of distributed deep neural network
CN114040425A (en) * 2021-11-17 2022-02-11 中国电信集团系统集成有限责任公司 Resource allocation method based on global resource availability optimization
CN114040425B (en) * 2021-11-17 2024-03-15 中电信数智科技有限公司 Resource allocation method based on global resource utility rate optimization
CN114143212A (en) * 2021-11-26 2022-03-04 天津大学 Social learning method for smart city
CN114282646B (en) * 2021-11-29 2023-08-25 淮阴工学院 Optical power prediction method and system based on two-stage feature extraction and BiLSTM improvement
CN114282646A (en) * 2021-11-29 2022-04-05 淮阴工学院 Light power prediction method and system based on two-stage feature extraction and improved BilSTM
CN114363923B (en) * 2021-11-30 2024-03-26 山东师范大学 Industrial Internet of things resource allocation method and system based on federal edge learning
CN114363923A (en) * 2021-11-30 2022-04-15 山东师范大学 Industrial Internet of things resource allocation method and system based on federal edge learning
CN113873047B (en) * 2021-12-03 2022-02-15 江苏电力信息技术有限公司 Cooperative computing method for streaming data
CN113873047A (en) * 2021-12-03 2021-12-31 江苏电力信息技术有限公司 Cooperative computing method for streaming data
CN114357676B (en) * 2021-12-15 2024-04-02 华南理工大学 Aggregation frequency control method for hierarchical model training framework
CN114357676A (en) * 2021-12-15 2022-04-15 华南理工大学 Aggregation frequency control method for hierarchical model training framework
CN114818446A (en) * 2021-12-22 2022-07-29 安徽继远软件有限公司 Power service decomposition method and system facing 5G cloud edge-end cooperation
CN114363911B (en) * 2021-12-31 2023-10-17 哈尔滨工业大学(深圳) Wireless communication system for deploying hierarchical federal learning and resource optimization method
CN114363911A (en) * 2021-12-31 2022-04-15 哈尔滨工业大学(深圳) Wireless communication system for deploying layered federated learning and resource optimization method
WO2023134065A1 (en) * 2022-01-14 2023-07-20 平安科技(深圳)有限公司 Gradient compression method and apparatus, device, and storage medium
CN114462573A (en) * 2022-01-20 2022-05-10 内蒙古工业大学 Efficient hierarchical parameter transmission delay optimization method oriented to edge intelligence
CN114462573B (en) * 2022-01-20 2023-11-14 内蒙古工业大学 Edge intelligence-oriented efficient hierarchical parameter transmission delay optimization method
CN114465900B (en) * 2022-03-01 2023-03-21 北京邮电大学 Data sharing delay optimization method and device based on federal edge learning
CN114465900A (en) * 2022-03-01 2022-05-10 北京邮电大学 Data sharing delay optimization method and device based on federal edge learning
CN114650228A (en) * 2022-03-18 2022-06-21 南京邮电大学 Federal learning scheduling method based on computation unloading in heterogeneous network
CN114650228B (en) * 2022-03-18 2023-07-25 南京邮电大学 Federal learning scheduling method based on calculation unloading in heterogeneous network
CN114916013A (en) * 2022-05-10 2022-08-16 中南大学 Method, system and medium for optimizing unloading time delay of edge task based on vehicle track prediction
CN114916013B (en) * 2022-05-10 2024-04-16 中南大学 Edge task unloading delay optimization method, system and medium based on vehicle track prediction
CN115080249B (en) * 2022-08-22 2022-12-16 南京可信区块链与算法经济研究院有限公司 Vehicle networking multidimensional resource allocation method and system based on federal learning
CN115080249A (en) * 2022-08-22 2022-09-20 南京可信区块链与算法经济研究院有限公司 Vehicle networking multidimensional resource allocation method and system based on federal learning
CN116166406B (en) * 2023-04-25 2023-06-30 合肥工业大学智能制造技术研究院 Personalized edge unloading scheduling method, model training method and system
CN116166406A (en) * 2023-04-25 2023-05-26 合肥工业大学智能制造技术研究院 Personalized edge unloading scheduling method, model training method and system
CN116644802A (en) * 2023-07-19 2023-08-25 支付宝(杭州)信息技术有限公司 Model training method and device
CN117076132A (en) * 2023-10-12 2023-11-17 北京邮电大学 Resource allocation and aggregation optimization method and device for hierarchical federal learning system
CN117076132B (en) * 2023-10-12 2024-01-05 北京邮电大学 Resource allocation and aggregation optimization method and device for hierarchical federal learning system

Similar Documents

Publication Publication Date Title
CN112817653A (en) Cloud-side-based federated learning calculation unloading computing system and method
Yu et al. Toward resource-efficient federated learning in mobile edge computing
CN112351503B (en) Task prediction-based multi-unmanned aerial vehicle auxiliary edge computing resource allocation method
CN113543176B (en) Unloading decision method of mobile edge computing system based on intelligent reflecting surface assistance
CN111629380A (en) Dynamic resource allocation method for high-concurrency multi-service industrial 5G network
CN112788605B (en) Edge computing resource scheduling method and system based on double-delay depth certainty strategy
CN114051254B (en) Green cloud edge collaborative computing unloading method based on star-ground fusion network
CN112511336B (en) Online service placement method in edge computing system
CN114650228B (en) Federal learning scheduling method based on calculation unloading in heterogeneous network
CN115374853A (en) Asynchronous federal learning method and system based on T-Step polymerization algorithm
WO2022242468A1 (en) Task offloading method and apparatus, scheduling optimization method and apparatus, electronic device, and storage medium
Jia et al. Learning-based queuing delay-aware task offloading in collaborative vehicular networks
Binucci et al. Dynamic resource allocation for multi-user goal-oriented communications at the wireless edge
CN113778550B (en) Task unloading system and method based on mobile edge calculation
Li et al. Anycostfl: Efficient on-demand federated learning over heterogeneous edge devices
CN113961204A (en) Vehicle networking computing unloading method and system based on multi-target reinforcement learning
Jeong et al. Deep reinforcement learning-based task offloading decision in the time varying channel
CN113676357A (en) Decision method for edge data processing in power internet of things and application thereof
CN115756873B (en) Mobile edge computing and unloading method and platform based on federation reinforcement learning
CN115936110A (en) Federal learning method for relieving isomerism problem
Huang et al. WorkerFirst: Worker-centric model selection for federated learning in mobile edge computing
CN112910716B (en) Mobile fog calculation loss joint optimization system and method based on distributed DNN
Wang et al. Adaptive Compute Offloading Algorithm for Metasystem Based on Deep Reinforcement Learning
Li et al. ESMO: Joint frame scheduling and model caching for edge video analytics
Huang et al. Worker-centric model allocation for federated learning in mobile edge computing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210518