CN115278708A - Mobile edge computing resource management method for federal learning - Google Patents
- Publication number: CN115278708A (application CN202210878004.4A)
- Authority
- CN
- China
- Legal status: Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W16/00—Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
- H04W16/22—Traffic simulation tools or models
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W28/00—Network traffic management; Network resource management
- H04W28/02—Traffic management, e.g. flow control or congestion control
- H04W28/06—Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
Abstract
The invention relates to the field of edge computing, and in particular to a mobile edge computing resource management method oriented to federated learning. The invention first proposes a federated edge learning framework and formulates a model that jointly considers computation, communication resource allocation and edge association so as to minimize the global learning cost. The optimization problem is then decomposed into two subproblems, resource allocation and edge association, and an efficient resource scheduling algorithm is designed for them. The optimal strategy of the resource allocation subproblem is solved for a given device set of a single edge server, and a feasible strategy for the multi-edge-server edge association problem is then found through reduced-complexity iteration, so that the optimal solution of the original problem is approached efficiently. Compared with traditional federated learning, the proposed framework outperforms the benchmark scheme in global cost optimization and achieves better training performance.
Description
Technical Field
The invention relates to the field of edge computing, and in particular to a mobile edge computing resource management method oriented to federated learning.
Background
With the continuous evolution of mobile communication networks, Beyond-5G (B5G) and 6th-Generation (6G) networks will bring new service scenarios such as autonomous driving, industrial control, and augmented/virtual reality, which place higher demands on metrics such as bandwidth, delay, power consumption and reliability. The efficient and rapid resource scheduling required by the corresponding massive numbers of wireless access devices also poses a huge challenge to the network.
To address these problems, the concept of Mobile Edge Computing (MEC) has been proposed. Through edge computing, terminal devices can offload part or all of their computing tasks to network edge nodes such as base stations, extending the computing capability of the terminal. Compared with computation concentrated in the cloud, this approach effectively reduces task processing delay, relieves traffic pressure on the core network, and safeguards data privacy and security. The core idea of an edge-computing-based mobile edge network is to migrate the resources, content and functions of the network to its edge, thereby improving the overall resource scheduling efficiency of the network; the resource management method therefore has an important influence on system performance.
Facing the trend toward ever more complex networks, artificial-intelligence- and data-driven mobile edge network resource scheduling methods have become a key enabler of next-generation intelligent networks and are receiving wide attention. Federated Learning (FL) enables geographically distributed devices to collaboratively train a resource scheduling model while keeping raw data local, significantly reducing communication overhead and avoiding the transmission of privacy-sensitive data over wireless channels. However, the convergence speed and prediction accuracy of a federated learning system can be severely degraded by problems such as resource-constrained edge devices and aggregation errors. An efficient artificial-intelligence approach to resource scheduling is therefore needed.
To assess the state of the prior art, existing patents and literature were searched, compared and analyzed, and the following technical information with high relevance to the invention was identified:
Patent scheme 1: CN113791895A, an edge computing and resource optimization method based on federated learning. An antenna array is deployed at a small base station, downlink channel information is acquired, and channel/precoding pairs are formed as input-output training data; federated learning is performed on this data so that, finally, inputting channel information yields the corresponding precoding information. To obtain a stable learning federation while keeping system energy consumption at its lowest, user selection is performed: users with stable computing and communication capability are selected from among many users to participate in training according to the physical characteristics of each node. A contract mechanism is introduced to reward the users participating in training; the income and training cost of each user are combined into a utility function, and resources are allocated to the users so as to maximize the utility of the whole system. The drawback of this method is that user selection based on the quality of each node's physical characteristics, while yielding a stable learning federation and low energy consumption, necessarily reduces user coverage, so service coverage and training quality must be balanced.
Patent scheme 2: CN109862592A, a resource management and scheduling method in a mobile edge computing environment based on multi-base-station cooperation. The mobile edge computing smart base station allocates and schedules resources using five units: receiving, control, caching, computing and transmission. When a mobile terminal has a new computing task, it uploads a migration request to its smart base station; the management algorithm then decides whether the task is executed in that smart base station, a neighboring base station, or the cloud. If the task data is cached, the task is executed directly; if not, a data request for the task is sent to the cloud. The invention can simultaneously optimize transmission and computation delay, cache allocation, system gain and other aspects. The drawback of this method is that while the base stations cooperate and caching reduces overheads such as the delay caused by data requests, in large-scale network scenarios the optimal base-station caching strategy may conflict with network load management, the global quality of the decisions remains questionable, and the communication overhead between base stations may further increase network load.
Patent scheme 3: CN114168328A, a method and system for scheduling the computation tasks of mobile edge nodes based on federated learning, comprising the following steps: initialize the information parameters; perform local training of the DQN network deployed at each mobile edge node; during DQN training, judge whether the number of update rounds meets the aggregation frequency and, if so, update the global parameters; judge whether the number of training rounds has reached the specified count and, if so, output the result. The patent schedules the computation tasks in a mobile edge computing system from the viewpoint of their execution order, shortening task completion time by exploiting the cooperation of multiple mobile edge nodes. Drawback: by reasonably planning the execution order of the computation tasks queued at the edge, the scheme effectively shortens the average federated learning delay; however, the criterion for whether each round of training is complete depends on the DQN algorithm itself, and it cannot be determined whether the training task is suitable for other scenarios.
Disclosure of Invention
The invention aims to provide a mobile edge computing resource management method oriented to federated learning which, based on the scenario of performing machine learning in an edge network with a federated learning framework and facing the development requirements of future edge networks, reduces the overall cost of the federated learning system.
The technical scheme adopted by the invention is as follows:
a mobile edge computing resource management method oriented to federated learning, comprising the following steps:
(1) Construct a federated edge learning framework comprising a cloud server S, a set of mobile edge servers K, a set of devices N, and the data set D_n owned by each device n;
(2) Each device performs local model computation and then transmits the computed local model to its associated edge server; the edge servers perform edge aggregation, from which the total energy cost and the communication delay are obtained;
(3) Establish an optimization problem based on the total energy cost and the communication delay;
(4) Decompose the optimization problem into two key subproblems, resource allocation within a single edge server and edge association across multiple edge servers, and compute an optimal resource allocation.
Further, the step (2) specifically comprises the following steps:
S1, local model computation: let q_n be the average number of CPU cycles required to process a single data sample when device n trains its local model; one local iteration over the training data set D_n therefore requires |D_n|·q_n CPU cycles. Let f_n denote the CPU frequency the device allocates to the task. To reach a local accuracy θ ∈ (0,1) common to all devices training the same model, the number of local iterations of mobile device n is L(θ) = μ·log(1/θ), where the constant μ depends on the data size and the machine learning task. The time taken by one round of local computation is therefore:

t_n^cmp = L(θ)·|D_n|·q_n / f_n
S2, local model upload: after completing L(θ) local iterations, each device n transmits its local model update to the selected edge server i; the set of devices associated with edge server i is denoted M_i.

The transmission rate r_n of device n is:

r_n = β_{i:n}·B_i·log2(1 + p_n·h_n / N_0)

where B_i is the total bandwidth of edge server i, β_{i:n} is the bandwidth allocation ratio of device n, N_0 is the background noise power, p_n is the transmission power, and h_n is the channel gain of device n, taken to be constant within one round of local learning.

The data size of the local model parameters updated by device n is d_n, so the time to transmit them to the edge server is:

t_n^com = d_n / r_n

and the energy consumed by device n to transmit the local model parameters is:

e_n^com = p_n·t_n^com = p_n·d_n / r_n
S3, edge model aggregation: after receiving the updated model parameters from its connected device set M_i, each edge server i uses them to update the aggregation model (i.e. the edge-level global model) held on the server, broadcasts the aggregated model back to the device set M_i, and returns to step S1 until every edge server reaches the same edge accuracy ε. The number of edge iterations is:

I(ε,θ) = δ·log(1/ε) / (1 − θ)

where δ is a constant that depends on the learning task.

After I(ε,θ) edge iterations, the total energy cost E_i of edge server i is:

E_i = I(ε,θ)·Σ_{n∈M_i} (e_n^cmp + e_n^com)

where e_n^cmp denotes the local computation energy of device n, and the communication delay T_i realized by edge server i in reaching edge accuracy ε is:

T_i = I(ε,θ)·max_{n∈M_i} (t_n^cmp + t_n^com)
Further, the optimization problem in step (3) minimizes the weighted global cost over the computation resources, communication resources and edge association:

min Σ_{i∈K} (λ_e·E_i + λ_t·T_i)   s.t. (C1)-(C5)

where (C1) and (C2) represent the uplink communication resource constraint and the computing power constraint, respectively, (C3) and (C4) ensure that all devices in the system participate in model training, (C5) requires that each device be associated with an edge server, and λ_e, λ_t ∈ [0,1] are the importance weights of energy and delay, respectively.
Further, the step (4) specifically comprises the following steps:
The optimization problem is decomposed into two key subproblems, the resource allocation problem within a single edge server and the edge association problem across multiple edge servers, with the ultimate goal of minimizing latency and energy consumption; the resource allocation problem is solved with Algorithm 1 and the edge association problem with Algorithm 2;
the algorithm 1 is specifically as follows:
Ignoring edge association, i.e. neglecting constraints C4-C6, Algorithm 1 focuses on the cost minimization problem within a single edge server: given the device set, the computation resource allocation subproblem under the edge server is solved with a convex optimization solver;
The communication resource allocation is obtained in closed form: the optimal bandwidth allocation β*_{i:n} follows from the first-order optimality conditions of the convex subproblem and involves the cube root of an expression in the device-dependent constants A_n, B_n and E_n, where

E_n = L(θ)·q_n·|D_n|    (12)

B_i is the bandwidth of edge server i, and A_n, B_n and E_n are constants determined by the specific settings of device n itself;
the algorithm 2 is specifically as follows:
S1: first, an initial edge association strategy is executed by connecting each device to its nearest edge server, so that every device has a corresponding edge server; each edge server then calls Algorithm 1 to solve the optimal resource allocation problem for the devices it serves;
S2: after the resource allocation within each edge server is determined, the edge association of devices under the coverage of different edge servers is considered. The global cost index (the weighted sum of energy cost and delay) is computed; if a "transfer or exchange" of devices would decrease this value, the adjustment is allowed, otherwise no adjustment is made. Here a transfer moves a device from the device set M_i of its currently connected edge server to the device set M_j of another edge server, and an exchange swaps devices between the association sets M_i and M_j of two edge servers;
S3: step S2 is repeated until no further adjustment of the devices is possible, yielding the optimal edge association strategy.
Compared with the prior art, the invention has the following advantages:
The invention aims to optimize federated learning resource allocation and device association in a mobile edge network. First, addressing the trend of integrating machine learning into edge networks and the shortcomings of existing research, a three-layer cloud-edge-device federated learning framework is proposed and analyzed, and an optimization model that jointly considers computation, communication resource allocation and edge association is formulated. The problem is then decomposed into two key subproblems, and algorithms are designed in turn to solve them, finally achieving an efficient approximate solution of the original problem. Compared with traditional federated learning, the proposed framework outperforms the benchmark scheme in global cost optimization and achieves better training performance.
Drawings
FIG. 1 is an overall flow chart of the present invention.
Fig. 2 is a diagram of the cloud-edge-device federated learning architecture of the present invention.
FIG. 3 is a flowchart of the joint edge association and resource allocation algorithm of the present invention.
FIG. 4 is a diagram comparing the method of the present invention with conventional methods.
Detailed Description
The invention is further illustrated with reference to the accompanying figures 1-4.
A mobile edge computing resource management method oriented to federated learning, as shown in FIG. 1, comprises the following steps:
(1) Construct a federated edge learning framework comprising a cloud server S, a set of mobile edge servers K, a set of devices N, and the data set D_n owned by each device n, as shown in fig. 2;
(2) Each device performs local model computation and then transmits the result to its associated edge server; the edge servers perform edge aggregation, from which the total energy cost and the communication delay are obtained;
the method specifically comprises the following steps:
S1, local model computation: let q_n be the average number of CPU cycles required to process a single data sample when device n trains its local model; one local iteration over the training data set D_n therefore requires |D_n|·q_n CPU cycles. Let f_n denote the CPU frequency the device allocates to the task. To reach a local accuracy θ ∈ (0,1) common to all devices training the same model, the number of local iterations of mobile device n is L(θ) = μ·log(1/θ), where the constant μ depends on the data size and the machine learning task. The time taken by one round of local computation is therefore:

t_n^cmp = L(θ)·|D_n|·q_n / f_n
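As a concrete illustration, the local computation model of step S1 can be sketched as follows; μ, θ and the device parameters used here are illustrative values only, not taken from the patent.

```python
import math

def local_iterations(theta, mu=1.0):
    # L(theta) = mu * log(1/theta): local iterations needed for accuracy theta
    return mu * math.log(1.0 / theta)

def local_compute_time(theta, q_n, D_n, f_n, mu=1.0):
    # t_cmp = L(theta) * |D_n| * q_n / f_n
    # q_n: CPU cycles per sample, D_n: number of samples, f_n: CPU frequency (Hz)
    return local_iterations(theta, mu) * D_n * q_n / f_n

# Example device: 500 samples, 20 cycles/sample, 1 GHz CPU, target accuracy 0.1
t = local_compute_time(theta=0.1, q_n=20.0, D_n=500, f_n=1e9)
```

As expected from the formula, doubling the CPU frequency f_n halves the local computation time, while tightening θ increases it only logarithmically.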
S2, local model upload: after completing L(θ) local iterations, each device n transmits its local model update to the selected edge server i; the set of devices associated with edge server i is denoted M_i.

The transmission rate r_n of device n is:

r_n = β_{i:n}·B_i·log2(1 + p_n·h_n / N_0)

where B_i is the total bandwidth of edge server i, β_{i:n} is the bandwidth allocation ratio of device n, N_0 is the background noise power, p_n is the transmission power, and h_n is the channel gain of device n, taken to be constant within one round of local learning.

The data size of the local model parameters updated by device n is d_n, so the time to transmit them to the edge server is:

t_n^com = d_n / r_n

and the energy consumed by device n to transmit the local model parameters is:

e_n^com = p_n·t_n^com = p_n·d_n / r_n
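The uplink model of step S2 can be sketched in the same way; the Shannon-rate form below is assumed from the variables named in the text (bandwidth share, power, channel gain, noise), and all numbers are illustrative.

```python
import math

def tx_rate(B_i, beta, p_n, h_n, N0):
    # r_n = beta * B_i * log2(1 + p_n * h_n / N0), in bit/s
    return beta * B_i * math.log2(1.0 + p_n * h_n / N0)

def tx_time(d_n, r_n):
    # t_com = d_n / r_n: time to upload d_n bits at rate r_n
    return d_n / r_n

def tx_energy(p_n, t_com):
    # e_com = p_n * t_com: transmit power times transmission time
    return p_n * t_com

# Example: 20 MHz server bandwidth, 10% share, 0.2 W transmit power
r = tx_rate(B_i=20e6, beta=0.1, p_n=0.2, h_n=1e-7, N0=1e-10)
```

The rate grows linearly with the allocated bandwidth share but only logarithmically with transmit power, which is what makes the bandwidth allocation of Algorithm 1 below worthwhile.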
S3, edge model aggregation: after receiving the updated model parameters from its connected device set M_i, each edge server i uses them to update the aggregation model (i.e. the edge-level global model) held on the server, broadcasts the aggregated model back to the device set M_i, and returns to step S1 until every edge server reaches the same edge accuracy ε. The number of edge iterations is:

I(ε,θ) = δ·log(1/ε) / (1 − θ)

where δ is a constant that depends on the learning task.

After I(ε,θ) edge iterations, the total energy cost E_i of edge server i is:

E_i = I(ε,θ)·Σ_{n∈M_i} (e_n^cmp + e_n^com)

where e_n^cmp denotes the local computation energy of device n, and the communication delay T_i realized by edge server i in reaching edge accuracy ε is:

T_i = I(ε,θ)·max_{n∈M_i} (t_n^cmp + t_n^com)
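Putting steps S1-S3 together, the per-server cost terms can be sketched as below; the form I(ε,θ) = δ·log(1/ε)/(1−θ) and all numeric inputs are assumptions for illustration ("δ depends on the learning task" is all the text states).

```python
import math

def edge_iterations(eps, theta, delta=1.0):
    # I(eps, theta) = delta * log(1/eps) / (1 - theta)
    return delta * math.log(1.0 / eps) / (1.0 - theta)

def server_cost(devices, eps, theta, delta=1.0):
    # devices: list of (e_cmp, e_com, t_cmp, t_com) tuples for the set M_i.
    # Energy sums over devices; delay is bounded by the slowest device.
    I = edge_iterations(eps, theta, delta)
    energy = I * sum(e_cmp + e_com for e_cmp, e_com, _, _ in devices)
    delay = I * max(t_cmp + t_com for _, _, t_cmp, t_com in devices)
    return energy, delay

# Two hypothetical devices (energies in J, times in s)
E_i, T_i = server_cost([(0.5, 0.1, 0.02, 0.01), (0.8, 0.2, 0.03, 0.02)],
                       eps=0.05, theta=0.1)
```

Note the asymmetry the optimization must balance: total energy is additive across M_i, while delay is a max, so a single straggler device dominates T_i.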
(3) Establishing an optimization problem based on the total energy cost and the communication delay;
The optimization problem minimizes the weighted global cost over the computation resources, communication resources and edge association:

min Σ_{i∈K} (λ_e·E_i + λ_t·T_i)   s.t. (C1)-(C5)

where (C1) and (C2) represent the uplink communication resource constraint and the computing power constraint, respectively, (C3) and (C4) ensure that all devices in the system participate in model training, (C5) requires that each device be associated with an edge server, and λ_e, λ_t ∈ [0,1] are the importance weights of energy and delay, respectively.
(4) Decompose the optimization problem into two key subproblems, resource allocation within a single edge server and edge association across multiple edge servers, and compute an optimal resource allocation.
As shown in fig. 3, the method specifically includes the following steps:
The optimization problem is decomposed into two key subproblems, the resource allocation problem within a single edge server and the edge association problem across multiple edge servers, with the ultimate goal of minimizing latency and energy consumption; the resource allocation problem is solved with Algorithm 1 and the edge association problem with Algorithm 2;
the algorithm 1 is specifically as follows:
Ignoring edge association, i.e. neglecting constraints C4-C6, Algorithm 1 focuses on the cost minimization problem within a single edge server: given the device set, the computation resource allocation subproblem under the edge server is solved with a convex optimization solver;
The communication resource allocation is obtained in closed form: the optimal bandwidth allocation β*_{i:n} follows from the first-order optimality conditions of the convex subproblem and involves the cube root of an expression in the device-dependent constants A_n, B_n and E_n, where

E_n = L(θ)·q_n·|D_n|    (12)

B_i is the bandwidth of edge server i, and A_n, B_n and E_n are constants determined by the specific settings of device n itself;
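A minimal stand-in for the communication-resource part of Algorithm 1 is sketched below. The patent's closed-form cube-root allocation is not reproduced here; instead, as a simplified assumption, bandwidth shares are chosen so that all devices in a server's set finish uploading at the same time, which minimizes the maximum upload delay (the energy term is ignored in this sketch).

```python
import math

def bandwidth_shares(devices, N0=1e-10):
    # devices: list of (d_n, p_n, h_n) per device in M_i.
    # Setting beta_n proportional to d_n / log2(1 + SNR_n) equalizes the
    # upload times d_n / (beta_n * B_i * log2(1 + SNR_n)) across devices.
    weights = [d / math.log2(1.0 + p * h / N0) for d, p, h in devices]
    total = sum(weights)
    return [w / total for w in weights]

# Two hypothetical devices with equal SNR but different model sizes:
# the second uploads twice the data, so it gets twice the bandwidth.
shares = bandwidth_shares([(1e6, 0.2, 1e-7), (2e6, 0.1, 2e-7)])
```

With these shares every device's upload time equals Σ_n weights_n / B_i, so no single slow uplink dominates the delay term T_i.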
the algorithm 2 is specifically as follows:
S1: first, an initial edge association strategy is executed by connecting each device to its nearest edge server, so that every device has a corresponding edge server; each edge server then calls Algorithm 1 to solve the optimal resource allocation problem for the devices it serves;
S2: after the resource allocation within each edge server is determined, the edge association of devices under the coverage of different edge servers is considered. The global cost index (the weighted sum of energy cost and delay) is computed; if a "transfer or exchange" of devices would decrease this value, the adjustment is allowed, otherwise no adjustment is made. Here a transfer moves a device from the device set M_i of its currently connected edge server to the device set M_j of another edge server, and an exchange swaps devices between the association sets M_i and M_j of two edge servers;
S3: step S2 is repeated until no further adjustment of the devices is possible, yielding the optimal edge association strategy.
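The transfer step of Algorithm 2 can be sketched as a greedy loop like the following; the "exchange" move and the real per-server cost from Algorithm 1 are omitted, and a hypothetical quadratic load cost stands in for the global cost index.

```python
def greedy_association(devices, servers, cost):
    # cost(server, device_list) -> per-server cost; assoc maps device -> server.
    # Devices and servers are 1-D positions for this toy example.
    assoc = {d: min(servers, key=lambda s: abs(d - s)) for d in devices}

    def total():
        return sum(cost(s, [d for d in devices if assoc[d] == s])
                   for s in servers)

    improved = True
    while improved:                      # repeat S2 until no transfer helps
        improved = False
        for d in devices:
            for s in servers:
                if s == assoc[d]:
                    continue
                old, prev = total(), assoc[d]
                assoc[d] = s
                if total() < old:
                    improved = True      # keep the improving transfer
                else:
                    assoc[d] = prev      # revert
    return assoc

# Toy cost quadratic in server load, so balancing the two servers is optimal;
# all four devices start at the nearest server (0) and two are transferred.
toy = greedy_association([1, 2, 9, 10], [0, 100],
                         cost=lambda s, ds: len(ds) ** 2)
```

Because every accepted move strictly decreases the global cost and the number of associations is finite, the loop terminates, matching the stopping rule of step S3.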
The specific analysis steps of an embodiment of the invention are as follows:
S11, simulations evaluate the performance of the proposed cloud-edge-device federated learning framework in terms of test accuracy, training accuracy and training loss, together with the performance of the proposed resource scheduling algorithm. The simulation is set up as follows, with all devices and edge servers randomly distributed over a 500 m × 500 m area.
Table 1. Simulation parameter settings
S12, as shown in fig. 4, the left plot shows the great advantage in wide-area-network communication efficiency of the algorithm proposed in this patent over traditional device-cloud centralized FL. Without edge aggregation, the local model parameters of N devices are transmitted over the wide area network to the remote cloud. In the framework proposed by the invention, after edge aggregation only the K (usually K ≪ N) edge models of the edge servers, each similar in size to a local model, are transmitted to the cloud, so edge model aggregation saves considerable communication overhead. The right plot shows that the wireless communication overhead in the proposed framework decreases as the number of local iterations increases, whereas the wireless overhead of traditional device-cloud FL remains low: conventionally each device transmits its local model to the edge server over only one wireless connection, while the newly proposed framework requires frequent communication between edge servers and devices, consuming more wireless transmission resources. If the goal is to minimize device training overhead, a tradeoff should be made between the number of local iterations and the number of edge iterations.
Claims (4)
1. A mobile edge computing resource management method oriented to federated learning, characterized by comprising the following steps:
(1) Construct a federated edge learning framework comprising a cloud server S, a set of mobile edge servers K, a set of devices N, and the data set D_n owned by each device n;
(2) Each device performs local model computation and then transmits the computed local model to its associated edge server; the edge servers perform edge aggregation, from which the total energy cost and the communication delay are obtained;
(3) Establishing an optimization problem based on the total energy cost and the communication delay;
(4) Decompose the optimization problem into two key subproblems, resource allocation within a single edge server and edge association across multiple edge servers, and compute an optimal resource allocation.
2. The mobile edge computing resource management method oriented to federated learning as claimed in claim 1, wherein step (2) comprises the following steps:
S1, local model computation: let q_n be the average number of CPU cycles required to process a single data sample when device n trains its local model; one local iteration over the training data set D_n therefore requires |D_n|·q_n CPU cycles. Let f_n denote the CPU frequency the device allocates to the task. To reach a local accuracy θ ∈ (0,1) common to all devices training the same model, the number of local iterations of mobile device n is L(θ) = μ·log(1/θ), where the constant μ depends on the data size and the machine learning task. The time taken by one round of local computation is therefore:

t_n^cmp = L(θ)·|D_n|·q_n / f_n
S2, local model upload: after completing L(θ) local iterations, each device n transmits its local model update to the selected edge server i; the set of devices associated with edge server i is denoted M_i.

The transmission rate r_n of device n is:

r_n = β_{i:n}·B_i·log2(1 + p_n·h_n / N_0)

where B_i is the total bandwidth of edge server i, β_{i:n} is the bandwidth allocation ratio of device n, N_0 is the background noise power, p_n is the transmission power, and h_n is the channel gain of device n, taken to be constant within one round of local learning.

The data size of the local model parameters updated by device n is d_n, so the time to transmit them to the edge server is:

t_n^com = d_n / r_n

and the energy consumed by device n to transmit the local model parameters is:

e_n^com = p_n·t_n^com = p_n·d_n / r_n
S3, edge model aggregation: after receiving the updated model parameters from its connected device set M_i, each edge server i uses them to update the aggregation model (i.e. the edge-level global model) held on the server, broadcasts the aggregated model back to the device set M_i, and returns to step S1 until every edge server reaches the same edge accuracy ε. The number of edge iterations is:

I(ε,θ) = δ·log(1/ε) / (1 − θ)

where δ is a constant that depends on the learning task.

After I(ε,θ) edge iterations, the total energy cost E_i of edge server i is:

E_i = I(ε,θ)·Σ_{n∈M_i} (e_n^cmp + e_n^com)

where e_n^cmp denotes the local computation energy of device n, and the communication delay T_i realized by edge server i in reaching edge accuracy ε is:

T_i = I(ε,θ)·max_{n∈M_i} (t_n^cmp + t_n^com)
3. The mobile edge computing resource management method oriented to federated learning as claimed in claim 2, wherein the optimization problem in step (3) minimizes the weighted global cost over the computation resources, communication resources and edge association:

min Σ_{i∈K} (λ_e·E_i + λ_t·T_i)   s.t. (C1)-(C5)

where (C1) and (C2) represent the uplink communication resource constraint and the computing power constraint, respectively, (C3) and (C4) ensure that all devices in the system participate in model training, (C5) requires that each device be associated with an edge server, and λ_e, λ_t ∈ [0,1] are the importance weights of energy and delay, respectively.
4. The mobile edge computing resource management method oriented to federated learning as claimed in claim 3, wherein step (4) comprises the following steps:
The optimization problem is decomposed into two key subproblems, the resource allocation problem within a single edge server and the edge association problem across multiple edge servers, with the ultimate goal of minimizing latency and energy consumption; the resource allocation problem is solved with Algorithm 1 and the edge association problem with Algorithm 2;
Algorithm 1 is as follows:
edge association is first set aside, i.e., constraints C4–C6 are ignored, and the focus is on the overhead-minimization problem within a single edge server; for a given device set, the computation resource allocation sub-problem under that edge server is solved with a convex optimization solver;
the communication resource allocation is derived from the following equation, yielding the optimal bandwidth allocation:
E_n = L(θ) q_n D_n    (12)
where B_i is the bandwidth of edge server i, the former quantity is the cube of the latter, and A_n, B_n, E_n are constants determined by the specific settings of device n itself;
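One way to realize such a closed-form bandwidth split is to have every device in the server's set finish its upload simultaneously. This is a hypothetical stand-in for the patent's Eq. (12) derivation: loads[n] = d_n / c_n (update size over an assumed per-Hz spectral efficiency c_n), and allocating bandwidth proportionally to load equalizes the finish times.

```python
def bandwidth_allocation(B_i, loads):
    """Split the server bandwidth B_i (Hz) proportionally to each device's
    load d_n / c_n, so that every upload time d_n / (B_n * c_n) is equal."""
    total = sum(loads.values())
    return {n: B_i * load / total for n, load in loads.items()}
```

By construction the allocations sum exactly to B_i, so constraint (C1) holds with equality.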
Algorithm 2 is as follows:
s1: first, an initial edge association strategy is executed by connecting each device to its nearest edge server, so that every device has a corresponding edge server; each edge server then calls Algorithm 1 to solve the optimal resource allocation problem for its own devices;
s2: once the resource allocation within each edge server is determined, the edge association of devices covered by different edge servers is considered; an index is calculated, and if a "transfer or swap" of a device would decrease this value, the adjustment is allowed, otherwise no adjustment is made; here a transfer moves a device from the device set M_i of its connected edge server to the device set M_j of another edge server, and a swap exchanges devices between the device sets M_i and M_j of two edge servers, each device switching to the other's association set;
s3: step S2 is repeated until no device can be adjusted any further, yielding the optimal edge association strategy.
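Steps S1–S3 amount to a local search over device-server associations. A minimal sketch under stated assumptions: cost(assoc) is treated as a black box that runs Algorithm-1-style resource allocation per server and returns the system index; the transfer and swap moves and the accept-only-if-decreasing rule follow the steps above.

```python
import itertools

def greedy_association(devices, servers, cost, assoc):
    """Local search over device -> server associations: accept any single-device
    transfer or pairwise swap that strictly lowers cost(assoc); stop when no
    move improves, i.e. at a local optimum (step S3)."""
    assoc = dict(assoc)
    improved = True
    while improved:
        improved = False
        best = cost(assoc)
        for n in devices:                                  # transfer moves
            for j in servers:
                if j == assoc[n]:
                    continue
                trial = dict(assoc)
                trial[n] = j
                c = cost(trial)
                if c < best:
                    assoc, best, improved = trial, c, True
        for n, m in itertools.combinations(devices, 2):    # swap moves
            if assoc[n] == assoc[m]:
                continue
            trial = dict(assoc)
            trial[n], trial[m] = assoc[m], assoc[n]
            c = cost(trial)
            if c < best:
                assoc, best, improved = trial, c, True
    return assoc
```

With a toy cost that counts devices away from a preferred server, the search recovers the preferred association from a deliberately bad start.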
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210878004.4A CN115278708B (en) | 2022-07-25 | 2022-07-25 | Mobile edge computing resource management method oriented to federal learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115278708A true CN115278708A (en) | 2022-11-01 |
CN115278708B CN115278708B (en) | 2024-05-14 |
Family
ID=83768849
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210878004.4A Active CN115278708B (en) | 2022-07-25 | 2022-07-25 | Mobile edge computing resource management method oriented to federal learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115278708B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115696403A (en) * | 2022-11-04 | 2023-02-03 | 东南大学 | Multilayer edge computing task unloading method assisted by edge computing node |
CN117596605A (en) * | 2024-01-18 | 2024-02-23 | 北京交通大学 | Intelligent application-oriented deterministic network architecture and working method thereof |
CN117808123A (en) * | 2024-02-28 | 2024-04-02 | 东北大学 | Edge server allocation method based on multi-center hierarchical federal learning |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113194489A (en) * | 2021-04-01 | 2021-07-30 | 西安电子科技大学 | Minimum-maximum cost optimization method for effective federal learning in wireless edge network |
CN113504999A (en) * | 2021-08-05 | 2021-10-15 | 重庆大学 | Scheduling and resource allocation method for high-performance hierarchical federated edge learning |
CN114327860A (en) * | 2021-11-18 | 2022-04-12 | 北京邮电大学 | Design and resource allocation method of wireless federal learning system |
WO2022105714A1 (en) * | 2020-11-23 | 2022-05-27 | 华为技术有限公司 | Data processing method, machine learning training method and related apparatus, and device |
CN114745383A (en) * | 2022-04-08 | 2022-07-12 | 浙江金乙昌科技股份有限公司 | Mobile edge calculation assisted multilayer federal learning method |
Also Published As
Publication number | Publication date |
---|---|
CN115278708B (en) | 2024-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115278708A (en) | Mobile edge computing resource management method for federal learning | |
CN109413724B (en) | MEC-based task unloading and resource allocation scheme | |
CN110267338A (en) | Federated resource distribution and power control method in D2D communication | |
CN113435472A (en) | Vehicle-mounted computing power network user demand prediction method, system, device and medium | |
CN114143891A (en) | FDQL-based multi-dimensional resource collaborative optimization method in mobile edge network | |
CN103262593A (en) | Apparatus and method for determining a core network configuration of a wireless communication system | |
Qi et al. | Energy-efficient resource allocation for UAV-assisted vehicular networks with spectrum sharing | |
Zhang et al. | Joint resource allocation and multi-part collaborative task offloading in MEC systems | |
CN110856268A (en) | Dynamic multichannel access method for wireless network | |
CN116456493A (en) | D2D user resource allocation method and storage medium based on deep reinforcement learning algorithm | |
Xi et al. | Real-time resource slicing for 5G RAN via deep reinforcement learning | |
Lin et al. | Deep reinforcement learning-based task scheduling and resource allocation for NOMA-MEC in Industrial Internet of Things | |
Fang et al. | Smart collaborative optimizations strategy for mobile edge computing based on deep reinforcement learning | |
CN109981340B (en) | Method for optimizing joint resources in fog computing network system | |
Zheng et al. | Data synchronization in vehicular digital twin network: A game theoretic approach | |
Drainakis et al. | From centralized to federated learning: Exploring performance and end-to-end resource consumption | |
CN117459112A (en) | Mobile edge caching method and equipment in LEO satellite network based on graph rolling network | |
Zhao et al. | Multi-agent deep reinforcement learning based resource management in heterogeneous V2X networks | |
Ren et al. | Collaborative task offloading and resource scheduling framework for heterogeneous edge computing | |
Li et al. | Deep reinforcement learning for collaborative computation offloading on internet of vehicles | |
Ren et al. | Joint spectrum allocation and power control in vehicular communications based on dueling double DQN | |
Foukalas | Federated-learning-driven radio access networks | |
Peng et al. | How to Tame Mobility in Federated Learning over Mobile Networks? | |
Tian et al. | Hierarchical federated learning with adaptive clustering on non-IID data | |
CN115250156A (en) | Wireless network multichannel frequency spectrum access method based on federal learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||