CN115633383A - Multi-cooperation server deployment method in edge computing scene - Google Patents

Multi-cooperation server deployment method in edge computing scene

Info

Publication number
CN115633383A
Authority
CN
China
Prior art keywords
server
deployment
stage
edge
edge computing
Prior art date
Legal status
Pending
Application number
CN202211238060.8A
Other languages
Chinese (zh)
Inventor
赵志为
闵革勇
丛荣
张林元齐
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
2022-10-10
Filing date
2022-10-10
Publication date
2023-01-20
Application filed by University of Electronic Science and Technology of China
Priority to CN202211238060.8A
Publication of CN115633383A
Status: Pending (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/08 Load balancing or load distribution
    • H04W 28/09 Management thereof
    • H04W 28/0925 Management thereof using policies
    • H04W 28/0933 Management thereof using policies based on load-splitting ratios

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a multi-cooperation server deployment method for edge computing scenarios. Edge servers are deployed according to a two-stage incremental deployment method comprising the following two stages: the first stage is a server incremental deployment stage based on a greedy strategy; the second stage is a load distribution stage based on a convex-optimization approximation method. The number of servers is incremented and the two stages are iterated to find the optimal server deployment scheme. Addressing the problems of redundant edge-server deployment resources and excessive cost, the invention provides a cooperative-service deployment architecture and a two-stage incremental deployment method that improve server resource utilization and reduce the total server deployment cost. The method is applicable to the field of edge computing.

Description

Multi-cooperation server deployment method in edge computing scene
Technical Field
The invention belongs to the technical field of the Internet of Things and edge computing, and particularly relates to a multi-cooperation server deployment method for edge computing scenarios.
Background
With the continuous development of 5G communication technology, more and more computation-intensive and delay-sensitive applications, such as real-time video stream analysis, autonomous driving, and augmented/virtual reality, are emerging. However, Internet of Things (IoT) terminal devices do not have the computing power to run such applications locally, and although cloud computing offers strong computing power, its long transmission distances make the end-to-end delay too large to meet these applications' requirements for extremely low latency. The combination of 5G and edge computing is considered a promising way to give the Internet of Things intensive computing power.
Edge computing is a computing paradigm that sinks computing resources to the edge of the network; it overcomes the excessive transmission delay of cloud computing and can meet applications' requirements for high computation and extremely low latency. The development of 5G communication technology greatly reduces the last-hop transmission delay in edge computing and further improves edge-network performance. At the same time, however, the new characteristics introduced by 5G also bring many challenges to edge computing.
Excessive deployment cost is one of the main challenges facing 5G edge computing: a large number of computationally powerful edge servers are required to achieve full coverage of the target network and full satisfaction of terminal tasks. Because the 5G communication distance is greatly reduced, more servers must be deployed to cover the same range and achieve the same coverage as 4G, which increases deployment cost. Considering the drastic increase of delay-sensitive applications in edge computing, even more densely deployed servers are needed to meet the applications' stringent Quality-of-Service (QoS) requirements, so the actual deployment cost may be higher still.
In addition, in edge computing, a terminal node's peak computing-resource demand at certain times is far greater than its average demand. If every edge server provisions resources according to the peak demand of its terminal nodes, resource utilization is low most of the time, which is another major contributor to deployment overhead. Reducing the overall deployment cost while guaranteeing user experience therefore becomes a challenge, especially in 5G scenarios.
Accordingly, the field needs a multi-cooperation server deployment method for edge computing scenarios that can reduce edge-server deployment cost while meeting users' real-time experience requirements.
Disclosure of Invention
The invention provides a deployment architecture and a deployment method for multiple cooperative servers in an edge computing scenario. By allowing multiple edge servers to exploit their real-time load differences to cooperatively process the fluctuating load of terminals in overlapping coverage areas, the method alleviates the waste of deployment resources caused by the large gap between the peak and average terminal resource demands in a 5G edge network, reduces the overall deployment cost, and improves resource utilization.
The invention is realized by the following technical scheme:
a multi-cooperative server deployment method under an edge computing scene comprises the following steps: step S10: carrying out area discretization according to the effective communication range of the edge server and the position information of the area nodes; step S20: according to the two-stage incremental deployment method, deploying the edge server, where step S20 specifically includes the following two stages: the first stage is a server increment deployment stage based on a greedy strategy; the second stage is a load distribution stage based on a convex optimization approximation method; step S30: and according to the deployment scheme obtained by the two-stage incremental deployment method, completing the full coverage of the terminal nodes in the target edge network.
In one embodiment, the area is discretized according to the average workload of the area.
In one embodiment, in the first stage, the position with the greatest optimization possibility and the greatest cooperation potential is selected as the deployment position of the new server for incremental deployment.
In one embodiment, in the first stage, the minimum number of servers is first used to achieve full coverage of the Internet of Things devices, and then the candidate position with the highest optimization possibility is selected on top of this basic deployment scheme for incremental server deployment, so as to reduce the total server deployment overhead.
In an embodiment, the first stage further comprises selecting the server with the highest optimization possibility, specifically: calculating an optimization-possibility index for each server and finding the deployed server with the greatest cooperation possibility.
In a certain embodiment, the optimization-possibility index of each server is calculated as follows: compute the difference between the server's peak computing-resource request amount and its average computing-resource request amount; the larger the difference, the greater the server's optimization possibility.
In a certain embodiment, after the selected server is determined, the deployment position of the newly added server is determined according to the cooperation-capability indexes of all candidate positions around that server; the deployment position is the candidate deployment position with the greatest cooperation potential.
In a certain embodiment, the cooperation-capability index of each candidate position around a server is calculated as the difference between the sum of the peak independent computing requests of all Internet of Things devices that a server deployed at the candidate position could cover and the peak of the computing requests after aggregation at that server; the larger the difference, the more suitable the candidate deployment position is for deploying the new server.
In a certain embodiment, in the second stage, servers are randomly selected over multiple rounds and optimization is performed on the time slots in which the selected servers have the highest numbers of resource requests, so as to obtain the scheme with the minimum total number of deployed computing resources.
In a certain embodiment, the scheme with the minimum total number of deployed computing resources is obtained through the following steps:
step 3.1) modeling the problem of balancing the workload within each time slot as a non-convex optimization problem;
step 3.2) converting the non-convex optimization problem into a convex optimization problem using a log-sum-exp approximation function (a standard form of this approximation is shown below);
step 3.3) solving the converted convex optimization problem using the KKT conditions.
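The patent does not state the exact form of the log-sum-exp approximation it uses. The standard smooth-maximum bound that such a convexification typically relies on, written here with an assumed smoothing parameter $\mu > 0$ applied to the per-slot server loads $x_1,\dots,x_n$, is

$$\max_{1\le i\le n} x_i \;\le\; \frac{1}{\mu}\log\sum_{i=1}^{n} e^{\mu x_i} \;\le\; \max_{1\le i\le n} x_i + \frac{\ln n}{\mu}.$$

The log-sum-exp expression is convex and differentiable, so a load-balancing objective built on it admits a direct KKT-based solution; increasing $\mu$ tightens the approximation at the cost of a steeper function.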
Based on the two-stage incremental deployment method, the invention provides, for a given network, a scheme of server positions and resource quantities that minimizes the resources required and hence the server deployment cost.
The invention has the following advantages and beneficial effects:
1. Addressing the resource waste caused by the large gap between the peak and average terminal resource demands in a 5G edge network, the invention proposes a scheme in which multiple edge servers exploit their real-time load differences to cooperatively process the fluctuating load of shared terminals, effectively improving resource utilization and reducing the overall deployment cost of the edge servers.
2. The invention proposes a two-stage incremental deployment method to formulate the server deployment problem, jointly considering the two coupled sub-problems of server deployment position and workload distribution; by decoupling the two sub-problems, a good compromise is obtained between multi-server cooperation and service coverage.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention.
FIG. 1 is a schematic diagram of a collaboration mechanism of a multi-collaboration edge server according to the present invention;
FIG. 2 is a schematic flow chart of a two-stage incremental deployment method proposed by the present invention;
FIG. 3 is an example of the incremental server position selection operation in the first stage of the two-stage incremental deployment method.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to examples and the accompanying drawings; the exemplary embodiments and their descriptions are only intended to explain the present invention and are not intended to limit it.
First, the deployment architecture for multiple cooperation servers in an edge computing scenario is introduced; the edge computing scenario in the invention takes a 5G edge computing scenario as an example, but is not limited thereto. The architecture of this embodiment allows multiple edge servers to exploit their real-time load differences to cooperatively process the fluctuating loads of shared terminals, so that the computing requirements of terminals in the area are met with fewer hardware resources and higher resource utilization. This alleviates the resource waste caused by the large gap between the peak and average terminal resource demands in a 5G edge network, reduces the overall deployment cost, and improves resource utilization.
In an embodiment of the present invention, the 5G edge architecture specifically has the following features:
1. The computing resources of an edge server are measured in a basic unit corresponding to computing-power units commonly available on the market; the total resources of a server are a multiple of this basic unit, and the deployment of edge servers conforms to the ETSI standard.
2. The workloads of different types of terminal equipment can be offloaded to different edge servers for cooperative processing. The workload request volume of each terminal device changes over time; by distributing peak workload across multiple surrounding edge servers through edge-server cooperation, the deployment of redundant resources can be effectively reduced and the deployment cost lowered.
In addition, in the specific implementation, the overall deployment cost is divided into two parts: the infrastructure cost determined by the number of deployed edge servers, and the computing-power cost determined by the amount of resources.
In the edge-server deployment stage, opportunities for multi-server cooperative scheduling are taken into account, the server coverage and the server cooperation opportunity are effectively balanced, and the deployment scale, deployment positions and hardware resources of the edge servers as well as the load distribution among servers are planned reasonably, realizing an efficient and economical edge-server deployment strategy.
Fig. 1 is a schematic diagram of the cooperation mechanism of multiple cooperation edge servers and specifically shows two load distribution modes. This embodiment uses the cooperative processing mechanism of the multiple cooperation servers in the edge architecture described above. The edge servers support two load distribution mechanisms: 'a single terminal device offloads its task to multiple edge servers simultaneously' and/or 'a single terminal device offloads its task to a single edge server'.
1. The mechanism 'a single terminal device offloads its task to multiple edge servers simultaneously': the invention exploits the fact that terminal-device tasks can be data-parallel, i.e. the computing load can be divided into several partitions and offloaded to multiple edge servers for parallel processing. This mechanism uses the weak correlation between the resource amounts requested by different terminal devices in the same period of time; through load balancing, it can effectively handle a terminal device's peak computing-resource requests, reducing the deployment of redundant resources and thus the overall deployment cost.
2. The mechanism 'a single terminal device offloads its task to a single edge server': in the invention, not all terminal nodes offload a single device's task to multiple edge servers simultaneously; most nodes follow the traditional offloading mechanism. When only one edge server lies within a terminal device's communication range, or when several edge servers are within range but the terminal device's resource requests are stable, the terminal device offloads one-to-one according to the traditional mechanism.
Referring to fig. 2, this embodiment describes the specific deployment method under the edge deployment architecture, namely the two-stage incremental deployment method, as follows:
The two-stage incremental deployment method determines the number and positions of the servers by traversal with a self-incrementing server count, so that, on the premise that the servers satisfy the service quality and resource requests of the Internet of Things terminal devices, resource utilization is improved and the goal of minimizing the overall server deployment cost is achieved. In other words, server deployment and load distribution are optimized iteratively, trading off cooperation opportunity against coverage. The execution flow of the method is shown in fig. 2 and specifically comprises the following steps:
step S10: and discretizing the continuous target area into a candidate set according to the area node position information and the area average workload.
First, the continuous target area must be discretized into a candidate set. Unlike other works that use peak workload as the criterion for area discretization, the invention uses the area average workload and evaluates the cooperation opportunity of each candidate position via its optimization potential and cooperation capability. That is, area discretization is performed according to the effective communication range of the edge servers and the area node position information and/or the area average workload. Step two is then executed.
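The patent does not specify how the candidate set is constructed. Below is a minimal sketch, assuming a rectangular target area, a uniform grid whose pitch is tied to the server communication radius, and hypothetical names (discretize_area, cells_per_radius) that are not taken from the patent:

```python
import numpy as np

def discretize_area(node_xy, node_avg_load, area_w, area_h, comm_radius, cells_per_radius=2):
    """Discretize a rectangular target area into candidate deployment positions.

    node_xy       : (N, 2) array of IoT node coordinates
    node_avg_load : (N,) array of each node's average workload
    comm_radius   : effective communication range of an edge server
    Returns the candidate grid-cell centres and the average workload reachable from each.
    """
    step = comm_radius / cells_per_radius                 # grid pitch (assumed heuristic)
    xs = np.arange(step / 2, area_w, step)
    ys = np.arange(step / 2, area_h, step)
    centres = np.array([(x, y) for x in xs for y in ys])

    # Nodes within comm_radius of a candidate position could be served from it.
    dists = np.linalg.norm(centres[:, None, :] - node_xy[None, :, :], axis=2)
    covered = dists <= comm_radius
    cell_avg_load = covered @ node_avg_load               # average workload per candidate
    return centres, cell_avg_load
```

Candidates with zero reachable workload can be pruned before the deployment loop.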
Step S20: deploy the edge servers according to the two-stage incremental deployment method. This step consists of an outer layer and an inner layer. The outer layer is a loop over the number of servers up to the maximum deployment number, i.e. the total number of edge servers. The inner layer is the two-stage method: starting from the previous round's deployment scheme, it selects the position with the greatest optimization space and the greatest cooperation potential as the deployment position of a new server for incremental deployment, and then finds the minimum deployment cost through the allocation of computing tasks. A skeleton of this two-layer loop is sketched below.
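This control flow can be written as a higher-order routine; the parameter names and the division into callables below are illustrative assumptions, with the stage-1 and stage-2 procedures passed in as functions rather than implemented here:

```python
def two_stage_incremental_deployment(k_min, k_max, initial_placement,
                                     greedy_increment, load_distribution, cost_of):
    """Outer loop over the server count; inner two-stage optimization.

    initial_placement : positions of the minimum-server full-coverage scheme (k_min servers)
    greedy_increment  : callable(placement) -> placement with one extra server (stage 1)
    load_distribution : callable(placement) -> per-server resource amounts (stage 2)
    cost_of           : callable(placement, resources) -> total deployment cost
    """
    best_cost, best_plan = float("inf"), None
    placement = list(initial_placement)
    for k in range(k_min, k_max + 1):
        if k > k_min:
            placement = greedy_increment(placement)   # stage 1: add one server greedily
        resources = load_distribution(placement)      # stage 2: convex-approx. load split
        cost = cost_of(placement, resources)
        if cost < best_cost:                          # keep the best scheme seen so far
            best_cost, best_plan = cost, (list(placement), resources)
    return best_plan, best_cost
```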
The two-stage incremental deployment method comprises a first stage and a second stage. An example of the incremental server position selection operation in the first stage is shown in fig. 3.
The first stage is the server incremental deployment stage based on a greedy strategy. First, the minimum number of servers is used to achieve full coverage of the Internet of Things devices; then, on top of this basic deployment scheme, the candidate position with the greatest optimization and cooperation potential is selected for incremental server deployment, so as to reduce the total server deployment overhead. The selection of the incremental server's deployment position is guided by two indexes: deployment optimization potential and server cooperation capability.
The method specifically comprises the following steps:
Step 2.1) Select the server with the greatest potential for optimization through cooperation.
The deployed server with the highest cooperation possibility must be found. To do so, an optimization-possibility index is calculated for each server, defined as the difference between the server's peak computing-resource request amount and its average computing-resource request amount. Clearly, for a server with a large gap between peak and average, if its peak load can be balanced by a newly deployed node, the overall deployment overhead can be reduced.
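As a concrete reading of this index, the following sketch computes the peak-minus-average gap from a server's aggregated request series over the planning horizon; the function names and the array layout are assumptions, not taken from the patent:

```python
import numpy as np

def optimization_possibility(server_requests):
    """Peak minus average of one server's aggregated computing-resource requests.

    server_requests : 1-D array of the server's requested resources per time slot.
    A large gap means the server's peak could be absorbed by a cooperating
    neighbour, so it is the most promising server to assist (step 2.1).
    """
    return float(np.max(server_requests) - np.mean(server_requests))

def most_promising_server(per_server_requests):
    """Index of the deployed server whose optimization-possibility index is largest."""
    return max(range(len(per_server_requests)),
               key=lambda s: optimization_possibility(per_server_requests[s]))
```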
Step 2.2) Select the deployment position of the newly added server that enables this optimization.
The candidate deployment position with the greatest cooperation potential must be found around the server selected in the previous step. To do so, a cooperation-capability index is calculated for every candidate position around that server; the index is the difference between the sum of the peak independent computing requests of all Internet of Things devices that a server deployed at the candidate position could cover and the peak of the computing requests after aggregation at that server.
Clearly, if the two are equal, the peak request times of the nodes in the server's coverage area coincide and there is no room for optimization; the larger the difference between the two, the more effectively load balancing can be realized by deploying a server at that position, making the position well suited for the new server.
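The cooperation-capability index can thus be read as 'sum of individual peaks minus peak of the sum' over the devices a candidate position could cover. The sketch below follows that reading, with a hypothetical function name and a (devices x time slots) array layout assumed for illustration:

```python
import numpy as np

def cooperation_capability(device_requests):
    """Cooperation-capability index of one candidate deployment position.

    device_requests : (D, T) array; row d is the request series of the d-th IoT
                      device that a server at this position could cover.
    Returns sum of per-device peaks minus the peak of the aggregated series:
    zero means all covered devices peak in the same slot (no room to optimize);
    larger values mean the position can smooth peaks through cooperation.
    """
    individual_peaks = device_requests.max(axis=1).sum()
    aggregated_peak = device_requests.sum(axis=0).max()
    return float(individual_peaks - aggregated_peak)
```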
The second stage is the load distribution stage based on a convex-optimization approximation method. Servers are randomly selected over multiple rounds, and optimization is performed on the time slots in which the selected servers have the highest numbers of resource requests, so that the continuous-time optimization problem is converted into a workload distribution problem within a single time slot. The scheme with the minimum total number of deployed computing resources is obtained through the following steps:
Step 3.1) Model the problem of balancing the workload within each time slot as a non-convex optimization problem.
Step 3.2) Convert the non-convex optimization problem into a convex optimization problem using a log-sum-exp approximation function.
Step 3.3) Solve the converted convex optimization problem using the KKT conditions. A solver-based sketch of this stage is given below.
Finally, after every step within the loop has been executed, the optimal deployment cost for the current number of servers is obtained; after comparison and updating, the deployment scheme of the best solution is stored.
In other words, the method can be formalized as a mixed-integer nonlinear programming problem comprising the two coupled sub-problems of server deployment and load distribution, with the optimization objective of minimizing the edge-server deployment overhead.
The number of servers is incremented and the two stages are iterated to find the optimal server deployment scheme.
Step S30: according to the deployment scheme obtained by the two-stage incremental deployment method, complete full coverage of the terminal nodes in the target edge network.
In the above solutions, items not described in detail, such as modeling as a non-convex optimization problem, converting the non-convex problem into a convex one, solving with the KKT conditions, and discretizing the area, are, in terms of how each is implemented on its own (as opposed to the overall solution they form), conventional technical means in the art and are not described further here.
The above embodiments further describe the objects, technical solutions and advantages of the present invention in detail. It should be understood that the above embodiments are only examples of the present invention and are not intended to limit its scope; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present invention shall fall within the scope of the present invention.

Claims (10)

1. A multi-cooperative server deployment method in an edge computing scenario, characterized by comprising the following steps:
step S10: performing area discretization according to the effective communication range of the edge servers and the position information of the area nodes;
step S20: deploying the edge servers according to a two-stage incremental deployment method, where step S20 specifically comprises the following two stages:
the first stage is a server incremental deployment stage based on a greedy strategy;
the second stage is a load distribution stage based on a convex-optimization approximation method;
step S30: according to the deployment scheme obtained by the two-stage incremental deployment method, completing full coverage of the terminal nodes in the target edge network.
2. The multi-cooperative server deployment method in the edge computing scenario according to claim 1, wherein:
area discretization is performed according to the area average workload.
3. The multi-cooperative server deployment method in the edge computing scenario according to claim 1, wherein:
in the first stage, the position with the greatest optimization possibility and the greatest cooperation potential is selected as the deployment position of the new server for incremental deployment.
4. The multi-cooperative server deployment method in the edge computing scenario according to claim 3, wherein:
in the first stage, the minimum number of servers is first used to achieve full coverage of the Internet of Things devices, and then the candidate position with the highest optimization possibility is selected on top of this basic deployment scheme for incremental server deployment, so as to reduce the total server deployment overhead.
5. The multi-cooperative server deployment method in the edge computing scenario according to claim 4, wherein the first stage further comprises: selecting the server with the highest optimization possibility, specifically:
calculating an optimization-possibility index for each server, and finding the deployed server with the highest cooperation possibility.
6. The multi-cooperative server deployment method in the edge computing scenario according to claim 5, wherein the optimization-possibility index of each server is calculated as follows:
calculating the difference between the server's peak computing-resource request amount and its average computing-resource request amount; the larger the difference, the greater the server's optimization possibility.
7. The multi-cooperative server deployment method in the edge computing scenario according to any one of claims 3 to 6, wherein:
after the selected server is determined, the deployment position of the newly added server is determined according to the cooperation-capability indexes of all candidate positions around that server, the deployment position being the candidate deployment position with the greatest cooperation potential.
8. The multi-cooperative server deployment method in the edge computing scenario according to claim 7, wherein:
the cooperation-capability index of each candidate position around a server is calculated as the difference between the sum of the peak independent computing requests of all Internet of Things devices that a server deployed at the candidate deployment position could cover and the peak of the computing requests after aggregation at that server; the larger the difference, the more suitable the candidate deployment position is for deploying the new server.
9. The multi-cooperative server deployment method in the edge computing scenario according to any one of claims 1 to 6 or claim 8, wherein:
in the second stage, servers are randomly selected over multiple rounds and optimization is performed on the time slots in which the selected servers have the highest numbers of resource requests, so as to obtain the scheme with the minimum total number of deployed computing resources.
10. The multi-cooperative server deployment method in the edge computing scenario according to claim 9, wherein the scheme with the minimum total number of deployed computing resources is obtained through the following steps:
step 3.1) modeling the problem of balancing the workload within each time slot as a non-convex optimization problem;
step 3.2) converting the non-convex optimization problem into a convex optimization problem using a log-sum-exp approximation function;
step 3.3) solving the converted convex optimization problem using the KKT conditions.
CN202211238060.8A 2022-10-10 2022-10-10 Multi-cooperation server deployment method in edge computing scene Pending CN115633383A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211238060.8A CN115633383A (en) 2022-10-10 2022-10-10 Multi-cooperation server deployment method in edge computing scene

Publications (1)

Publication Number Publication Date
CN115633383A true CN115633383A (en) 2023-01-20

Family

ID=84904048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211238060.8A Pending CN115633383A (en) 2022-10-10 2022-10-10 Multi-cooperation server deployment method in edge computing scene

Country Status (1)

Country Link
CN (1) CN115633383A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117527807A (en) * 2023-11-21 2024-02-06 扬州万方科技股份有限公司 Multi-micro-cloud task scheduling method, device and equipment
CN117527807B (en) * 2023-11-21 2024-05-31 扬州万方科技股份有限公司 Multi-micro-cloud task scheduling method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination