CN111585916B - LTE power wireless private network task unloading and resource allocation method based on cloud edge cooperation - Google Patents

Info

Publication number
CN111585916B
CN201911365711.8A (application) · CN111585916A (publication) · CN111585916B (grant)
Authority
CN
China
Prior art keywords
task
computing
cloud
edge node
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911365711.8A
Other languages
Chinese (zh)
Other versions
CN111585916A (en)
Inventor
李欢
孙峰
刘扬
王东东
卢盛阳
杨智斌
任帅
李桐
佟昊松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Liaoning Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Liaoning Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Electric Power Research Institute of State Grid Liaoning Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN201911365711.8A
Publication of CN111585916A
Application granted
Publication of CN111585916B
Legal status: Active
Anticipated expiration legal-status: Critical

Classifications

    • H04L 47/70 - Traffic control in data switching networks: admission control; resource allocation
    • H04L 47/805 - QoS or priority aware
    • H04L 5/0044 - Arrangements for allocating sub-channels of the transmission path: allocation of payload
    • H04L 5/006 - Allocation criteria: quality of the received signal, e.g. BER, SNR, water filling
    • H04L 67/1074 - Peer-to-peer [P2P] networks for supporting data block transmission mechanisms
    • Y02D 30/70 - Reducing energy consumption in wireless communication networks

Abstract

The invention belongs to the field of electric power technology, and specifically relates to a task offloading and resource allocation method for an LTE power wireless private network based on cloud-edge collaboration. The invention computes all delays incurred by each device and, from the four different delays present in the cloud-edge collaborative system, establishes a mathematical model for the problem of minimizing the total delay of all mobile devices; it then considers task offloading schemes for four different system scenarios and solves for the optimal allocation of computing resources in the cloud-edge collaborative system. Under all conditions the performance of the method is optimal: by executing the cloud-edge collaborative strategy, the proposed task offloading and resource allocation strategy optimally allocates the computing capacities of the edge nodes and the cloud server, thereby minimizing the average system delay.

Description

LTE power wireless private network task unloading and resource allocation method based on cloud edge cooperation
Technical Field
The invention belongs to the field of electric power technology, and specifically relates to an LTE power wireless private network task offloading and resource allocation method based on cloud-edge collaboration.
Background
With the rapid development of the LTE power wireless private network, the number of access devices in the network keeps increasing, which poses great challenges to the traditional cloud computing network. Multi-hop data transmission in traditional cloud computing leads to larger task delays and a heavier core-network load. Mobile edge computing is regarded as a key technology for next-generation wireless communication: by deploying mobile edge computing centers at the network edge, it provides low-latency computing services and increases the computing power of the network, so as to meet the growing demands of users.
In order to use the services provided by an edge network, how a mobile device offloads the tasks it carries to an edge server so as to make an efficient and reasonable offloading decision has become a main research direction of the current edge-computing field. However, with the development of technologies such as the Internet of Things, the scale of data computing services grows explosively, task data pours into the computing network, and the limited computing and communication resources of a mobile edge computing center are severely challenged.
The prior art is as follows:
technical scheme 1: the patent publication number is CN109302709A, the name is a mobile edge computing-oriented vehicle networking task offloading and resource allocation strategy, and the task offloading mode decision and resource allocation method based on (MEC) in a vehicle heterogeneous network is mainly completed through two steps: firstly, clustering request vehicles according to different QoS by adopting an improved K-means algorithm so as to determine a communication mode; second, with Contention Free Period (CFP) based LTE-U, a distributed Q-Learning algorithm is used for channel and power allocation in combination with Carrier Aggregation (CA) techniques.
Technical scheme 2: patent publication number CN109698861A, named a computing task unloading algorithm based on cost optimization, relates to a computing task unloading algorithm based on cost optimization, and is mainly completed by four steps: firstly, constructing a new edge cloud computing model; the Bian Yun calculation of the new model involves calculating three important costs: the execution cost of the computing tasks, the communication cost between the computing tasks at the same end and the asymmetric communication cost between the computing tasks at the cross end; secondly, expanding a new edge cloud computing model; thirdly, merging calculation cost; fourth, an optimization offload policy is solved based on a greedy criterion.
Technical scheme 3: patent publication number CN110489176A, named as a multi-access edge computing task unloading method based on the boxing problem, is mainly completed by three steps: firstly, calculating the capacity of each edge server and the ratio of the size of input data of each terminal task to the required calculation resource; then forming two queues from large to small according to the capacity and the task ratio; and finally, sequentially taking out the tasks in the task queue, and repeating the operation until the task queue is empty, wherein the tasks are configured on an edge server with the maximum capacity and the residual computing resources in the container queue.
Technical scheme 1 exploits the IT service environment and cloud computing capability provided by mobile edge computing (MEC), with its advantages of high bandwidth and low delay, and combines it with the LTE unlicensed spectrum (LTE-U) technology to study MEC-based task offloading mode decision and resource allocation in a vehicular heterogeneous network. Considering the differentiated link requirements, i.e., the high capacity of the vehicle-to-roadside-unit (V2I) link and the high reliability of the vehicle-to-vehicle (V2V) link, it models user quality of service (QoS) as a combination of capacity and latency. However, such methods are difficult to generalize to other contexts.
Technical scheme 2 relates to a cost-optimization-based computing task offloading algorithm, which solves the offloading optimization problem of computing tasks in a framework combining edge computing and cloud computing. However, this method does not fully consider the different types of delay present in a cloud-edge collaborative system.
Technical scheme 3 provides a multi-access edge computing task offloading method based on the bin-packing problem: user terminals and edge servers are regarded as task containers and tasks as items, so that the task offloading decision problem in edge computing is converted into a bin-packing problem, and the number of enabled edge servers in the network is minimized through a heuristic method to solve the task offloading decision. A disadvantage of this approach is that it does not distinguish between different types of systems.
Therefore, how to make efficient task offloading and resource allocation decisions under limited resources has become a critical issue.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a cloud-edge-collaboration-based LTE power wireless private network task offloading and resource allocation method, whose purpose is to study the different delays present in a cloud-edge collaborative system and to minimize the total delay of all mobile devices, under limited resources, through task offloading and computing-resource allocation.
The technical scheme adopted to solve the technical problem is as follows:
In the LTE power wireless private network task offloading and resource allocation method based on cloud-edge collaboration, four different delays exist in the cloud-edge collaborative system; all delays of each device are computed, and a mathematical model is established for the problem of minimizing the total delay of all mobile devices. Task offloading schemes are considered for four different system scenarios, and the optimal allocation of computing resources in the cloud-edge collaborative system is solved. The method comprises the following steps:
step 1, establishing a cloud edge cooperative system;
step 2, establishing an MEC cache model;
step 3, analyzing the time delay existing in the cloud edge cooperative system;
step 4, constructing a problem model according to the time delay analysis result;
step 5, decomposing the constructed problem;
step 6, providing a solution to the constructed problem.
The cloud-edge collaborative system established in step 1 consists of a central cloud server and M SCeNBs, denoted by the set M = {1, 2, ..., M}. Each SCeNB is equipped with an MEC server, which uses its limited resources for data processing, caching and storage; each combination of an SCeNB and an MEC server is referred to as an edge node. Within the coverage area of the j-th SCeNB there is a set χ_j of mobile devices, each of which has a computing task with its own latency requirement. It is assumed that each user is already connected to one base station, the specific connection relation being determined by the user communication strategy. Each mobile device is connected to the corresponding base station through a wireless channel, and the edge nodes transmit data to the cloud server through different backhaul links. In the system, each computing task is assumed to be processable both on the edge node and on the cloud server. Following [1], it is assumed that all tasks are of the same type and arrive at the same time; q_{i,j} = (s_{i,j}, c_{i,j}) denotes the computing task generated by the i-th mobile device connected to the j-th edge node, where s_{i,j} is the size of the task and c_{i,j} is the number of CPU cycles required per bit of the task. N denotes the total number of tasks requested by the system within a given time period. The computing power of the MEC server in the j-th edge node and that of the cloud server are denoted F_j^e and F^c, respectively. The computing resources of each MEC server and of the cloud server are allocated to the mobile devices through virtual-machine technology.
Step 2, establishing the MEC cache model, includes: for the tasks, the cache vector of the MEC servers is defined as Y = [y_{1,1}, y_{1,2}, ..., y_{i,j}]; y_{i,j} = 1 indicates that the MEC server has cached the computation result of the task, and y_{i,j} = 0 indicates that the corresponding task result is not cached. When y_{i,j} = 1, the MEC server transmits the cached result directly to the mobile device without computing. Since the transmission power of the SCeNB is larger than that of the mobile device, and the data volume of a computation result is smaller than that of the task itself, the transmission delay on the wireless downlink is ignored and the transmission delay required by the MEC caching scheme is taken as 0. The MEC cache model returns the results of frequently requested tasks directly to the mobile devices, thereby reducing both delay and energy consumption.
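As an illustrative sketch (not part of the patent text), the cache short-circuit described above can be expressed as follows; the class and method names are hypothetical:

```python
class MECCache:
    """Toy MEC result cache: a hit corresponds to y_ij = 1 and returns the
    stored result with zero modeled downlink delay; a miss (y_ij = 0) means
    the task must be uploaded and computed."""

    def __init__(self):
        self.results = {}  # task id -> cached computation result

    def lookup(self, task_id):
        """Return (hit, result)."""
        if task_id in self.results:
            return True, self.results[task_id]
        return False, None

    def store(self, task_id, result):
        self.results[task_id] = result


cache = MECCache()
cache.store("q_1_1", 42)
hit, res = cache.lookup("q_1_1")   # y_11 = 1: result returned, no computation
miss, _ = cache.lookup("q_2_1")    # y_21 = 0: task must be offloaded
print(hit, res, miss)
```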
Step 3 analyzes the delays present in the cloud-edge collaborative system, which includes: in the cloud-edge-device task allocation model, each mobile device, owing to its limited computing capacity and battery power, is assumed to send a task request, and it is judged whether the result of the task already exists in the cache. If it does, the task result is returned directly; if not, the mobile device uploads the task to the edge node, and the corresponding edge node determines whether the task is processed by the edge node alone or cooperatively by the edge node and the cloud server.
If the cached result does not exist, the mobile device uploads the task to an edge node, and the edge node is also responsible for determining the offloading ratio of each task between edge and cloud.
The delay analysis of the cloud-edge collaborative system in step 3 specifically comprises the following steps:
Step (1): the mobile device sends the request for its task to the connected edge node, and the edge node judges whether the result of that task already exists in the cache. If it does, the task result is returned directly and the task processing ends; otherwise, continue with the next step.
Step (2): the mobile device uploads the entire delay-sensitive task directly to the connected edge node through the wireless channel.
Step (3): the MEC server located at each edge node divides the received task into two parts, one of which remains on the MEC server while the other is offloaded to the cloud server.
Step (4): the MEC server allocates its available computing resources to each computing task and uploads the offloaded data to the cloud server through the backhaul link.
Step (5): the cloud server allocates computing resources to the corresponding computing tasks so as to realize parallel computing.
Step (6): each edge node collects the computation results, caches them according to the task caching strategy, and returns them to the respective mobile devices.
The delays in the computation process include:
a. Transmission delay of the mobile device. According to the Shannon formula, the upload rate r_{i,j} of mobile terminal MT_i is expressed as

r_{i,j} = B log_2( 1 + p_i g_{i,j} / (σ^2 + I_{i,j}) )        (1)

where B is the channel bandwidth, p_i the transmission power of MT_i, σ^2 the noise power of the mobile device, I_{i,j} the interference power in the cell, and g_{i,j} the channel gain of the communication.
The SCeNB communicates with its MEC server over optical fiber, whose transmission rate is far greater than the upload rate r_{i,j} of MT_i; the resulting delay is very small, so the communication delay between SCeNB and MEC server is ignored. The delay incurred when the mobile device uploads task q_{i,j} to the j-th SCeNB is

t^{tr}_{i,j} = s_{i,j} / r_{i,j}        (2)

where s_{i,j} is the size of the computing task.
b. Computing delay of the edge node. After the edge node has successfully received the complete computing task sent from the mobile device, the MEC server immediately executes the offloading strategy and divides the computing task into two parts, one executed by the MEC server and the other by the cloud server. It is assumed that each computing task can be split arbitrarily without considering the task content, which corresponds to scenarios such as video compression and speech recognition. Define ε_{i,j} ∈ [0,1] as the division ratio of the task, i.e., the proportion of the task data executed at the MEC server, and let f^e_{i,j} denote the computing resources allocated by the j-th edge node to the i-th mobile device. The delay generated by task execution at the edge node is

t^{e}_{i,j} = ε_{i,j} s_{i,j} c_{i,j} / f^e_{i,j}        (3)

where c_{i,j} is the number of CPU cycles required per bit of the task.
c. Transmission delay of the edge node. Each edge node separates its communication module (the transceiver) from its computing module (CPU/GPU), so that within an edge node the computation of a task and its transmission are executed in parallel. All edge nodes are connected to the cloud server through different backhaul links. To provide an optimal cooperation strategy between edge computing and cloud computing, it is assumed that the resource scheduling strategy and the routing algorithm are already determined. Let H_j denote the backhaul communication capacity of each device associated with the j-th edge node; 1/H_j then represents the time required to transmit one bit of data over the backhaul link. The average backhaul transmission delay is proportional to the size of the transmitted data:

t^{b}_{i,j} = (1 - ε_{i,j}) s_{i,j} / H_j        (4)

d. Computing delay of the cloud server. When the cloud server has successfully received the task data transmitted from the edge node, it allocates its available computing resources to each task to realize parallel processing. (1 - ε_{i,j}) s_{i,j} c_{i,j} is the number of CPU cycles, i.e., the computing resources, required to execute the part of the task offloaded to the cloud server. Let f^c_{i,j} denote the cloud computing resources allocated to the i-th mobile device served by the j-th edge node. The computation delay of the cloud server is

t^{c}_{i,j} = (1 - ε_{i,j}) s_{i,j} c_{i,j} / f^c_{i,j}        (5)
The different delays present in the cloud-edge collaborative system in step 1 comprise: the transmission delay of the mobile device, the computing delay of the edge node, the transmission delay of the edge node, and the computing delay of the cloud server. The total delay generated by each device rests on two assumptions:
Assumption 1: since the offloading of a task depends on the specific parameters of that task, the offloading decision is made only after the task has been completely received by the edge node.
Assumption 2: since the computation of a task depends on its specific data structure and on the correlation between adjacent data, the cloud server starts to process a task only after the transmission between the edge node and the cloud server has finished.
Based on the above assumptions, the total delay incurred by the i-th mobile device served by the j-th edge node can be expressed as

T_{i,j} = t^{tr}_{i,j} + max( t^{e}_{i,j}, t^{b}_{i,j} + t^{c}_{i,j} )        (6)

where t^{tr}_{i,j} is the delay of the mobile device uploading task q_{i,j} to the j-th SCeNB, t^{e}_{i,j} the delay generated by executing the task at the edge node, t^{b}_{i,j} the delay of the edge node uploading the task to the cloud server, and t^{c}_{i,j} the delay generated by processing the computing task at the cloud server.
The method minimizes the total delay of all mobile devices through task offloading and computing-resource allocation under limited resources; the objective is expressed as

Q1:  min_{ε, f^e, f^c}  Σ_j Σ_i (1 - y_{i,j}) T_{i,j}        (7)
s.t.  Σ_i f^e_{i,j} ≤ F^e_j,  for all j        (7a)
      Σ_j Σ_i f^c_{i,j} ≤ F^c        (7b)
      ε_{i,j} ∈ [0, 1]        (7c)
      f^e_{i,j} ≥ 0,  f^c_{i,j} ≥ 0        (7d)

where y_{i,j} is the cache state of the task. Constraints (7a) and (7b) ensure that neither the edge computing resources nor the cloud computing resources allocated to the mobile devices exceed the respective maximum available computing resources. The optimization variables are the offloading ratios {ε_{i,j}} and the computing-resource allocations {f^e_{i,j}}, {f^c_{i,j}}.
The decomposition of the constructed problem in step 5 comprises the following steps:
Q1 is decomposed into two parts: the delay of the tasks whose results are cached (y_{i,j} = 1), which is 0, and the delay of the non-cached tasks. Task offloading and computing-resource allocation only affect the solution of the latter part, which gives the subproblem

Q2:  min_{ε, f^e, f^c}  Σ_j Σ_i (1 - y_{i,j}) T_{i,j}   s.t. (7a)-(7d)

where the cache vector Y is still to be determined.
The solution of the constructed problem in step 6 includes:
(1) Task caching strategy:
Before the task offloading decision is made, problem Q2 is further simplified: the task caching strategy is used to compute the cache vector Y = [y_{1,1}, y_{1,2}, ..., y_{i,j}]; the algorithm proceeds as shown in FIG. 4.
It is assumed that the number of requests of the mobile devices for a task q follows a Poisson distribution,

P(X = c) = λ^c e^{-λ} / c!

where c is the number of requests of the task and λ the average occurrence rate of the task per unit time. At initialization the vector group Y is set empty. For each task, the task type is first determined according to the Poisson distribution; then it is judged whether the task exists in the task cache table. If so, the MEC server directly returns the execution result to the mobile device and y_{i,j} is set to 1; otherwise it is further judged whether the task exists in the task history table. If it is present in that table, y_{i,j} is set to 1, the execution result is returned directly, the task is stored into the task cache table, and the task history table is updated. If neither table records task q_{i,j}, then y_{i,j} is set to 0. The task space is traversed in this way, and finally the vector group Y = [y_{1,1}, y_{1,2}, ..., y_{i,j}] is obtained.
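The procedure of FIG. 4 can be sketched as follows. This is an illustrative reading of the algorithm with hypothetical table structures; the Poisson draw stands in for "determining the task type according to the Poisson distribution", and recording a fully missed task in the history table is an assumption (consistent with the caching in step (6)):

```python
import math
import random

def poisson_sample(rng, lam):
    """Knuth's method for drawing a Poisson-distributed integer."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def build_cache_vector(num_requests, num_task_types, lam, seed=0):
    """Sketch of FIG. 4: y = 1 if the requested task is found in the task
    cache table or the task history table, else y = 0."""
    rng = random.Random(seed)
    cache_table, history_table = set(), set()
    Y = []
    for _ in range(num_requests):
        # Task type drawn via the Poisson distribution, folded onto the types
        task = poisson_sample(rng, lam) % num_task_types
        if task in cache_table:
            Y.append(1)                 # result returned directly
        elif task in history_table:
            Y.append(1)                 # return result, promote to cache table
            cache_table.add(task)
        else:
            Y.append(0)                 # must be computed and offloaded
            history_table.add(task)     # record it (assumption, cf. step (6))
    return Y

Y = build_cache_vector(num_requests=200, num_task_types=5, lam=3.0, seed=1)
print(len(Y), sum(Y))
```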
Using the vector group Y = [y_{1,1}, y_{1,2}, ..., y_{i,j}], Q2 is simplified to obtain

Q3:  min_{ε, f^e, f^c}  Σ_{(i,j): y_{i,j}=0} T_{i,j}   s.t. (7a)-(7d)

i.e., only the tasks whose results are not cached remain in the objective.
(2) Task offloading strategy:
The optimal task offloading ratio ε*_{i,j} is determined while holding the resource allocations f^e_{i,j} and f^c_{i,j} fixed during the analysis.
The optimal division ratio ε*_{i,j} is determined by analyzing the monotonicity of the delays with respect to the allocation ratio ε_{i,j}. First, from formula (3), t^{e}_{i,j} increases monotonically with ε_{i,j}; for ε_{i,j} ∈ [0,1] its range is [0, s_{i,j} c_{i,j} / f^e_{i,j}]. Second, from formulas (4) and (5), t^{b}_{i,j} + t^{c}_{i,j} decreases monotonically as ε_{i,j} increases, with range [0, s_{i,j}/H_j + s_{i,j} c_{i,j}/f^c_{i,j}]. Observing formula (6) and combining the above, T_{i,j} first decreases and then increases with ε_{i,j}; its minimum is attained at the balance point where the two branches of the max are equal, t^{e}_{i,j} = t^{b}_{i,j} + t^{c}_{i,j}, which gives

ε*_{i,j} = ( 1/H_j + c_{i,j}/f^c_{i,j} ) / ( c_{i,j}/f^e_{i,j} + 1/H_j + c_{i,j}/f^c_{i,j} )

and the corresponding minimum of the offloading delay max( t^{e}_{i,j}, t^{b}_{i,j} + t^{c}_{i,j} ) is ε*_{i,j} s_{i,j} c_{i,j} / f^e_{i,j}.
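The monotonicity argument can be checked numerically: the sketch below grid-searches the total delay over ε and compares the minimizer with the closed-form balance point (all parameter values are illustrative assumptions):

```python
def total_delay(eps, s, c, f_e, f_c, H, t_tr=0.0):
    """Total delay (6) for a given split ratio eps."""
    t_e = eps * s * c / f_e            # edge computing delay
    t_b = (1 - eps) * s / H            # backhaul delay
    t_c = (1 - eps) * s * c / f_c      # cloud computing delay
    return t_tr + max(t_e, t_b + t_c)

s, c, f_e, f_c, H = 1e6, 100, 5e9, 20e9, 10e6

# Closed-form balance point: eps* = (1/H + c/f_c) / (c/f_e + 1/H + c/f_c)
A = 1.0 / H + c / f_c
eps_star = A / (c / f_e + A)

# Grid search over eps in [0, 1]
grid = [i / 10000 for i in range(10001)]
eps_num = min(grid, key=lambda e: total_delay(e, s, c, f_e, f_c, H))

print(round(eps_star, 3), round(eps_num, 3))
```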
For each mobile device two important parameters are defined:
(2.1) the normalized backhaul communication capacity, defined as the ratio of backhaul communication capacity to edge computing capability, i.e.

μ_{i,j} = H_j / f^e_{i,j}        (8)

(2.2) the normalized cloud computing capacity, defined as the ratio of cloud computing capacity to edge computing capacity, i.e.

ν_{i,j} = f^c_{i,j} / f^e_{i,j}        (9)

Based on these definitions, the optimal task offloading strategy is obtained. In the cloud-edge collaborative system, the optimal task offloading ratio ε*_{i,j} is expressed as

ε*_{i,j} = ( ν_{i,j} + μ_{i,j} c_{i,j} ) / ( μ_{i,j} ν_{i,j} c_{i,j} + ν_{i,j} + μ_{i,j} c_{i,j} )        (10)

where μ_{i,j} is the ratio of backhaul communication capacity to edge computing capacity and ν_{i,j} the ratio of cloud computing capacity to edge computing capacity.
the different systems present in the formula (10) are as follows:
1 st: communication limited system: the edge node and the cloud server have sufficient computing resources, but the communication capacity is insufficient;
in this case the number of the elements to be formed is,this occurs when one edge node connects a large number of mobile devices and the capacity of the backhaul link is insufficient, simplifying the optimal task offload ratio:
Equation (11) shows that in this case, the optimal task offload ratio is determined only by the normalized backhaul communication capacity, unaffected by the normalized cloud computing capacity; in this case, the communication resources reduce the main bottleneck of the end-to-end delay of each device; when mu i,j 0, optimal task off-load ratio1, all calculation tasks are executed at the edge node;
2 nd: computing a constrained system: the edge node and the cloud server have sufficient communication capacity, but the computing resources are insufficient;
in this case the number of the elements to be formed is,simplifying the optimal task unloading ratio:
the optimal task offloading ratio is determined only by the normalized cloud computing capability, in which case, since the backhaul communication capacity is relatively sufficient, the latency generated by the communication becomes small, and the magnitude of the computation latency determines the magnitude of the overall latency of the mobile device; in this case, the edge node and cloud server are considered as a whole, based on their ratio of computing power, i.e. v i,j Splitting tasks according to a proportion; if the computing power of the edge node is greater than the computing power of the cloud server, i.e., v i,j < 1, then offload more data to the edge node; otherwise, offloading more data into the cloud, at v i,j In the special case of =1, the data is equally distributed between the edge node and the cloud server;
3 rd: edge-guided systems: the edge node allocates far more computing resources to the mobile device than the cloud server, i.eObtain v i,j The case 0 corresponds to a large-scale small cellular network, where a cloud server serves many edge nodes; in this system, the optimal task offload ratio is reduced as follows:
indicating that the whole task should be executed on the edge node, the whole task only needs to be unloaded to the edge node;
4 th: cloud-oriented system: the cloud server has sufficient computing resources, while the edge node has limited computing resources, i.eObtain v i,j The system corresponds to a scene with strong computing power of a cloud server and weak computing power of an edge node; will v i,j The →infinity substitution formula can be obtained:
according to the formula, it is seen that the optimal task offload ratio is determined only by the normalized backhaul communication capacity; if backhaul communication capacity H j Increasing or edge node computing powerThe optimal task unloading rate is reduced along with the reduction; because of v i,j When in → infinity, the time delay generated by the cloud executing task is negligible compared with the time delay generated by the edge node executing task; thus when mu i,j When < 1, the time delay generated by the backhaul transmission will dominate the overall time delay of the mobile device, thereby leading to a larger proportion of task data offloaded to the edge side processing, i.e +.>Otherwise, when mu i,j More data should be offloaded to the cloud server for execution when > 1;
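The four regimes of formula (10) can be checked with a quick numerical sketch; the extreme parameter values are illustrative choices that push μ and ν toward their limits:

```python
def eps_opt(mu, nu, c):
    """Optimal offloading ratio (10): (nu + mu*c) / (mu*nu*c + nu + mu*c)."""
    return (nu + mu * c) / (mu * nu * c + nu + mu * c)

c = 2.0
# Case 1: communication-limited (nu >> mu*c): approaches 1/(1 + mu*c)
print(round(eps_opt(mu=0.25, nu=1e9, c=c), 3))   # ~ 1/(1 + 0.5) = 0.667
# Case 2: computation-limited (mu*c >> nu): approaches 1/(1 + nu)
print(round(eps_opt(mu=1e9, nu=3.0, c=c), 3))    # ~ 1/4 = 0.25
# Case 3: edge-dominated (nu -> 0): approaches 1
print(round(eps_opt(mu=0.25, nu=1e-9, c=c), 3))  # ~ 1.0
# Case 4: cloud-dominated (nu -> inf): approaches 1/(1 + mu*c)
print(round(eps_opt(mu=3.0, nu=1e9, c=c), 3))    # ~ 1/7 = 0.143
```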
(3) Computing-resource allocation strategy: substituting the optimal task offloading ratio ε*_{i,j} into the delay formula, the offloading delay simplifies to

t*_{i,j}(f^e_{i,j}, f^c_{i,j}) = ε*_{i,j} s_{i,j} c_{i,j} / f^e_{i,j} = s_{i,j} c_{i,j} (1/H_j + c_{i,j}/f^c_{i,j}) / ( c_{i,j} + f^e_{i,j} (1/H_j + c_{i,j}/f^c_{i,j}) )        (15)

Problem Q3 is rewritten as

Q4:  min_{f^e, f^c}  Σ_{(i,j): y_{i,j}=0} ( t^{tr}_{i,j} + t*_{i,j}(f^e_{i,j}, f^c_{i,j}) )   s.t. (7a), (7b), (7d)        (16)

Theorem 1: Q4 is a convex optimization problem.
Proof: it suffices to show that the objective function and its constraints are convex. Constraints (7a), (7b) and (7d) are affine, hence convex. It remains to show that the objective function is also convex. Let W denote the Hessian matrix of t*_{i,j} with respect to (f^e_{i,j}, f^c_{i,j}); mathematical calculation shows that all leading principal minors of W are positive. According to linear algebra theory, a matrix whose leading principal minors are all positive is positive definite; hence t*_{i,j} is a convex function of the variables f^e_{i,j} and f^c_{i,j}. The objective function, being the sum of a series of convex functions, is therefore also convex. Theorem 1 is proved.
According to Theorem 1, Q4 satisfies the KKT conditions, and the optimal allocation of computing resources in the cloud-edge collaborative system is obtained with Lagrange multipliers: introducing multipliers β and θ for constraints (7a) and (7b), the optimal allocations f^{e*}_{i,j} and f^{c*}_{i,j} are the stationary points of the resulting Lagrangian, where β* and θ* are the optimal Lagrange multipliers.
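As a hedged illustration of the KKT-based allocation (not the patent's full coupled solution), consider the simpler separable case where each task's offloading delay behaves like a_i / f_i for a fixed split, with a single capacity constraint Σ f_i = F. The KKT conditions then give a square-root allocation; the sketch below compares it with an equal split, using invented values:

```python
import math

def sqrt_allocation(a, F):
    """KKT solution of min sum(a_i / f_i) s.t. sum f_i = F, f_i > 0:
    f_i* = F * sqrt(a_i) / sum_k sqrt(a_k)."""
    roots = [math.sqrt(x) for x in a]
    total = sum(roots)
    return [F * r / total for r in roots]

def aggregate_delay(a, f):
    """Sum of per-task delays a_i / f_i under allocation f."""
    return sum(x / y for x, y in zip(a, f))

# a_i ~ eps_i * s_i * c_i: cycle demand of each task's edge share (illustrative)
a = [2e8, 5e8, 1e8, 8e8]
F = 1e10  # edge-node capacity F_j^e

f_opt = sqrt_allocation(a, F)
f_equal = [F / len(a)] * len(a)
print(round(aggregate_delay(a, f_opt), 4), round(aggregate_delay(a, f_equal), 4))
```

The optimal allocation spends the budget in proportion to the square root of each task's demand, which is why it strictly beats the equal split whenever the demands differ.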
The invention has the following advantages and beneficial technical effects:
The invention provides a Cloud-Edge cooperative task offloading and resource allocation (Cloud-Edge-TORD) scheme, which establishes a cloud-edge collaborative system and an MEC cache model and analyzes the delays present in the cloud-edge collaborative system. Under the condition of limited resources, the goal of minimizing the total delay of all mobile devices is achieved through task offloading and computing-resource allocation; a problem model is established and decomposed to obtain the task caching, offloading and resource allocation strategies.
Based on the established cloud-edge collaborative system and MEC cache model, the invention analyzes the delays in the cloud-edge collaborative system with the objective of minimizing them. Simulation shows that in all cases the Cloud-Edge-TORD scheme performs best, because by executing the cloud-edge cooperative strategy, the task offloading and resource allocation strategy optimally allocates the computing capacities of the edge nodes and the cloud server, minimizing the average system delay.
Drawings
In order to more clearly illustrate the technical solution of the embodiments of the present invention, the following description will briefly explain the drawings of the embodiments of the present invention. Wherein the showings are for the purpose of illustrating some embodiments of the invention only and not for the purpose of limiting the same.
FIG. 1 is a schematic diagram of a cloud edge collaboration system of the present invention;
FIG. 2 is a schematic diagram of a task execution flow of the cloud edge cooperative system of the present invention;
FIG. 3 is a schematic diagram of an MEC cache model according to the present invention;
FIG. 4 is a process diagram of a cache vector algorithm in the present invention;
FIG. 5 is a graph of time delay versus 4 schemes when the computing power of the cloud server of the present invention increases;
fig. 6 is a time delay comparison diagram of 4 schemes when the computing power of the cloud server is reduced.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Example 1
The invention relates to an LTE power wireless private network task offloading and resource allocation method based on cloud-edge cooperation, which computes all device delays and establishes a mathematical model for the problem of minimizing the total delay of all mobile devices over the four different delays present in the cloud-edge cooperative system. The invention considers task offloading schemes in four different system scenarios and solves for the optimal allocation of computing resources in the cloud-edge cooperative system.
The invention discloses a cloud-edge cooperation-based LTE power wireless private network task offloading and resource allocation method, which comprises the following steps:
step 1, establishing a cloud edge cooperative system;
step 2, establishing an MEC cache model;
step 3, analyzing the time delay existing in the cloud edge cooperative system;
step 4, constructing a problem model according to the time delay analysis result;
step 5, decomposing the constructed problems;
and 6, providing a solution to the constructed problem.
In the invention, in step 1, a cloud-edge cooperative system is established, as shown in fig. 1; fig. 1 is a schematic diagram of the cloud-edge cooperative system of the invention. The cloud-edge cooperative system is composed of one central cloud server and M SCeNBs, represented by the set M = {1, 2, ..., M}. The SCeNB represents a small cell base station. Each SCeNB deploys an MEC server that uses limited resources for data processing, caching and storage. Each combination of an SCeNB and an MEC server is called an edge node. Within the coverage area of the jth SCeNB there is a set χ_j of mobile devices, and each mobile device has a computing task with different latency requirements. Assuming that each user has been connected to a base station, the specific connection relationship may be determined by some user communication policy. In addition, each mobile device connects to its base station over a wireless channel, and the edge nodes transmit data to the cloud server over different backhaul links. In this system, it is assumed that each computing task can be processed on both edge nodes and cloud servers, following the model in [1] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, "A survey on mobile edge computing: The communication perspective," IEEE Commun. Surv. Tut., vol. 19, no. 4, pp. 2322–2358, Aug. 2017.
We assume that all tasks are of the same type and arrive at the same time, so q_{i,j} = (s_{i,j}, c_{i,j}) can be used to represent the computing task generated by the ith mobile device connected to the jth edge node, where s_{i,j} represents the size of the computing task and c_{i,j} represents the CPU computation cycles required to compute this task (per bit of task data). The total number of tasks requested by the system in a certain time period is represented by N. Furthermore, the computing capability of the MEC server in the jth edge node and that of the cloud server are defined as F^e_j and F^c, respectively. The computing resources of each MEC server and of the cloud server may be allocated to the mobile devices through virtual machine technology. Fig. 2 is a schematic diagram of the task execution flow of the cloud-edge cooperative system according to the present invention, as shown in fig. 2.
Step 2, establishing an MEC cache model, as shown in fig. 3; fig. 3 is a schematic diagram of the MEC cache model according to the present invention. For each task, the cache vector of the MEC server may be defined as Y_{i,j} = [y_{1,1}, y_{1,2}, ..., y_{i,j}]. If y_{i,j} = 1, the MEC server has cached the calculation result of the task, while y_{i,j} = 0 indicates that the corresponding task result is not cached. In addition, when y_{i,j} = 1, the MEC server directly transmits the calculation result to the mobile device without performing the calculation. Since the transmission power of the SCeNB is much larger than that of the mobile device and the data amount of the calculation result is much smaller than that of the task itself, the transmission delay on the wireless downlink can be ignored, and the transmission delay required for the MEC caching scheme is considered to be 0. The MEC cache model can directly return the results of frequently requested tasks to the mobile devices, effectively reducing delay and energy consumption.
In the delay analysis of step 3, in the cloud-edge-end task allocation model, it is assumed that each mobile device does not process tasks directly and performs no local computation, owing to its limited computing capacity and battery power. The mobile device first sends a task request to determine whether the result of the task is in the cache. If so, the task result is returned directly; if not, the mobile device uploads the task to the edge node. Then, whether the task is handled by the edge node alone or jointly with the cloud server is decided by the corresponding edge node. In the latter case, the edge node is also responsible for determining the offload ratio of each task between cloud and edge.
And 3, analyzing time delay existing in the cloud edge cooperative system, and specifically comprising the following steps:
step (1) the mobile device first sends a corresponding task request to the connected edge node, which is equivalent to that the https protocol only sends the head part of the data at first, so as to judge whether the data is wanted by itself, and the edge node judges whether the result of the task exists in the cache. If the task exists, directly returning a task result, and ending task processing; otherwise, the next step is continued.
Step (2): the mobile device directly uploads the entire delay-sensitive task to the connected edge node over the wireless channel, without local computation.
Step (3): the MEC server at each edge node divides the received task into two parts: one part is left on the MEC server, and the other part is offloaded to the cloud server.
Step (4): the MEC server allocates its available computing resources to each computing task and simultaneously uploads the partial data to the cloud server over the backhaul link.
Step (5): the cloud server likewise realizes parallel computation by allocating computing resources to the corresponding computing tasks.
Step (6): finally, each edge node collects the calculation results, caches them through the task caching strategy, and returns the results to each mobile device.
It should be noted that the time to divide the tasks is very short compared to the corresponding computation and communication delays and therefore negligible. Furthermore, the amount of data for both task requests and computation results is very small, so the send and return delays can be ignored, which corresponds to many practical computing scenarios, such as face recognition, virus detection, and video analysis, etc.
Wherein the four delays present in the calculation process include the following:
a. Transmission delay of the mobile device: as mentioned above, each computing task is uploaded to the corresponding edge node via a wireless channel. According to the Shannon formula, the upload rate r_{i,j} of MT_i can be expressed as:

r_{i,j} = B log_2(1 + p_i g_{i,j} / (σ^2 + I_{i,j}))    (1)

where B represents the channel bandwidth, p_i the transmission power of MT_i, σ^2 the noise power at the mobile device, I_{i,j} the interference power in the cell, and g_{i,j} the channel gain of the communication.
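The Shannon-rate expression above can be sketched numerically. In the following minimal Python sketch, B, g_{i,j} and σ^2 follow the embodiment's settings, while the transmit power p_i and interference I_{i,j} are assumed values for illustration only:

```python
import math

def upload_rate(B, p_i, g_ij, sigma2, I_ij):
    # Shannon formula: r_ij = B * log2(1 + p_i * g_ij / (sigma2 + I_ij))
    return B * math.log2(1.0 + (p_i * g_ij) / (sigma2 + I_ij))

# B = 20 MHz, g_ij = 1e-5 and sigma^2 = 1e-9 follow the embodiment;
# p_i = 0.1 W and I_ij = 1e-9 W are assumptions for illustration.
r_ij = upload_rate(B=20e6, p_i=0.1, g_ij=1e-5, sigma2=1e-9, I_ij=1e-9)
```

As expected from the formula, a higher transmit power or channel gain increases r_{i,j}, while stronger noise or interference decreases it.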
Note that the SCeNB communicates with the MEC server via optical fiber; the transmission rate c between the two is much greater than the upload rate r_{i,j} of MT_i, and the distance between them is very small, so the SCeNB-MEC communication delay can be ignored. Using t^{tr}_{i,j} to denote the delay generated when the mobile device uploads task q_{i,j} to SCeNB_j:

t^{tr}_{i,j} = s_{i,j} / r_{i,j}    (2)

where r_{i,j} is the upload rate of MT_i and s_{i,j} represents the size of the computing task.
b. Computation delay of the edge node: after the edge node successfully receives the complete computing task sent from the mobile device, the MEC server immediately executes the offloading policy, dividing each computing task into two parts: one part is executed by the MEC server and the other by the cloud server. It is assumed here that each computing task can be arbitrarily split without regard to task content, which corresponds to video compression and speech recognition scenarios. Define ε_{i,j} ∈ [0, 1] as the division ratio of the computing task, where ε_{i,j} represents the proportion of computing task data executed at the MEC server. Using f^{e}_{i,j} to represent the computing resources allocated by the jth edge node to the ith mobile device, the delay incurred in executing the task at the edge node can be expressed as:

t^{e}_{i,j} = ε_{i,j} s_{i,j} c_{i,j} / f^{e}_{i,j}    (3)

where c_{i,j} represents the CPU computation cycles required to compute this task.
c. Transmission delay of the edge node: for each edge node, the communication module (a transceiver) and the computing module (CPU/GPU) are typically separate.
Thus, in the edge node, the computation of a task and its transmission may be performed in parallel. All edge nodes are connected to the cloud server through different backhaul links, which are typically equipped with very high bandwidth. In practice, the backhaul link is shared by users, so its delay is difficult to model exactly due to the randomness of packet arrivals, multi-user scheduling, complex routing algorithms, and other factors. Since an optimal cooperative strategy of edge and cloud computing is proposed here, it is assumed that the resource scheduling strategy and routing algorithm are fixed, and H_j denotes the backhaul communication capacity of each device associated with the jth edge node. Similar to the average transmission delay in (2), the average backhaul transmission delay is proportional to the size of the data transmitted and can be expressed as:

t^{b}_{i,j} = (1 - ε_{i,j}) s_{i,j} / H_j    (4)

where 1/H_j represents the time required to transmit 1 bit of data over the backhaul link, s_{i,j} represents the size of the computing task, and ε_{i,j} the proportion of task data executed at the MEC server.
d. Computation delay of the cloud server: when the cloud server successfully receives the task data sent from the edge node, it allocates available computing resources to each task to achieve parallel processing. (1 - ε_{i,j}) s_{i,j} c_{i,j} represents the number of CPU cycles, i.e. the computing resources, required by the part of the task offloaded for execution to the cloud server. Using f^{c}_{i,j} to represent the cloud computing resources allocated to the ith mobile device served by the jth edge node, the computation delay of the cloud server can be expressed as:

t^{c}_{i,j} = (1 - ε_{i,j}) s_{i,j} c_{i,j} / f^{c}_{i,j}    (5)
Step 4, aiming at a time delay analysis result, constructing a problem model:
The invention analyzes the four different delays present in the cloud-edge cooperative system: the transmission delay t^{tr}_{i,j} of the mobile device, the computation delay t^{e}_{i,j} of the edge node, the transmission delay t^{b}_{i,j} of the edge node, and the computation delay t^{c}_{i,j} of the cloud server. In order to determine the total delay generated by each device, several reasonable assumptions are made:
suppose 1: since the offloading of tasks depends on specific parameters of each task, such as its data size and the amount of work required for computation, the MEC servers in the edge nodes can offload tasks by the task offloading policy only after receiving the task data.
Suppose 2: in a practical system, task computation may depend on a specific data structure and correlation between neighboring data, such as video analysis in a multimedia system. In order to ensure the reliability of the calculation result, the cloud server cannot start processing tasks until the transmission between the edge node and the cloud server is finished.
Based on the above assumptions, the edge computation proceeds in parallel with the backhaul transmission, while the cloud computation cannot start until the backhaul transmission finishes. The total delay incurred by the ith mobile device served by the jth edge node can therefore be expressed as:

T_{i,j} = t^{tr}_{i,j} + max( t^{e}_{i,j}, t^{b}_{i,j} + t^{c}_{i,j} )    (6)

where t^{tr}_{i,j} denotes the delay of the mobile device uploading task q_{i,j} to SCeNB_j, t^{e}_{i,j} the delay generated by executing the task at the edge node, t^{b}_{i,j} the delay of the edge node uploading the task to the cloud server, and t^{c}_{i,j} the delay of the computing task being processed at the cloud server.
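The way the four delay components combine, including the cached case, can be sketched as follows; the function and parameter names are illustrative, not from the patent:

```python
def total_delay(s, c, r, f_e, f_c, H, eps, y=0):
    """Per-task delay: cached tasks (y = 1) return immediately; otherwise the
    wireless upload delay plus the larger of the edge-computation branch and
    the backhaul-transmission-plus-cloud-computation branch."""
    if y == 1:
        return 0.0                     # result served from the MEC cache
    t_tr = s / r                       # transmission delay of the mobile device
    t_e  = eps * s * c / f_e           # computation delay of the edge node
    t_b  = (1.0 - eps) * s / H         # backhaul transmission delay
    t_c  = (1.0 - eps) * s * c / f_c   # computation delay of the cloud server
    return t_tr + max(t_e, t_b + t_c)

# Example: a 1-Mbit task at 1000 cycles/bit, with assumed rates and capacities.
T = total_delay(s=1e6, c=1000, r=1e8, f_e=1e10, f_c=1e11, H=1e7, eps=0.5)
```

The `max` reflects the assumption that the edge branch and the backhaul-plus-cloud branch run in parallel and the device waits for the slower of the two.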
The invention aims to minimize the total delay of all mobile devices through task offloading and computing resource allocation under the condition of limited resources. The goal can finally be expressed as:

Q1:  min_{ε, f^e, f^c}  Σ_{j∈M} Σ_{i∈χ_j} (1 - y_{i,j}) T_{i,j}
s.t.  (7a) Σ_{i∈χ_j} f^{e}_{i,j} ≤ F^{e}_{j}, ∀j ∈ M
      (7b) Σ_{j∈M} Σ_{i∈χ_j} f^{c}_{i,j} ≤ F^{c}
      (7c) f^{e}_{i,j} ≥ 0, f^{c}_{i,j} ≥ 0
      (7d) 0 ≤ ε_{i,j} ≤ 1

where y_{i,j} represents the cache state of the task. Constraints (7a) and (7b) state that neither the edge computing resources nor the cloud computing resources allocated to the mobile devices may exceed the available maxima. The optimization variables include the task offload ratios {ε_{i,j}} and the computing resource allocations {f^{e}_{i,j}, f^{c}_{i,j}}.
Decomposing the constructed problem in the step 5 comprises the following steps:
To solve problem Q1, the structural features of the objective function are first analyzed. According to equation (2), the transmission delay of the mobile device is related only to the properties of the transmitted task itself and involves no optimization variables. Meanwhile, the transmission delay from the edge node to the cloud server, the computation delay of the edge node, and the computation delay of the cloud server are all unrelated to the transmission delay of the mobile device. Thus, Q1 can be decomposed into two parts: the fixed total upload delay, and the remaining delay terms that depend on the optimization variables.
the task offloading and computing resource allocation discussed in this invention only affects the solution of the latter part, so the problem can also be divided:
wherein the method comprises the steps of
The solution to the constructed problem described in step 6 includes:
(1) Task caching strategy.
Before the task offloading decision is made, problem Q2 can be further simplified: the cache vector Y_{i,j} = [y_{1,1}, y_{1,2}, ..., y_{i,j}] is obtained using the task caching policy proposed in the previous section. The algorithm process is shown in fig. 4:
Assume that the number of requests of the mobile devices for task q follows a Poisson distribution, satisfying P(c) = (λ^c / c!) e^{-λ}, where c represents the number of requests for a task and λ represents the average arrival rate of tasks per unit time. The vector set Y is empty at initialization. For each task, the task type is first determined according to the Poisson distribution, and it is then checked whether the task exists in the task cache table. If so, the MEC server directly returns the execution result to the mobile device and y_{i,j} = 1. If the task is not in the cache table but is recorded in the task history table, the task does not need to be processed again: y_{i,j} is set to 1, the execution result is returned directly, the task is stored into the task cache table, and the task history table is updated. If neither table records task q_{i,j}, then y_{i,j} is set to 0. Traversing the task space in this way finally yields the vector set Y_{i,j} = [y_{1,1}, y_{1,2}, ..., y_{i,j}].
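The caching pass described above can be sketched in a few lines. In practice the request stream would be drawn from the Poisson model; a fixed stream and illustrative names are used here for clarity:

```python
def build_cache_vector(request_stream):
    """One pass over the task requests: y = 1 when the result is served from
    the cache table, or when the task was seen before and is promoted from the
    history table into the cache table; y = 0 for first-time tasks."""
    cache_table, history_table = set(), set()
    y = []
    for q in request_stream:
        if q in cache_table:          # cache hit: return the stored result
            y.append(1)
        elif q in history_table:      # seen before: serve and cache it now
            cache_table.add(q)
            y.append(1)
        else:                         # first request: record in history only
            history_table.add(q)
            y.append(0)
    return y

Y = build_cache_vector(["q1", "q2", "q1", "q3", "q2", "q1"])  # -> [0, 0, 1, 0, 1, 1]
```

Repeated tasks thus pay the full offloading delay at most once before their results are served from the cache.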
Using the vector set Y_{i,j} = [y_{1,1}, y_{1,2}, ..., y_{i,j}] to simplify Q2, problem Q3 is obtained, in which only the tasks with y_{i,j} = 0 remain to be offloaded and allocated resources.
(2) Task offloading policies.
Problem Q3 is still complex and does not admit a direct solution. Therefore, the optimal task offloading ratio ε_{i,j} is determined first, holding the values of f^{e}_{i,j} and f^{c}_{i,j} unchanged during the analysis.
The optimal division ratio ε*_{i,j} can be determined by analyzing the monotonicity of the delay with respect to the task allocation ratio ε_{i,j}. First, from t^{e}_{i,j} = ε_{i,j} s_{i,j} c_{i,j} / f^{e}_{i,j}, it can be inferred that the edge computation delay is monotonically increasing in ε_{i,j}, so over ε_{i,j} ∈ [0, 1] its range is [0, s_{i,j} c_{i,j} / f^{e}_{i,j}]. Next, from t^{b}_{i,j} + t^{c}_{i,j} = (1 - ε_{i,j}) s_{i,j} / H_j + (1 - ε_{i,j}) s_{i,j} c_{i,j} / f^{c}_{i,j}, the backhaul-plus-cloud delay is monotonically decreasing in ε_{i,j}, and its range follows likewise. Combining the two, max(t^{e}_{i,j}, t^{b}_{i,j} + t^{c}_{i,j}) first decreases and then increases with ε_{i,j}, and therefore attains its minimum when t^{e}_{i,j} = t^{b}_{i,j} + t^{c}_{i,j}; solving this equality yields the optimal ratio ε*_{i,j}.
For ease of illustration, two important parameters are defined for each mobile device.
(2.1) The normalized backhaul communication capacity is defined as the ratio of the backhaul communication capacity to the edge computing capability, i.e. μ_{i,j} = H_j c_{i,j} / f^{e}_{i,j}.
(2.2) The normalized cloud computing capacity is defined as the ratio of the cloud computing capacity to the edge computing capacity, i.e. ν_{i,j} = f^{c}_{i,j} / f^{e}_{i,j}.
Based on the above definitions, we can obtain the optimal task offloading strategy. In the cloud-edge cooperative system, the optimal task offloading ratio ε*_{i,j} can be expressed as:

ε*_{i,j} = (μ_{i,j} + ν_{i,j}) / (μ_{i,j} + ν_{i,j} + μ_{i,j} ν_{i,j})    (10)

where μ_{i,j} represents the ratio of the backhaul communication capacity to the edge computing capacity and ν_{i,j} the ratio of the cloud computing capacity to the edge computing capacity.
Equation (10) shows that the optimal task offloading strategy depends only on two ratios: the normalized backhaul communication capacity and the normalized cloud computing capacity; moreover, the optimal task splitting strategy is determined by the harmonic mean of these two ratios. It is easily verified that the proportion of task data processed at the edge node decreases as μ_{i,j} or ν_{i,j} increases. Thus, when a mobile device is allocated few edge computing resources while having sufficient cloud computing resources, it should offload more task data to the cloud server. Conversely, if cloud computing resources are very scarce, the edge node should be allowed to process more task data. From this, a conclusion can readily be reached: offloading more tasks to a more powerful server is an effective way to reduce the overall delay of a mobile device.
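Under the assumption that equation (10) takes the harmonic-mean form discussed above, the optimal ratio can be checked numerically against a brute-force search over ε; all names and values below are illustrative:

```python
def eps_star(H, c, f_e, f_c):
    # eps* = (mu + nu) / (mu + nu + mu*nu), with normalized backhaul
    # capacity mu = H*c/f_e and normalized cloud capacity nu = f_c/f_e.
    mu = H * c / f_e
    nu = f_c / f_e
    return (mu + nu) / (mu + nu + mu * nu)

def offload_delay(eps, s, c, f_e, f_c, H):
    # Edge branch runs in parallel with the backhaul + cloud branch.
    return max(eps * s * c / f_e, (1 - eps) * (s / H + s * c / f_c))

H, c, f_e, f_c, s = 1e7, 1000, 1e10, 1e11, 1e6
e_opt = eps_star(H, c, f_e, f_c)          # mu = 1, nu = 10 -> 11/21
e_grid = min((i / 1000 for i in range(1001)),
             key=lambda e: offload_delay(e, s, c, f_e, f_c, H))
```

The grid minimizer lands within one grid step of the closed-form value, consistent with the delay being unimodal in ε.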
The 4 different systems that exist for equation (10) are as follows:
1st: communication-limited system: the edge nodes and cloud servers have sufficient computing resources but insufficient communication capacity.
In this case, μ_{i,j} ≪ ν_{i,j}. This occurs when one edge node connects a large number of mobile devices and the capacity of the backhaul link is insufficient. The optimal task offload ratio then reduces to:

ε*_{i,j} = 1 / (1 + μ_{i,j})    (11)

Equation (11) shows that in this case the optimal task offload ratio is determined only by the normalized backhaul communication capacity and is not affected by the normalized cloud computing capacity. Here the communication resource is the main bottleneck limiting the end-to-end delay of each device, so the relative sizes of edge and cloud computing power are comparatively unimportant. Consider the special case μ_{i,j} → 0: the optimal task offload ratio ε*_{i,j} is then 1, indicating that all computing tasks are executed at the edge node and no task data is uploaded to the cloud server for execution. This matches our expectation that tasks are not offloaded to the cloud server in the extreme case: when the communication capacity of the backhaul link is insufficient, offloading tasks to the cloud server aggravates network congestion and results in longer transmission delays.
2nd: computation-limited system: the edge nodes and cloud servers have sufficient communication capacity but insufficient computing resources.
In this case, μ_{i,j} ≫ ν_{i,j}, and the optimal task offload ratio reduces to:

ε*_{i,j} = 1 / (1 + ν_{i,j})    (12)

From this formula, the optimal task offload ratio is determined only by the normalized cloud computing capacity. In this case, since the backhaul communication capacity is relatively sufficient, the delay generated by communication becomes very small, and the computation delay determines the overall delay of the mobile device. Note that the edge node and cloud server can here be regarded as a whole, so the task should be split in proportion to their ratio of computing power, i.e. ν_{i,j}. More specifically, if the computing power of the edge node is greater than that of the cloud server, i.e. ν_{i,j} < 1, more data is offloaded to the edge node; otherwise, more data should be offloaded to the cloud. In the special case ν_{i,j} = 1, the data should be equally divided between the edge node and the cloud server.
3rd: edge-oriented system: the edge node allocates far more computing resources to the mobile device than the cloud server, i.e. f^{e}_{i,j} ≫ f^{c}_{i,j}, so that ν_{i,j} → 0. This case corresponds to a large-scale small-cell network, where one cloud server serves many edge nodes. In this system, the optimal task offload ratio reduces to:

ε*_{i,j} = 1    (13)

This indicates that the entire task should be executed on the edge node without being offloaded to the cloud server, because when the cloud computing power is much less than that of the edge node, offloading task data to the cloud server creates additional transmission delay and results in a longer computation delay. Even if the normalized backhaul communication capacity is large enough, the cloud computation delay would still dominate the overall delay. Thus, the entire task needs only to be offloaded to the edge node.
4th: cloud-oriented system: the cloud server has sufficient computing resources while the edge node's computing resources are limited, i.e. f^{c}_{i,j} ≫ f^{e}_{i,j}, so that ν_{i,j} → ∞. Such a system corresponds to a scenario where the cloud server is computationally powerful and the edge nodes are computationally weak. Substituting ν_{i,j} → ∞ into equation (10) gives:

ε*_{i,j} = 1 / (1 + μ_{i,j})    (14)

From this formula, the optimal task offload ratio is determined only by the normalized backhaul communication capacity. If the backhaul communication capacity H_j increases or the edge node computing power f^{e}_{i,j} decreases, the optimal task offload ratio decreases accordingly. This is because when ν_{i,j} → ∞, the delay generated by the cloud executing the task is negligible compared with that generated by the edge node executing the task. Thus, when μ_{i,j} < 1, the delay generated by the backhaul transmission dominates the overall delay of the mobile device, leading to a larger proportion of task data being processed at the edge side, i.e. ε*_{i,j} > 1/2; conversely, when μ_{i,j} > 1, more data should be offloaded to the cloud server for execution.
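The four limiting regimes can be verified directly from the harmonic-mean form of equation (10). The following numeric sketch uses large and small stand-in values for the limits μ ≫ ν, μ ≪ ν, ν → 0 and ν → ∞ (all values assumed for illustration):

```python
def eps_star_mn(mu, nu):
    # Optimal offload ratio as a function of the two normalized capacities.
    return (mu + nu) / (mu + nu + mu * nu)

BIG, TINY = 1e12, 1e-12
case_comm  = eps_star_mn(0.5, BIG)   # mu << nu: -> 1/(1+mu), eq. (11)
case_comp  = eps_star_mn(BIG, 0.5)   # mu >> nu: -> 1/(1+nu), eq. (12)
case_edge  = eps_star_mn(0.5, TINY)  # nu -> 0:  -> 1,        eq. (13)
case_cloud = eps_star_mn(0.5, BIG)   # nu -> inf: -> 1/(1+mu), eq. (14)
```

As the text notes, the communication-limited and cloud-oriented limits reduce to the same expression, determined by the normalized backhaul capacity alone.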
(3) Computing resource allocation strategy.
First, substituting the optimal task offloading ratio ε*_{i,j} obtained in the previous section into the delay expression, the computation-related delay simplifies to ε*_{i,j} s_{i,j} c_{i,j} / f^{e}_{i,j}, and problem Q3 can be rewritten as problem Q4: the minimization of this delay over the computing resource allocations {f^{e}_{i,j}, f^{c}_{i,j}} subject to constraints (7a) and (7b).
Theorem 1: Q4 is a convex optimization problem.
Proof: it suffices to show that the objective function and its constraints are convex functions, whereupon the theorem follows. Inspection shows that constraints (7a) and (7b) are linear inequality constraints and that constraints (7c) and (7d) are affine, so the feasible region is convex. It therefore only remains to prove that the objective function is also a convex function.
The Hessian matrix W of the objective function is computed, and all of the leading principal minors of W can be obtained by simple mathematical calculation.
According to linear algebra theory, a matrix is positive definite when all of its leading principal minors are positive. It follows that W is positive definite, i.e. the objective is a convex function of the variables f^{e}_{i,j} and f^{c}_{i,j}. Since the objective function is the sum of a series of convex functions, it is itself also a convex function. Theorem 1 is proved.
According to Theorem 1, Q4 satisfies the KKT conditions, and the Lagrange multiplier method can then be used to compute the optimal allocation of computing resources in the cloud-edge cooperative system, where λ and θ are the optimal Lagrange multipliers.
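The patent's closed-form allocation itself is not reproduced here; the following sketch only illustrates the kind of result the KKT conditions yield for a separable objective of the form Σ a_i/f_i under a single capacity constraint Σ f_i ≤ F (all names assumed): stationarity -a_i/f_i² + θ = 0 gives f_i = √(a_i/θ), and the tight capacity constraint fixes θ.

```python
import math

def kkt_allocation(workloads, F):
    # Minimize sum(a_i / f_i) s.t. sum(f_i) <= F, f_i > 0.
    # Stationarity gives f_i proportional to sqrt(a_i); the capacity
    # constraint then yields f_i = F * sqrt(a_i) / sum(sqrt(a_k)).
    roots = [math.sqrt(a) for a in workloads]
    total = sum(roots)
    return [F * r / total for r in roots]

alloc = kkt_allocation([1.0, 4.0], F=3.0)   # -> [1.0, 2.0]
```

Heavier workloads receive proportionally (square-root-weighted) more capacity; any other split, e.g. the equal split [1.5, 1.5], gives a strictly larger total delay for this example.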
Example 2
In the invention, the proposed Cloud-Edge cooperative Task Offloading and Resource allocation (Cloud-Edge-TORD) scheme under the cloud-edge cooperative framework is simulated and verified using the MATLAB simulation tool. To verify the superiority of this scheme, it is compared with three other schemes. Scheme 1: Only Edge, in which tasks are all offloaded to the MEC server and processed entirely by the MEC server. Scheme 2: Only Cloud, in which tasks are all offloaded to the cloud server and processed by the cloud server. Scheme 3: Simple fixed cloud and edge, which has no effective task allocation policy and simply divides each task in half, one half executed by the MEC server and the other half by the cloud server.
The parameters of this embodiment are set as follows:
In the simulation experiments, it is assumed that each MEC server serves a fixed 25 mobile devices and that the coverage radius of each MEC server is 500 meters. The task data size and the number of computation cycles required for execution follow uniform distributions, with ranges s_{i,j} ∈ [0.2, 1] Mbits and c_{i,j} ∈ [500, 2000] cycles/bit. The computing capability of the MEC server connected to each SCeNB is F^{e}_{j}; the computing capability of the cloud server F^{c} ∈ [200, 600] GHz; the transmission bandwidth of the mobile devices B = 20 MHz; the channel gain and noise power of the communication are g_{i,j} = 10^{-5} and σ^2 = 10^{-9} W, respectively; the transmission rate of the optical fiber c = 1 Gbps; and the capacity of the backhaul link H_j ∈ [5, 50] Mbps.
The performance analysis of this example is as follows:
As shown in fig. 5, fig. 5 is a delay comparison diagram of the 4 schemes when the computing power of the cloud server increases; the figure compares the delays of the 4 schemes as the number of mobile devices changes. As can be seen from the figure, the delay of the Only Edge scheme always stays within a small range: each MEC server serves a fixed 25 mobile devices, so as the number of mobile devices increases, the number of MEC servers grows in proportion and the computing resources allocated to each mobile device remain approximately unchanged, keeping the delay within a fixed range. In addition, when the number of mobile devices is small, the Only Cloud scheme performs better than the Only Edge and Simple fixed cloud and edge schemes, because with few mobile devices there are few edge nodes, so the cloud computing resources allocated to each mobile device always exceed the edge computing resources; but as the number of edge nodes increases, the Only Edge scheme outperforms the Only Cloud scheme because the cloud computing resources are limited. Especially when the number of mobile devices is very large, the Only Edge scheme performs better than the Simple fixed cloud and edge scheme, which means that more computing tasks should be offloaded to the edge side rather than the cloud. In all cases, the Cloud-Edge-TORD scheme performs best, because by executing the cloud-edge cooperative strategy, the proposed task offloading and resource allocation strategy optimally allocates the computing capacities of the edge nodes and the cloud server and minimizes the average system delay.
The average system delay of the different schemes is compared in fig. 6, a delay comparison diagram of the 4 schemes when the computing power of the cloud server is reduced, giving the delay comparison of the 4 schemes as the computing capacity of the cloud server changes. As the computing power of the cloud server increases, the delay generated by a system employing the Only Cloud scheme is greatly reduced; in this case, offloading tasks to the cloud server is a better choice than offloading them to the MEC server, as it effectively removes a substantial part of the delay. Conversely, as the computing power of the cloud server becomes smaller, the Only Edge scheme performs better than the Only Cloud scheme. Similar to fig. 5, the proposed Cloud-Edge-TORD scheme is always optimal among all schemes.
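The four schemes compared in figs. 5 and 6 can be sketched in a few lines. This is a toy Python re-implementation under assumed, equal per-task resource shares, not the MATLAB experiment itself, but it reproduces the qualitative ordering, with the TORD-style split never worse than the fixed policies:

```python
import random

def per_task_delay(eps, s, c, f_e, f_c, H, r):
    return s / r + max(eps * s * c / f_e, (1 - eps) * (s / H + s * c / f_c))

def avg_delay(scheme, tasks, f_e, f_c, H, r):
    total = 0.0
    for s, c in tasks:
        if scheme == "only_edge":        # everything at the MEC server
            eps = 1.0
        elif scheme == "only_cloud":     # everything at the cloud server
            eps = 0.0
        elif scheme == "fixed_half":     # simple fixed cloud-and-edge split
            eps = 0.5
        else:                            # TORD-style optimal split, eq. (10)
            mu, nu = H * c / f_e, f_c / f_e
            eps = (mu + nu) / (mu + nu + mu * nu)
        total += per_task_delay(eps, s, c, f_e, f_c, H, r)
    return total / len(tasks)

random.seed(0)
# Task sizes and cycle counts drawn from the embodiment's uniform ranges.
tasks = [(random.uniform(0.2e6, 1e6), random.uniform(500, 2000)) for _ in range(200)]
delays = {name: avg_delay(name, tasks, f_e=5e9, f_c=5e10, H=2e7, r=1e8)
          for name in ("only_edge", "only_cloud", "fixed_half", "tord")}
```

Since the TORD-style ε minimizes the parallel-branch delay for each task individually, its average delay is a lower bound on the other three schemes for any draw of tasks.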

Claims (5)

1. An LTE power wireless private network task offloading and resource allocation method based on cloud-edge cooperation, characterized by comprising the following steps:
step 1, establishing a cloud-edge cooperative system; the different delays in the cloud-edge cooperative system include: the transmission delay of the mobile device, the computation delay of the edge node, the transmission delay of the edge node, and the computation delay of the cloud server;
step 2, establishing an MEC cache model;
Step 3, analyzing the delays present in the cloud-edge cooperative system, comprising: the mobile device sends a corresponding task request to the connected edge node to judge whether the data is the one it wants, and the edge node judges whether the result of the task exists in the cache; if so, the task result is returned and task processing ends; otherwise, the next step continues; the mobile device directly uploads the entire delay-sensitive task to the connected edge node through the wireless channel; the MEC server at each edge node leaves one part of the received task on the MEC server and offloads the other part to the cloud server; the MEC server allocates available computing resources to each computing task, and simultaneously uploads the partial data to the cloud server through the backhaul link; the cloud server allocates computing resources to the corresponding computing tasks to realize parallel computing; each edge node collects the calculation results, caches them through the task caching strategy, and returns them to each mobile device;
step 4, constructing a problem model according to the time delay analysis result;
step 5, decomposing the constructed problem, and decomposing Q1 into two parts;
step 6, solving the constructed problem for the optimal allocation of computing resources in the cloud-edge collaborative system, comprising: a task caching strategy, a task offloading strategy and a computing resource allocation strategy;
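The request-processing flow of step 3 above (cache lookup, task split, parallel edge/cloud execution, result caching) can be sketched as follows; the function names and the string-based "tasks" are illustrative assumptions, not part of the claimed method.

```python
def handle_request(task, cache, split, edge_exec, cloud_exec):
    """Sketch of the step-3 flow: a cache hit returns the stored result
    immediately; otherwise the task is split into an edge part and a
    cloud part, both parts are executed, and the merged result is
    cached before being returned to the mobile device."""
    if task in cache:               # edge node checks its result cache
        return cache[task]
    edge_part, cloud_part = split(task)
    result = edge_exec(edge_part) + cloud_exec(cloud_part)
    cache[task] = result            # task caching strategy of step 6
    return result
```

In the claimed system the split corresponds to the offload ratio ε of the task offloading strategy, and the two executors stand in for the MEC server and the cloud server.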
delays present in the calculation process, including:
a. transmission delay of the mobile device: according to the Shannon formula, the upload rate r_{i,j} of MT_i is expressed as:

r_{i,j} = B log_2(1 + p_i g_{i,j} / (σ^2 + I_{i,j}))

where B denotes the channel bandwidth, p_i the transmission power of MT_i, σ^2 the noise power at the mobile device, I_{i,j} the interference power in the cell, and g_{i,j} the channel gain of the communication; MT stands for Mobile Terminal, and MT_i denotes the i-th MT;

sc denotes the small cell base station (SCeNB); sc communicates with the MEC server via optical fiber, and the transmission rate between the two is much greater than the upload rate r_{i,j} of MT_i, so the communication delay between sc and the MEC server is ignored; T^t_{i,j} denotes the delay for the mobile device to upload task q_i to sc_j:

T^t_{i,j} = S_i / r_{i,j}

where S_i denotes the size of the computing task;

b. computing delay of the edge node: after the edge node has successfully received the complete computing tasks sent from the mobile devices, the MEC server immediately executes the offloading strategy and divides each computing task into two parts, one executed by the MEC server and the other by the cloud server; it is assumed that each computing task can be split arbitrarily without considering its content, corresponding to video compression and speech recognition scenarios; the division ratio ε_{i,j} ∈ [0,1] denotes the proportion of the computing task data executed at the MEC server; with f^e_{i,j} denoting the computing resources allocated by the j-th edge node to the i-th mobile device, the delay of executing the task at the edge node is expressed as:

T^e_{i,j} = ε_{i,j} s_{i,j} c_{i,j} / f^e_{i,j}

where c_{i,j} denotes the CPU cycles required to compute one unit of task data;

c. transmission delay of the edge node: in each edge node the communication module (the transceiver) is separated from the computing module (the CPU/GPU), so the computation of a task and the transmission of a task are executed in parallel; all edge nodes are connected to the cloud server through different backhaul links; an optimal collaboration strategy between edge computing and cloud computing is derived under the assumption that the resource scheduling strategy and routing algorithm are fixed; with H_j denoting the backhaul communication capacity of each device associated with the j-th edge node, the average backhaul transmission delay is proportional to the size of the transmitted data, expressed as:

T^b_{i,j} = (1 - ε_{i,j}) s_{i,j} / H_j

where 1/H_j denotes the time required to transmit 1 bit of data over the backhaul link, s_{i,j} the size of the computing task, and ε_{i,j} the proportion of the task data executed at the MEC server;

d. computing delay of the cloud server: when the cloud server has successfully received the task data sent from the edge node, it allocates its available computing resources to each task to realize parallel processing; (1 - ε_{i,j}) s_{i,j} c_{i,j} denotes the number of CPU cycles, i.e., the computing resources, required for the part offloaded to the cloud server; with f^c_{i,j} denoting the cloud computing resources allocated to the i-th mobile device served by the j-th edge node, the computing delay of the cloud server is expressed as:

T^c_{i,j} = (1 - ε_{i,j}) s_{i,j} c_{i,j} / f^c_{i,j}
the total delay in the cloud-edge collaborative system is obtained under two assumptions:

assumption 1: the offloading of tasks depends only on the specific parameters of each task;

assumption 2: because the computation of a task depends on its specific data structure and the correlation between adjacent data, the cloud server starts to process a task only after the transmission between the edge node and the cloud server has finished;

based on the above assumptions, and since transmission and computation at the edge node proceed in parallel, the total delay incurred by the i-th mobile device served by the j-th edge node is expressed as:

T_{i,j} = T^t_{i,j} + max(T^e_{i,j}, T^b_{i,j} + T^c_{i,j})

where T^t_{i,j} denotes the delay for the mobile device to upload task q_i to sc_j, T^e_{i,j} the delay of executing the task at the edge node, T^b_{i,j} the delay of the edge node uploading the task to the cloud server, and T^c_{i,j} the delay of processing the computing task at the cloud server;
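Putting the four delay components together, a minimal model of the total per-device delay (assuming, as described above, that edge computing runs in parallel with the backhaul-plus-cloud path and that a cached task incurs no delay; parameter names are illustrative) could read:

```python
def total_delay(s, c, r, f_edge, H, f_cloud, eps, cached=False):
    """Total delay T_ij for one task of s bits with c CPU cycles per
    bit, upload rate r, edge resources f_edge, backhaul capacity H and
    cloud resources f_cloud; eps is the fraction kept at the edge."""
    if cached:                     # y_ij = 1: result served from cache
        return 0.0
    t_up = s / r                   # mobile -> edge transmission
    t_edge = eps * s * c / f_edge  # edge computing
    t_back = (1 - eps) * s / H     # edge -> cloud backhaul
    t_cloud = (1 - eps) * s * c / f_cloud  # cloud computing
    return t_up + max(t_edge, t_back + t_cloud)
```

Setting eps to 0 or 1 reproduces the Only-cloud and Only-edge baselines used in the comparison figures.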
under limited resources, the method minimizes the total delay of all mobile devices through task offloading and computing resource allocation; the objective is expressed as:

Q1: min_{ε, f^e, f^c} Σ_j Σ_i (1 - y_{i,j}) T_{i,j}
s.t. (7a) Σ_i f^e_{i,j} ≤ F^e_j, ∀j
(7b) Σ_j Σ_i f^c_{i,j} ≤ F^c
(7c) ε_{i,j} ∈ [0,1], ∀i,j
(7d) f^e_{i,j} ≥ 0, f^c_{i,j} ≥ 0, ∀i,j

where y_{i,j} denotes the cache state of the task; f^e_{i,j} denotes the computing resources allocated by the j-th edge node to the i-th mobile device, f^c_{i,j} the cloud computing resources allocated to the i-th mobile device served by the j-th edge node, F^e_j the computing capacity of the MEC server in the j-th edge node, and F^c the computing capacity of the cloud server;

constraints (7a) and (7b) state that neither the edge computing resources nor the cloud computing resources allocated to the mobile devices may exceed the respective maximum amount of computing resources; the optimization variables comprise the offload ratios {ε_{i,j}} and the computing resource allocations {f^e_{i,j}, f^c_{i,j}};
The decomposition of Q1 into two parts comprises: task offloading and computing resource allocation affect only the solution of the latter part, so the problem is divided into a task-caching part and a remaining subproblem Q2 covering offloading and resource allocation for the non-cached tasks;
The task caching strategy: before the task offloading decision is made, problem Q2 is further simplified by computing the cache vector Y_{i,j} = [y_{1,1}, y_{1,2}, ..., y_{i,j}] with the task caching strategy;

assuming that the number of requests of the mobile devices for task Q follows a Poisson distribution, P(C = c) = λ^c e^{-λ} / c!, where c denotes the number of requests of the task and λ the average occurrence rate of the task per unit time; the vector group Y is empty at initialization; for each task, the task type is first determined according to the Poisson distribution, and it is then judged whether the task exists in the task cache table; if so, the MEC server directly returns the execution result to the mobile device and y_{i,j} is set to 1; otherwise it is judged whether the task exists in the task history table; if it exists in that table, y_{i,j} is set to 1, the execution result is returned directly, the task is stored in the task cache table, and the task history table is updated; if neither table records task q_i, y_{i,j} is set to 0; the task space is traversed in this way, finally yielding the vector group Y_{i,j} = [y_{1,1}, y_{1,2}, ..., y_{i,j}];
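The cache-table / history-table update rule described above can be sketched as follows; the request stream, which the claim models as Poisson-distributed, is passed in here as a plain list, and the set-based tables are illustrative data structures.

```python
def run_cache_policy(requests, cache=None, history=None):
    """For each requested task: y = 1 on a cache hit; y = 1 on a
    history hit, in which case the task is promoted into the cache;
    otherwise y = 0 and the task is recorded in the history table."""
    cache = set() if cache is None else cache
    history = set() if history is None else history
    Y = []
    for q in requests:
        if q in cache:
            Y.append(1)
        elif q in history:
            Y.append(1)
            cache.add(q)          # store the task in the cache table
        else:
            Y.append(0)
            history.add(q)        # update the task history table
    return Y
```

For the request stream ['a', 'b', 'a', 'b', 'a'] this yields [0, 0, 1, 1, 1]: each task misses once, is promoted into the cache on its second request, and hits thereafter.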
Using the vector group Y_{i,j} = [y_{1,1}, y_{1,2}, ..., y_{i,j}], Q2 is simplified into problem Q3, in which only the non-cached tasks (y_{i,j} = 0) contribute to the total delay;
The task offloading strategy: the optimal task offloading ratio ε*_{i,j} is determined while keeping f^e_{i,j} and f^c_{i,j} fixed;

the optimal division ratio ε*_{i,j} is determined by analyzing the monotonicity of the delays with respect to the task allocation ratio ε_{i,j}; first, from T^e_{i,j} = ε_{i,j} s_{i,j} c_{i,j} / f^e_{i,j} it follows that T^e_{i,j} is monotonically increasing in ε_{i,j}, and for ε_{i,j} ∈ [0,1] its range is [0, s_{i,j} c_{i,j} / f^e_{i,j}]; second, from T^b_{i,j} + T^c_{i,j} = (1 - ε_{i,j})(s_{i,j}/H_j + s_{i,j} c_{i,j}/f^c_{i,j}) it follows that this term is monotonically decreasing in ε_{i,j}, with range [0, s_{i,j}/H_j + s_{i,j} c_{i,j}/f^c_{i,j}]; combining the two observations, the total delay max(T^e_{i,j}, T^b_{i,j} + T^c_{i,j}) first decreases and then increases with ε_{i,j}; its minimum is attained where the two branches intersect, i.e. where ε_{i,j} s_{i,j} c_{i,j} / f^e_{i,j} = (1 - ε_{i,j})(s_{i,j}/H_j + s_{i,j} c_{i,j}/f^c_{i,j});
For each mobile device two important parameters are defined:

(2.1) the normalized backhaul communication capacity, defined as the ratio of the backhaul communication capacity to the edge computing capability, i.e. μ_{i,j} = H_j c_{i,j} / f^e_{i,j};

(2.2) the normalized cloud computing capacity, defined as the ratio of the cloud computing capacity to the edge computing capacity, i.e. ν_{i,j} = f^c_{i,j} / f^e_{i,j};

based on these definitions, the optimal task offloading strategy in the cloud-edge collaborative system is expressed as:

ε*_{i,j} = (μ_{i,j} + ν_{i,j}) / (μ_{i,j} ν_{i,j} + μ_{i,j} + ν_{i,j})    (10)

where μ_{i,j} denotes the ratio of backhaul communication capacity to edge computing capacity, and ν_{i,j} the ratio of cloud computing capacity to edge computing capacity;
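Under one consistent reading of the definitions above (μ = H_j·c_{i,j}/f^e, ν = f^c/f^e), the split ratio obtained by balancing the edge-computing delay against the backhaul-plus-cloud delay, and its limiting regimes, can be checked numerically. This is a reconstruction under those assumptions, not the verbatim patent formula.

```python
def optimal_split(mu, nu):
    """eps* = (mu + nu) / (mu*nu + mu + nu): the ratio at which the
    edge-computing delay equals the backhaul-plus-cloud delay."""
    return (mu + nu) / (mu * nu + mu + nu)

# Limiting regimes of the formula:
#   nu -> 0   (edge-dominant):         eps* -> 1, everything at the edge
#   nu -> inf (cloud-oriented):        eps* -> 1 / (1 + mu)
#   mu >> nu  (computation-limited):   eps* -> 1 / (1 + nu)
```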
formula (10) distinguishes the following systems:

1st, the communication-limited system: the edge node and the cloud server have sufficient computing resources, but the communication capacity is insufficient; in this case μ_{i,j} << ν_{i,j}; this occurs when one edge node connects a large number of mobile devices and the capacity of the backhaul link is insufficient; the optimal task offload ratio simplifies to:

ε*_{i,j} ≈ 1 / (1 + μ_{i,j})    (11)

equation (11) shows that in this case the optimal task offload ratio is determined only by the normalized backhaul communication capacity and is unaffected by the normalized cloud computing capacity; the communication resources become the main bottleneck of the end-to-end delay of each device; when μ_{i,j} → 0, the optimal task offload ratio tends to 1 and all computing tasks are executed at the edge node;

2nd, the computation-limited system: the edge node and the cloud server have sufficient communication capacity, but the computing resources are insufficient; in this case μ_{i,j} >> ν_{i,j}, and the optimal task offload ratio simplifies to:

ε*_{i,j} ≈ 1 / (1 + ν_{i,j})    (12)

the optimal task offload ratio is determined only by the normalized cloud computing capacity; since the backhaul communication capacity is relatively sufficient, the delay generated by communication becomes small and the computing delay determines the overall delay of the mobile device; in this case the edge node and the cloud server are treated as a whole and the task is split in proportion to their computing capacities, i.e. ν_{i,j}; if the computing power of the edge node is greater than that of the cloud server, i.e. ν_{i,j} < 1, more data is offloaded to the edge node; otherwise more data is offloaded to the cloud; in the special case ν_{i,j} = 1 the data is distributed equally between the edge node and the cloud server;

3rd, the edge-dominant system: the edge node allocates far more computing resources to the mobile device than the cloud server, i.e. f^c_{i,j} << f^e_{i,j}, so ν_{i,j} → 0; this corresponds to a large-scale small-cell network in which the cloud server serves many edge nodes; in this system the optimal task offload ratio reduces to:

ε*_{i,j} = 1    (13)

indicating that the whole task is executed on the edge node and only needs to be offloaded to the edge node;

4th, the cloud-oriented system: the cloud server has sufficient computing resources while the edge node has limited computing resources, i.e. f^e_{i,j} << f^c_{i,j}, so ν_{i,j} → ∞; this corresponds to a scenario with a powerful cloud server and weak edge nodes; substituting ν_{i,j} → ∞ into formula (10) gives:

ε*_{i,j} = 1 / (1 + μ_{i,j})    (14)

which shows that the optimal task offload ratio is again determined only by the normalized backhaul communication capacity; if the backhaul communication capacity H_j increases or the edge computing power f^e_{i,j} decreases, the optimal offload-to-edge ratio decreases; for ν_{i,j} → ∞, the delay of executing tasks in the cloud is negligible compared with the delay of executing tasks at the edge node; thus when μ_{i,j} < 1 the backhaul transmission delay dominates the overall delay of the mobile device, so a larger proportion of the task data is processed on the edge side, i.e. ε*_{i,j} > 1/2; conversely, when μ_{i,j} > 1 more data should be offloaded to the cloud server;
the computing resource allocation strategy: substituting the optimal task offload ratio ε*_{i,j} into the delay expression, the computing delay simplifies to a function of the allocated resources f^e_{i,j} and f^c_{i,j} only, and problem Q3 is rewritten as problem Q4;
theorem 1: q4 is a convex optimization problem;
and (3) proving: only the restriction conditions of the objective function and the objective function are proved to be convex functions, and the theorem is proved; looking at constraints (7 a) and (7 b), it can be seen that constraints (7 c) and (7 d) are affine reflecting their convexity; the following demonstrates that the objective function is also a convex function:
expressed as a hessian matrix:
all front main formulas of W are obtained through mathematical calculation:
according to linear algebra theory, when the front main sub-formula of a matrix is positive definite, the matrix is positive definite; the product can be obtained by the method,is based on the variables->And->Is a convex function of (2); and objective function +.>Is the sum of a series of convex functions, and the obtained objective function is also a convex function; theorem 1 obtains evidence;
according to theorem 1, Q4 satisfies the KKT conditions, and the optimal allocation of computing resources in the cloud-edge collaborative system is obtained with Lagrange multipliers, where σ_j and θ denote the optimal Lagrange multipliers.
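The extracted text omits the closed-form allocation, but for an objective of the form Σ w_i/f_i subject to Σ f_i ≤ F the KKT conditions give the well-known square-root rule f_i* = F·√w_i / Σ_j √w_j. The sketch below illustrates that shape under this assumption; it is not the patent's exact expression, whose multipliers σ_j and θ are not reproduced here.

```python
import math

def allocate(workloads, F):
    """Stationarity (-w_i / f_i**2 + sigma = 0) together with the
    budget sum(f_i) = F yields f_i* = F * sqrt(w_i) / sum_j sqrt(w_j),
    so devices with heavier workloads receive more capacity."""
    roots = [math.sqrt(w) for w in workloads]
    total = sum(roots)
    return [F * r / total for r in roots]
```

For example, allocate([4, 1], 3) splits a capacity of 3 as [2, 1]: the device with four times the workload receives twice the resources.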
2. The cloud-edge collaboration-based LTE power wireless private network task offloading and resource allocation method according to claim 1, characterized in that: the cloud-edge collaborative system established in step 1 consists of a central cloud server and J SCeNBs, represented by the set {1, 2, ..., J}; each SCeNB is deployed with an MEC server, and the MEC server uses its limited resources for data processing, caching and storage; each combination of an SCeNB and an MEC server is referred to as an edge node; within the coverage area of the j-th SCeNB there is a set χ_j of mobile devices, each of which has a computing task with its own latency requirement; it is assumed that each user is already connected to one base station, the specific connection relation being determined by the user communication strategy; each mobile device is connected to the corresponding base station through a wireless channel, and the edge nodes transmit data to the cloud server through different backhaul links; in the system it is assumed that every computing task can be processed on both the edge node and the cloud server, and that all tasks are of the same type and arrive at the same time; q_{i,j} = (s_{i,j}, c_{i,j}) represents the computing task generated by the i-th mobile device connected to the j-th edge node, where s_{i,j} denotes the size of the computing task and c_{i,j} the CPU cycles required to compute it; N denotes the total number of tasks requested by the system in a certain time period; the computing capacities of the MEC server in the j-th edge node and of the cloud server are defined as F^e_j and F^c, respectively; the computing resources of each MEC server and of the cloud server are allocated to the mobile devices through virtual machine technology.
3. The cloud-edge collaboration-based LTE power wireless private network task offloading and resource allocation method according to claim 1, characterized in that: establishing the MEC cache model in step 2 comprises: for each task, the cache vector of the MEC server is defined as Y_{i,j} = [y_{1,1}, y_{1,2}, ..., y_{i,j}]; y_{i,j} = 1 indicates that the MEC server has cached the calculation result of the task, and y_{i,j} = 0 indicates that the corresponding task result is not cached; when y_{i,j} = 1, the MEC server transmits the calculation result directly to the mobile device without computing; since the transmission power of the SCeNB is greater than the transmission power of the mobile device and the data volume of a calculation result is smaller than the data volume of the task itself, the transmission delay on the wireless downlink is ignored and the transmission delay required by the MEC caching scheme is taken as 0; by directly returning the results of frequently requested tasks to the mobile devices, the MEC cache model reduces delay and energy consumption.
4. The cloud-edge collaboration-based LTE power wireless private network task offloading and resource allocation method according to claim 1, characterized in that: the delay analysis of the cloud-edge collaborative system in step 3 comprises: in the cloud-edge-end task allocation model, each mobile device, owing to its limited computing capacity and battery power, is assumed to send a task request, and it is judged whether the result of the task exists in the cache; if so, the task result is returned directly; if not, the mobile device uploads the task to the edge node, and the corresponding edge node determines whether the task is processed by the edge node alone or cooperatively by the edge node and the cloud server.
5. The cloud-edge collaboration-based LTE power wireless private network task offloading and resource allocation method according to claim 4, characterized in that: if the task result does not exist in the cache, the mobile device uploads the task to the edge node, and the edge node is also responsible for determining the offloading proportion of each task between the cloud and the edge.
CN201911365711.8A 2019-12-26 2019-12-26 LTE power wireless private network task unloading and resource allocation method based on cloud edge cooperation Active CN111585916B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911365711.8A CN111585916B (en) 2019-12-26 2019-12-26 LTE power wireless private network task unloading and resource allocation method based on cloud edge cooperation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911365711.8A CN111585916B (en) 2019-12-26 2019-12-26 LTE power wireless private network task unloading and resource allocation method based on cloud edge cooperation

Publications (2)

Publication Number Publication Date
CN111585916A CN111585916A (en) 2020-08-25
CN111585916B true CN111585916B (en) 2023-08-01

Family

ID=72124227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911365711.8A Active CN111585916B (en) 2019-12-26 2019-12-26 LTE power wireless private network task unloading and resource allocation method based on cloud edge cooperation

Country Status (1)

Country Link
CN (1) CN111585916B (en)

Families Citing this family (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111988805B (en) * 2020-08-28 2022-03-29 重庆邮电大学 End edge cooperation method for reliable time delay guarantee
CN112039992B (en) * 2020-09-01 2022-10-28 平安资产管理有限责任公司 Model management method and system based on cloud computing architecture
CN112118135A (en) * 2020-09-14 2020-12-22 南昌市言诺科技有限公司 Minimum resource configuration method and device for cloud edge cooperative architecture industrial internet platform
CN112365658A (en) * 2020-09-21 2021-02-12 国网江苏省电力有限公司信息通信分公司 Charging pile resource allocation method based on edge calculation
CN112217879B (en) * 2020-09-24 2023-08-01 江苏方天电力技术有限公司 Edge computing technology and cloud edge cooperation method based on power distribution Internet of things
CN112148492B (en) * 2020-09-28 2023-07-28 南京大学 Service deployment and resource allocation method considering multi-user mobility
CN112256413A (en) * 2020-10-16 2021-01-22 国网电子商务有限公司 Scheduling method and device for edge computing task based on Internet of things
CN114384866B (en) * 2020-10-21 2023-06-27 沈阳中科数控技术股份有限公司 Data partitioning method based on distributed deep neural network framework
CN112491957B (en) * 2020-10-27 2021-10-08 西安交通大学 Distributed computing unloading method and system under edge network environment
CN113114714B (en) * 2020-11-03 2022-03-01 吉林大学 Energy-saving method and system for unloading large-scale tasks to 5G edge server
CN112512061B (en) * 2020-11-05 2022-11-22 上海大学 Task unloading and assigning method in multi-access edge computing system
CN112468547B (en) * 2020-11-13 2023-04-07 广州中国科学院沈阳自动化研究所分所 Regional-based industrial edge computing task cloud collaborative unloading method
CN112437156B (en) * 2020-11-23 2022-01-14 兰州理工大学 Distributed cooperative caching method based on MEC-D2D
CN112506656A (en) * 2020-12-08 2021-03-16 深圳市国电科技通信有限公司 Distribution method based on distribution Internet of things computing task
CN112702401B (en) * 2020-12-15 2022-01-04 北京邮电大学 Multi-task cooperative allocation method and device for power Internet of things
CN112689303B (en) * 2020-12-28 2022-07-22 西安电子科技大学 Edge cloud cooperative resource joint allocation method, system and application
CN112887785B (en) * 2021-01-13 2023-05-02 浙江传媒学院 Time delay optimization method based on remote video superposition interactive calculation
CN112749012A (en) * 2021-01-15 2021-05-04 北京智芯微电子科技有限公司 Data processing method, device and system of terminal equipment and storage medium
CN112650338B (en) * 2021-01-22 2022-04-19 褚东花 Energy-saving and environment-friendly forestry seedling detection system and method based on Internet of things
CN113015217B (en) * 2021-02-07 2022-05-20 重庆邮电大学 Edge cloud cooperation low-cost online multifunctional business computing unloading method
CN112996056A (en) * 2021-03-02 2021-06-18 国网江苏省电力有限公司信息通信分公司 Method and device for unloading time delay optimized computing task under cloud edge cooperation
CN112861371B (en) * 2021-03-02 2022-11-18 东南大学 Steel industry cloud production scheduling method based on edge computing
CN113192322B (en) * 2021-03-19 2022-11-25 东北大学 Expressway traffic flow counting method based on cloud edge cooperation
CN112989251B (en) * 2021-03-19 2023-07-14 浙江传媒学院 Mobile Web augmented reality 3D model data service method based on collaborative computing
CN113128681B (en) * 2021-04-08 2023-05-12 天津大学 Multi-edge equipment-assisted general CNN reasoning acceleration system
CN113254095B (en) * 2021-04-25 2022-08-19 西安电子科技大学 Task unloading, scheduling and load balancing system and method for cloud edge combined platform
CN113301151B (en) * 2021-05-24 2023-01-06 南京大学 Low-delay containerized task deployment method and device based on cloud edge cooperation
CN113395679B (en) * 2021-05-25 2022-08-05 安徽大学 Resource and task allocation optimization system of unmanned aerial vehicle edge server
CN113315659B (en) * 2021-05-26 2022-04-22 江西鑫铂瑞科技有限公司 Task collaborative planning method and system for intelligent factory
CN113037877B (en) * 2021-05-26 2021-08-24 深圳大学 Optimization method for time-space data and resource scheduling under cloud edge architecture
CN113361113B (en) * 2021-06-09 2021-12-14 南京工程学院 Energy-consumption-adjustable twin data distribution method for high-speed rail bogie
CN113315669B (en) * 2021-07-28 2021-10-15 江苏电力信息技术有限公司 Cloud edge cooperation-based throughput optimization machine learning inference task deployment method
CN113592077B (en) * 2021-08-05 2024-04-05 哈尔滨工业大学 Cloud edge DNN collaborative reasoning acceleration method for edge intelligence
CN113778685A (en) * 2021-09-16 2021-12-10 上海天麦能源科技有限公司 Unloading method for urban gas pipe network edge computing system
CN113961264B (en) * 2021-09-30 2024-01-09 河海大学 Intelligent unloading algorithm and system for video monitoring cloud edge cooperation
CN113961266B (en) * 2021-10-14 2023-08-22 湘潭大学 Task unloading method based on bilateral matching under edge cloud cooperation
CN114051266B (en) * 2021-11-08 2024-01-12 首都师范大学 Wireless body area network task unloading method based on mobile cloud-edge calculation
CN114301907B (en) * 2021-11-18 2023-03-14 北京邮电大学 Service processing method, system and device in cloud computing network and electronic equipment
CN115102974A (en) * 2021-12-08 2022-09-23 湘潭大学 Cooperative content caching method based on bilateral matching game
CN114143355B (en) * 2021-12-08 2022-08-30 华北电力大学 Low-delay safety cloud side end cooperation method for power internet of things
CN114928607B (en) * 2022-03-18 2023-08-04 南京邮电大学 Collaborative task unloading method for polygonal access edge calculation
CN114928653B (en) * 2022-04-19 2024-02-06 西北工业大学 Data processing method and device for crowd sensing
CN114945025B (en) * 2022-04-25 2023-09-15 国网经济技术研究院有限公司 Price-driven positive and game unloading method and system oriented to cloud-edge coordination in power grid
CN114844900B (en) * 2022-05-05 2022-12-13 中南大学 Edge cloud resource cooperation method based on uncertain demand
CN114637608B (en) * 2022-05-17 2022-09-16 之江实验室 Calculation task allocation and updating method, terminal and network equipment
CN114745389B (en) * 2022-05-19 2023-02-24 电子科技大学 Computing offload method for mobile edge computing system
CN115002113B (en) * 2022-05-26 2023-08-01 南京邮电大学 Mobile base station edge computing power resource scheduling method, system and electronic equipment
CN115225675A (en) * 2022-07-18 2022-10-21 国网信息通信产业集团有限公司 Charging station intelligent operation and maintenance system based on edge calculation
CN115297013B (en) * 2022-08-04 2023-11-28 重庆大学 Task unloading and service cache joint optimization method based on edge collaboration
CN116016522B (en) * 2023-02-13 2023-06-02 广东电网有限责任公司中山供电局 Cloud edge end collaborative new energy terminal monitoring system
CN117240631A (en) * 2023-11-15 2023-12-15 成都超算中心运营管理有限公司 Method and system for connecting heterogeneous industrial equipment with cloud platform based on message middleware

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108541027A (en) * 2018-04-24 2018-09-14 南京邮电大学 A kind of communication computing resource method of replacing based on edge cloud network
CN109684075A (en) * 2018-11-28 2019-04-26 深圳供电局有限公司 A method of calculating task unloading is carried out based on edge calculations and cloud computing collaboration
CN109814951A (en) * 2019-01-22 2019-05-28 南京邮电大学 The combined optimization method of task unloading and resource allocation in mobile edge calculations network
CN110035410A (en) * 2019-03-07 2019-07-19 中南大学 Federated resource distribution and the method and system of unloading are calculated in a kind of vehicle-mounted edge network of software definition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Survey of mobile edge computing offloading technology; Xie Renchao et al.; Journal on Communications; 2018-11-25 (No. 11); full text *
Task offloading and resource optimization based on mobile edge computing in ultra-dense networks; Zhang Haibo et al.; Journal of Electronics & Information Technology; 2019-05-14 (No. 05); full text *


Similar Documents

Publication Publication Date Title
CN111585916B (en) LTE power wireless private network task unloading and resource allocation method based on cloud edge cooperation
CN111447619B (en) Joint task unloading and resource allocation method in mobile edge computing network
CN107766135B (en) Task allocation method based on particle swarm optimization and simulated annealing optimization in moving cloud
Zhang et al. Energy-efficient offloading for mobile edge computing in 5G heterogeneous networks
Lee et al. An online secretary framework for fog network formation with minimal latency
Sun et al. Autonomous resource slicing for virtualized vehicular networks with D2D communications based on deep reinforcement learning
CN110098969B (en) Fog computing task unloading method for Internet of things
Samanta et al. Battle of microservices: Towards latency-optimal heuristic scheduling for edge computing
WO2023039965A1 (en) Cloud-edge computing network computational resource balancing and scheduling method for traffic grooming, and system
Wang et al. A high reliable computing offloading strategy using deep reinforcement learning for iovs in edge computing
Li Resource optimization scheduling and allocation for hierarchical distributed cloud service system in smart city
Wang et al. Dynamic offloading scheduling scheme for MEC-enabled vehicular networks
KR102298698B1 (en) Method and apparatus for service caching in edge computing network
Mollahasani et al. Dynamic CU-DU selection for resource allocation in O-RAN using actor-critic learning
Wei et al. Optimal offloading in fog computing systems with non-orthogonal multiple access
Wu et al. A mobile edge computing-based applications execution framework for Internet of Vehicles
Liu et al. Multi-agent deep reinforcement learning for end—edge orchestrated resource allocation in industrial wireless networks
Richart et al. Slicing with guaranteed quality of service in wifi networks
CN110996390B (en) Wireless access network computing resource allocation method and network system
Tam et al. Intelligent massive traffic handling scheme in 5G bottleneck backhaul networks
Krijestorac et al. Hybrid vehicular and cloud distributed computing: A case for cooperative perception
TW202021299A (en) Communication system of qoe-oriented cross-layer beam allocation and admission control for functional splitted wireless fronthaul communications
Tayade et al. Delay constrained energy optimization for edge cloud offloading
He et al. An offloading scheduling strategy with minimized power overhead for internet of vehicles based on mobile edge computing
CN112235387B (en) Multi-node cooperative computing unloading method based on energy consumption minimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant