CN116390160A - Cloud edge cooperation-based power task unloading method, device, equipment and medium - Google Patents


Info

Publication number
CN116390160A
Authority
CN
China
Prior art keywords
task
mec
user equipment
cloud
representing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310080643.0A
Other languages
Chinese (zh)
Inventor
谢欢
张秋铭
杜书
赵虹
王东
杨洋
邓冰妍
冯盈
王海霖
张鹤鹏
项靖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Sichuan Electric Power Co Ltd
Original Assignee
State Grid Sichuan Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Sichuan Electric Power Co Ltd filed Critical State Grid Sichuan Electric Power Co Ltd
Priority to CN202310080643.0A priority Critical patent/CN116390160A/en
Publication of CN116390160A publication Critical patent/CN116390160A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/02Traffic management, e.g. flow control or congestion control
    • H04W28/08Load balancing or load distribution
    • H04W28/09Management thereof
    • H04W28/0925Management thereof using policies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/02Traffic management, e.g. flow control or congestion control
    • H04W28/08Load balancing or load distribution
    • H04W28/09Management thereof
    • H04W28/0917Management thereof based on the energy state of entities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/02Traffic management, e.g. flow control or congestion control
    • H04W28/08Load balancing or load distribution
    • H04W28/09Management thereof
    • H04W28/0958Management thereof based on metrics or performance parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/02Traffic management, e.g. flow control or congestion control
    • H04W28/08Load balancing or load distribution
    • H04W28/09Management thereof
    • H04W28/0958Management thereof based on metrics or performance parameters
    • H04W28/0967Quality of Service [QoS] parameters
    • H04W28/0975Quality of Service [QoS] parameters for reducing delays
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention discloses a cloud-edge-collaboration-based power task offloading method, apparatus, device and medium. The decision of whether a computing task is executed locally at the user equipment or offloaded to an MEC server for execution should adapt to time-varying network dynamics. Taking into account the dynamics of the environment, including computing power resources, bandwidth resources and channel conditions, the method selects an appropriate edge server for each computing task and, in combination with the DQN algorithm, minimizes the computing cost in terms of overall delay.

Description

Cloud edge cooperation-based power task unloading method, device, equipment and medium
Technical Field
The invention belongs to the technical field of communication, and particularly relates to a cloud edge cooperation-based power task unloading method, device, equipment and medium.
Background
The power Internet of Things (PIoT) is an application of the Internet of Things in smart grids, functioning in various aspects of the smart grid, including power generation, transmission, distribution and consumption. By integrating the resources of the communication infrastructure and the power infrastructure, and utilizing advanced information and communication technologies, PIoT allows information exchange between interconnected devices and further improves the utilization efficiency of the power system. However, challenged by differentiated service demands and diversified application scenarios, the construction of the power Internet of Things needs to introduce key 5G technologies such as edge computing and low-latency techniques to realize flexible and differentiated communication capability. The 5G network can meet various key communication requirements of the power grid thanks to its low latency, low cost and high security in carrying control-type and acquisition-type services.
Power services cover links such as transmission, transformation, distribution and consumption, and each link has numerous service types, such as electricity consumption information acquisition, precise load control, mobile inspection and distribution network automation. For the services with the highest delay requirements in the power Internet of Things, deploying an edge computing platform can reduce delays such as wireless-side-to-core-side transmission and signaling processing. Multi-access edge computing (MEC) places data-center-level computing, storage and network resources closer to users and end devices, i.e. at the edge of the network. MEC servers are designed to serve consumer and enterprise applications that require low latency and high bandwidth. A device may offload computing tasks to a nearby MEC server to reduce processing delay and save battery power.
The prior art utilizes a collaborative computation offloading mechanism over centralized cloud servers and distributed MEC server resources, in which the cooperation between end users, MEC servers and the centralized cloud is considered, together with a partial computation offloading architecture: the user's computing task is partially split and offloaded to the MEC servers or the centralized cloud, while the non-offloadable portion is executed locally. The computation offloading mechanisms proposed in the prior art are mostly based on one-shot optimization, cannot characterize long-term offloading performance, and do not take into account the dynamics of the environment, such as bandwidth resources, time-varying channel conditions, energy availability at the mobile device, and the computing capabilities of different MEC servers. If the selected MEC server experiences heavy workload and degraded channel conditions, the terminal device may take longer to offload data and receive the results.
Disclosure of Invention
The invention aims to provide a cloud-edge cooperation-based power task unloading method, device, equipment and medium, so as to reduce processing delay of a power task.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
in a first aspect, a cloud edge collaboration-based power task offloading method includes the steps of:
determining the environmental state of the MEC wireless network in the target area;
constructing an objective function by taking the minimum total delay time of power task unloading as a target based on the environmental state;
solving the objective function by using a preset reinforcement learning unloading model to obtain an action strategy corresponding to the minimum objective function;
and executing the action strategy to finish the unloading of the power task.
Further, in the step of determining the environmental status of the MEC wireless network in the target area, the environmental status specifically includes: channel condition status, actual available energy of the user equipment, and computing power resources allocable by the MEC server.
Further, in the step of constructing the objective function with the minimum total delay time of power task offloading as the target, the objective function is as follows:

$$\min\ \sum_{k \in K}\left[(1-\alpha_k)\,D_k^{l} + \alpha_k\,D_k^{o}\right]$$

wherein α_k represents the task offloading decision of user equipment k: α_k = 1 means the task is offloaded to the MEC server and α_k = 0 means the task is computed locally; D_k^l represents the total local execution time of task T_k on user equipment k; D_k^o represents the total offloading and computation delay of the task offloading of user equipment k.
Further, the total offloading and computation delay D_k^o of the task offloading of user equipment k is calculated by the following formula:

$$D_k^{o} = \begin{cases} D_k^{m}, & \text{if } D_k^{m} \le \tau_k \\ D_k^{m,n}, & \text{if } D_k^{m} > \tau_k \text{ and } D_k^{m,n} \le \tau_k \\ D_k^{c}, & \text{otherwise} \end{cases}$$

wherein D_k^m represents the total execution time of task T_k offloaded by user equipment k on MEC m; D_k^{m,n} represents the total execution time of the task offloaded by user equipment k to MECs m and n; D_k^c represents the total execution time of task T_k offloaded by user equipment k in the cloud; τ_k represents the task computation deadline.
Further, the total execution time D_k^m of task T_k offloaded by user equipment k on MEC m is as follows:

$$D_k^{m} = d_{k,m}^{t} + d_{k,m}^{c}$$

The total execution time of the task offloaded by user equipment k to MECs m and n is:

$$D_k^{m,n} = d_{k,m}^{t} + d_{k,m}^{c} + d_{m,n}^{f} + d_{k,n}^{c}$$

The total execution time of task T_k offloaded by user equipment k in the cloud is:

$$D_k^{c} = d_{k,m}^{t} + d_{m,c}^{f} + d_{c}^{c}$$

In the above, d_{k,m}^t represents the transmission delay for offloading the task from user equipment k to MEC m; d_{k,m}^c represents the computation delay of task T_k on MEC m; d_{m,n}^f represents the forwarding delay between MEC m and MEC n; d_{k,n}^c represents the execution delay of task T_k on MEC n; d_{m,c}^f represents the forwarding delay between MEC m and the cloud server; d_c^c is the execution delay on the cloud server.
Further, the total local execution time D_k^l of task T_k on user equipment k is expressed as:

$$D_k^{l} = w_k + l_k$$

wherein w_k is the average waiting time of task T_k; l_k represents the execution delay of task T_k at user equipment k.
Further, in the step of solving the objective function by using a preset reinforcement learning offloading model, the training of the reinforcement learning offloading model specifically includes the steps of:
acquiring the channel condition state of the wireless network in the target area, the actual available energy of the user equipment and the allocable computing power resources of the MEC servers, and constructing a state vector of the environment state;
generating a corresponding action according to the constructed state vector of the environment state and the reinforcement learning strategy;
formulating a control strategy according to the state vector of the environment state and the action; wherein the control strategy comprises a fixed task offloading and server selection strategy;
executing the control strategy, and calculating the obtained reward according to the feedback from the environment;
calculating an updated Q function based on the environment state, the action and the obtained reward;
setting a loss function according to the updated Q function, calculating the gradient, and taking the model with the minimum loss function as the final reinforcement learning offloading model.
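The training steps above (state vector, action, reward, Q update via a loss gradient) can be sketched with a minimal linear Q-approximator standing in for the full DQN. All dimensions, hyperparameters and the reward definition below are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: the state vector stacks channel condition, device
# energy and MEC computing-power levels; the four actions mirror the four
# decisions (local, MEC m, forward to MEC n, cloud).
STATE_DIM, N_ACTIONS = 3, 4
W = np.zeros((STATE_DIM, N_ACTIONS))   # linear Q(s, a) = (s @ W)[a]
GAMMA, LR, EPSILON = 0.9, 0.01, 0.1    # assumed hyperparameters

def q_values(s: np.ndarray) -> np.ndarray:
    return s @ W

def choose_action(s: np.ndarray) -> int:
    # epsilon-greedy action selection over the current Q estimates
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(s)))

def td_update(s, a, reward, s_next):
    """One Q-learning step: form the target r + gamma * max_a' Q(s', a') and
    descend the squared loss (target - Q(s, a))^2 with respect to the weights."""
    target = reward + GAMMA * np.max(q_values(s_next))
    td_error = target - q_values(s)[a]
    W[:, a] += LR * td_error * s   # gradient step on the chosen action's column

# One illustrative transition; the reward is taken as the negative total
# delay, so minimising delay maximises the return.
s = np.array([0.5, 0.8, 0.3])
s_next = np.array([0.4, 0.7, 0.6])
a = choose_action(s)
td_update(s, a, reward=-0.12, s_next=s_next)
```

In the full method this linear approximator would be replaced by the DQN's neural network, with the same state-action-reward-update cycle.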
In a second aspect, a cloud-edge collaboration-based power task offloading apparatus includes:
the environment determining module is used for determining the environment state of the MEC wireless network in the target area;
the function construction module is used for constructing an objective function by taking the minimum total delay time of power task unloading as a target based on the environmental state;
the solving module is used for solving the objective function by using a preset reinforcement learning offloading model to obtain the action strategy corresponding to the minimum of the objective function;
and the execution module is used for executing the action strategy and finishing the unloading of the electric power task.
In a third aspect, an electronic device includes a processor and a memory, where the processor is configured to execute a computer program stored in the memory to implement the above-described cloud-edge collaboration-based power task offloading method.
In a fourth aspect, a computer readable storage medium stores at least one instruction that, when executed by a processor, implements the above-described cloud-edge collaboration-based power task offloading method.
Compared with the prior art, the invention has the following beneficial effects:
according to the cloud edge collaboration-based power task offloading method provided by the invention, a multi-user multi-access edge computing (MEC) wireless network is considered, wherein a plurality of mobile devices can be associated through wireless channels and offload tasks to an MEC server connected with a Base Station (BS) to perform computation. The decision whether the computing task is to be performed locally at the user equipment or offloaded for execution by the MEC server should adapt to the time-varying network dynamics, taking into account the dynamics of the environment, including computational power resources, bandwidth resources and channel conditions, selecting an appropriate edge server computing task, minimizing the computing cost in terms of overall delay in combination with the DQN algorithm.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention. In the drawings:
FIG. 1 is a flow chart of a power task unloading method based on cloud edge collaboration in an embodiment of the invention;
FIG. 2 is a schematic diagram of strategy generation of reinforcement learning offloading model in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a system model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of simulation comparison in an embodiment of the present invention;
FIG. 5 is a block diagram of a power task offloading device based on cloud edge collaboration according to an embodiment of the present invention;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail below with reference to the drawings in connection with embodiments. It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other.
The following detailed description is exemplary and is intended to provide further details of the invention. Unless defined otherwise, all technical terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments in accordance with the invention.
Example 1
As shown in fig. 1 to 4, a cloud-edge collaboration-based power task unloading method includes the following steps:
s1, determining the environment state of the MEC wireless network in the target area.
Specifically, the MEC wireless network in the target area related to the present solution is a wireless network system composed of M base stations (BS), each connected to one MEC server. Consider a group of K users, where each user equipment k ∈ K can connect to any MEC server in range; the system model is shown in fig. 3. The user equipment has limited computational resources and can offload data to the MEC server for processing. In this scheme, at each time t, each user equipment k ∈ K has a computation task to be performed, such as electricity consumption information acquisition, precise load control, mobile inspection, distribution network automation and other services.
More specifically, a task may be computed locally on the user device or offloaded to an MEC server or cloud server. For each user equipment k ∈ K, a task is defined as T_k = {s(d_k), τ_k, z_k}, where s(d_k) is the task data size of user equipment k, in bits, required as computation input; τ_k is the task computation deadline; and z_k is the computation workload or intensity, in CPU cycles required per bit. Furthermore, in this solution all MEC servers belong to the same network operator, so the computation data can be split between MEC servers for cooperative execution. When a divisible task is offloaded to an MEC server, a decision is made, based on the workload and computing resource status of each MEC server, on whether to execute the task on a single MEC server or forward a portion of it to other MEC servers or the remote cloud.
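As an illustrative, non-normative sketch, the task triple T_k = {s(d_k), τ_k, z_k} described above can be carried in a small data structure; the field names are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Task:
    """Computation task T_k = {s(d_k), tau_k, z_k} of user device k."""
    size_bits: float       # s(d_k): input data size in bits
    deadline_s: float      # tau_k: task computation deadline in seconds
    cycles_per_bit: float  # z_k: CPU cycles required per input bit

    def total_cycles(self) -> float:
        # Total CPU demand of the task: s(d_k) * z_k cycles.
        return self.size_bits * self.cycles_per_bit

# Example: a 2 Mbit metering task needing 100 cycles/bit within 50 ms.
task = Task(size_bits=2e6, deadline_s=0.05, cycles_per_bit=100.0)
```

The total-cycle count s(d_k)·z_k is the quantity that the local and MEC computation-delay formulas later divide by the available CPU frequency.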
More specifically, the user equipment may associate with any MEC server to offload data based on radio channel conditions, etc., but the associated MEC server may have limited resources (radio or computing) to provide to the user. If the MEC server does not have sufficient computing resources for a certain user device's offloaded task, it may process part of the task and forward the remainder to other MEC servers or the cloud server for processing. In summary, there are four possible decisions: 1) execute the task locally; 2) execute the task on the MEC server; 3) forward the task to other MEC servers; 4) offload the task to the cloud server.
The advantage of executing tasks locally is reduced latency; however, this decision faces the limitation of local resource availability. On the other hand, offloading tasks to cloud servers is common in practice because of the cloud's computing power, but results in high latency. Offloading tasks to the MEC server is considered the most efficient decision, since it can reduce latency using the computing resources available at the network edge. The switching time, i.e. the connection time between the user and the MEC server, is negligible compared with the transmission time and the computation execution time of the task. The system time is divided into successive time frames of equal length t, on the order of fractions of a second.
MEC server resources are virtualized and shared by multiple users. Resource requirements that cannot be met by one MEC server can be met by any other MEC server. The resource allocation of MEC m to each user equipment k is represented by the function φ_{k,m} = (f_{k,m}, w_{k,m}), where f_{k,m} represents the computing resources allocated within the MEC server and w_{k,m} represents the proportion of bandwidth resources allocated to user equipment k by MEC m.
Specifically, the environmental states involved in this step include: channel condition status, actual available energy of the user equipment, and computing power resources allocable by the MEC server.
S2, constructing an objective function by taking the minimum total delay time of power task unloading as a target based on the environment state.
The server selection variable is defined as x_{k,m} ∈ {0,1}, where x_{k,m} = 1 indicates that user equipment k selects MEC m, and otherwise x_{k,m} = 0. Considering that the radio channel between a user equipment and its associated MEC is a time-varying channel, the channel power gain of the radio link between user equipment k and MEC m is modeled as a random variable g_{k,m}, whose range of values may be divided and quantized into L discrete levels. Each level corresponds to a state of a Markov chain, thereby forming an L-element state space G = {G_0, G_1, …, G_{L-1}}; the channel state realization at time t can be expressed as g_{k,m}(t) ∈ G.
When user equipment k associates with an MEC server for the first time, it comprehensively considers the allocable computing power resources f_{k,m}, bandwidth resources w_{k,m} and channel condition state g_{k,m} of every MEC server in range, and selects the one MEC server with the best combined condition. The utility function U_{k,m} of the combined condition is given by:

$$U_{k,m} = \beta_1\, f_{k,m} + \beta_2\, w_{k,m} + \beta_3\, g_{k,m}$$

wherein β_1, β_2, β_3 are three scale factors.
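A minimal sketch of the combined-condition server selection: each candidate MEC's compute, bandwidth and channel observations are weighted by β_1, β_2, β_3 and the argmax is chosen. The min-max normalisation of each term is an added assumption (the source does not specify how the three quantities are made commensurable), and all input values are hypothetical:

```python
import numpy as np

# Hypothetical per-MEC observations for one user device: allocable compute
# (CPU cycles/s), allocable bandwidth fraction, and channel condition level.
compute   = np.array([2e9, 5e9, 1e9])
bandwidth = np.array([0.2, 0.1, 0.4])
channel   = np.array([0.7, 0.9, 0.3])

def select_mec(compute, bandwidth, channel, betas=(1.0, 1.0, 1.0)):
    """Weighted utility U_m = b1*compute + b2*bandwidth + b3*channel after
    min-max normalising each term, then pick the argmax MEC."""
    def norm(x):
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.ones_like(x)
    b1, b2, b3 = betas
    u = b1 * norm(compute) + b2 * norm(bandwidth) + b3 * norm(channel)
    return int(np.argmax(u)), u

best_mec, utilities = select_mec(compute, bandwidth, channel)
```

Here MEC 1 wins on compute and channel despite the smallest bandwidth share; adjusting the β weights shifts that trade-off.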
The uplink transmission rate of user equipment k associated with MEC m may be given by:

$$r_{k,m} = \log_2\!\left(1 + \frac{P_k\, g_{k,m}}{\sigma_k^2 + I_k}\right)$$

wherein P_k is the transmission power of user equipment k, σ_k^2 is the power of the Gaussian noise at user equipment k, and I_k is the interference from other user equipments k′ sharing the same frequency band as user equipment k. The transmit power of the user equipment is set to be fixed, so for simplicity the interference term I_k can be absorbed into the noise term σ_k^2.
The instantaneous data rate of user equipment k associated with MEC m is given by:

$$R_{k,m} = x_{k,m}\, w_{k,m}\, W_m \log_2\!\left(1 + \frac{P_k\, g_{k,m}}{\sigma_k^2}\right)$$

where K is the set of all user equipments, x_{k,m} is the server selection variable, W_m is the total bandwidth of MEC m, and each user equipment k associated with MEC m is allocated a bandwidth proportion w_{k,m}. In a dense deployment scenario, all base stations share the same bandwidth W due to spectrum resource limitations, i.e. universal spectrum sharing. Furthermore, a server selection for offloading is only accepted when there are enough spectrum resources to meet its needs.
The task offloading decision of user equipment k is denoted α_k, where α_k = 1 means the task is offloaded to the MEC server and α_k = 0 means the task is computed locally.
According to the defined instantaneous data rate, the transmission delay for offloading the task from user equipment k to MEC m is expressed as:

$$d_{k,m}^{t} = \frac{s(d_k)}{R_{k,m}},\quad k \in K_m$$

wherein d_{k,m}^t represents the transmission delay for offloading the task from user equipment k to MEC m, and K_m is the set of user equipments served by MEC m.
When MEC m does not have sufficient resources to meet the user equipment requirements, MEC m forwards the request over the link to another MEC n with sufficient resources.
The proportion of the user equipment's task that is forwarded is expressed as λ_k. Thus, the requests of the user equipment are served by different MEC servers at different delay costs. The forwarding delay d_{m,n}^f between MEC m and MEC n is expressed as follows:

$$d_{m,n}^{f} = \frac{\lambda_k\, s(d_k)}{r_{m,n}}$$

wherein r_{m,n} is the transmission rate of the link between MEC m and MEC n; d_{m,n}^f represents the forwarding delay between MEC m and MEC n; λ_k represents the proportion of the task forwarded.
When there is no MEC server with sufficient computing resources to complete the remaining tasks of user equipment k, MEC m forwards the request to the remote cloud server over the wired backhaul link. The forwarding delay between MEC m and the cloud server is defined as d_{m,c}^f, given by the following equation:

$$d_{m,c}^{f} = \frac{\lambda_k\, s(d_k)}{r_{m,c}}$$

wherein r_{m,c} is the transmission rate between MEC m and the remote cloud; d_{m,c}^f represents the forwarding delay between MEC m and the cloud server.
Suppose that each user equipment k ∈ K has a task T_k that requires the local computing resource f_k^l of device k, with local computation execution time l_k. The computation of task T_k requires CPU energy η_k; the CPU computation power consumption at user device k is expressed as:

$$\eta_k = v\, (f_k^{l})^2\, z_k\, s(d_k)$$

where v is a constant parameter related to the CPU hardware architecture and f_k^l is the local computing resource. In addition to the CPU power consumption, the computation of task T_k also requires an execution time l_k. Thus, the execution delay of task T_k at user equipment k is given by:

$$l_k = \frac{z_k\, s(d_k)}{f_k^{l}}$$
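The local execution delay l_k = z_k·s(d_k)/f_k^l and the frequency-squared CPU energy model η_k = v·(f_k^l)²·z_k·s(d_k) can be sketched as follows. The constant v and all numeric inputs are assumed values for illustration:

```python
def local_exec_delay_s(task_bits, cycles_per_bit, cpu_hz):
    """l_k = z_k * s(d_k) / f_k^l : total cycles over local CPU frequency."""
    return cycles_per_bit * task_bits / cpu_hz

def local_cpu_energy_j(task_bits, cycles_per_bit, cpu_hz, v=1e-27):
    """eta_k = v * (f_k^l)^2 * z_k * s(d_k): energy per cycle scales with the
    square of the CPU frequency; v is a hardware-dependent constant
    (the value 1e-27 here is an assumption)."""
    return v * cpu_hz**2 * cycles_per_bit * task_bits

# 2 Mbit task, 100 cycles/bit, a 1 GHz local CPU.
delay  = local_exec_delay_s(task_bits=2e6, cycles_per_bit=100.0, cpu_hz=1e9)
energy = local_cpu_energy_j(task_bits=2e6, cycles_per_bit=100.0, cpu_hz=1e9)
```

Note the trade-off the model encodes: raising f_k^l shrinks l_k linearly but grows η_k quadratically, which is why limited device energy pushes tasks toward offloading.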
the actual available energy f of the user equipment k in each time slot t k (t) can be modeled as a random variable and divided into discrete levels, with F= { F 0 ,F 1 ,…,F N-1 And N is the number of available energy states.
The total local execution time D_k^l of task T_k on user equipment k is given by:

$$D_k^{l} = w_k + l_k$$

wherein w_k is the waiting time of task T_k until it is executed locally by device k; l_k represents the execution delay of task T_k at user equipment k.
The computing resources of MEC m allocated to user equipment k are denoted f_{k,m} and may be measured in CPU cycles. In the network model, multiple user devices may access the same MEC server at a given time and share the computing resources on the MEC server. Therefore, the allocable computing power f_{k,m} can be modeled as a random variable and divided into discrete levels, with H = {H_0, H_1, …, H_{J-1}}, where J is the number of available computing power states.
In the model, the user device may offload tasks to the MEC server if the computing resources for local processing are limited. If sufficient MEC resources exist, the terminal equipment detects all MEC servers within service range, comprehensively considers their computing power and bandwidth resources, selects the best MEC server, and offloads task T_k to that server for execution.
The available computing resources (i.e., CPU cycle frequency) of MEC m are denoted δ_m, and the total computation allocation must satisfy:

$$\sum_{k \in K_m} f_{k,m} \le \delta_m$$

The computation delay d_{k,m}^c of task T_k on MEC m is given by:

$$d_{k,m}^{c} = \frac{z_k\, s(d_k)}{f_{k,m}}$$

Thus, the total execution time of task T_k offloaded by user equipment k on MEC m is as follows:

$$D_k^{m} = d_{k,m}^{t} + d_{k,m}^{c} \tag{13}$$
However, if D_k^m > τ_k (i.e., MEC m does not have enough computing resources to meet the computation deadline), MEC m forwards a proportion λ_k of task T_k to another MEC n where sufficient resources are available. The execution delay d_{k,n}^c of task T_k on MEC n can be calculated according to:

$$d_{k,n}^{c} = \frac{\lambda_k\, z_k\, s(d_k)}{f_{k,n}}$$

Thus, the total execution time of the task offloaded by user equipment k to MECs m and n becomes:

$$D_k^{m,n} = d_{k,m}^{t} + d_{k,m}^{c} + d_{m,n}^{f} + d_{k,n}^{c}$$
when none of the MECs has enough computational power resources to meet the task computation deadline, i.e
Figure BDA0004067384330000093
The MEC m forwards the task to the cloud server. Therefore, task T with user device k offloading in the cloud k Becomes:
Figure BDA0004067384330000094
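The three offloading cases (MEC m alone, cooperative MECs m and n, and cloud) compose the total delay additively. A sketch follows, with the case selection simplified to which delay terms are supplied rather than the resource-availability tests described in the text; all numbers are hypothetical:

```python
def total_offload_delay(tx, comp_m, deadline, fwd_mn=None, comp_n=None,
                        fwd_mc=None, comp_cloud=None):
    """Total offload-and-compute delay of user k under the three cases:
    MEC m alone, part forwarded to MEC n, or forwarded to the cloud."""
    if fwd_mc is not None:
        # cloud case: d^t_{k,m} + d^f_{m,c} + cloud execution delay
        total = tx + fwd_mc + comp_cloud
    elif fwd_mn is not None:
        # cooperative case: MEC m terms plus forwarding and MEC n execution
        total = tx + comp_m + fwd_mn + comp_n
    else:
        # single-MEC case: transmission plus computation on MEC m
        total = tx + comp_m
    return total, total <= deadline   # flag whether tau_k is met

# Single-MEC offload: 50 ms uplink + 100 ms compute against a 200 ms deadline.
d1, ok1 = total_offload_delay(tx=0.05, comp_m=0.10, deadline=0.2)
# Cooperative offload adds forwarding and MEC-n execution delays.
d2, ok2 = total_offload_delay(tx=0.05, comp_m=0.10, deadline=0.2,
                              fwd_mn=0.02, comp_n=0.08)
```

The deadline flag mirrors the role of τ_k in the text: each additional hop adds delay terms, so forwarding is only worthwhile when MEC m alone cannot meet τ_k.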
In the above description, d_{k,m}^t represents the transmission delay for offloading the task from user equipment k to MEC m; d_{k,m}^c represents the computation delay of task T_k on MEC m; d_{m,n}^f represents the forwarding delay between MEC m and MEC n; d_{k,n}^c represents the execution delay of task T_k on MEC n; d_{m,c}^f represents the forwarding delay between MEC m and the cloud server; d_c^c is the execution delay on the cloud server.
Here D_k^m can be calculated from (13). Furthermore, the total offloading and computation delay of the task offloading of user equipment k is as follows:

$$D_k^{o} = \begin{cases} D_k^{m}, & \text{if } D_k^{m} \le \tau_k \\ D_k^{m,n}, & \text{if } D_k^{m} > \tau_k \text{ and } D_k^{m,n} \le \tau_k \\ D_k^{c}, & \text{otherwise} \end{cases}$$

wherein D_k^m represents the total execution time of task T_k offloaded by user equipment k on MEC m; D_k^{m,n} represents the total execution time of the task offloaded by user equipment k to MECs m and n; D_k^c represents the total execution time of task T_k offloaded by user equipment k in the cloud; τ_k represents the task computation deadline.
The scheme treats the joint server selection, resource allocation, computation task offloading and handover problems as a deep reinforcement learning process. The agent is responsible for collecting state from each user equipment and MEC server, and then aggregates all of this information into one constructed system state. After generating the best policies for server selection, resource allocation and task offloading, the agent sends a notification informing the user. The computational cost in terms of total latency is taken to be the total amount of time required to complete a computation task T_k (including the offloading delay). To minimize the computational delay cost, the total delay D(x, α, ω, P) is used for tasks offloaded to an MEC server or the remote cloud, so the objective function constructed in this application is as follows:
min_{x,α,ω,P} Σ_k [ (1 − α_k) D_k^{loc} + α_k D_k^{off} ]    (17)

where α_k represents the task offloading decision of user equipment k, α_k = 1 means that the task is offloaded to the MEC server, and α_k = 0 means that the task is computed locally; D_k^{loc} represents the total local execution time of task T_k on user equipment k; D_k^{off} represents the total offloading and computation delay of the task offloading of user equipment k.
The joint server selection, collaborative offloading and handover problem is formulated as follows:

min_{x,α,ω,P} Σ_k [ (1 − α_k) D_k^{loc} + α_k D_k^{off} ]    (18)
s.t. Σ_k x_{k,m} f_{k,m} ≤ F_m, ∀m    (18a)
Σ_m x_{k,m} ≤ 1, ∀k    (18b)
Σ_k x_{k,m} ω_{k,m} ≤ W_m, ∀m    (18c)

Constraint (18a) ensures that the total computing resource allocation of each MEC server does not exceed the maximum computing power F_m of that MEC server. Constraint (18b) ensures that each user equipment can be associated with at most one MEC at a time. Constraint (18c) ensures that the sum of the spectrum allocations of all user equipments is less than or equal to the total available spectrum W_m of each MEC m. If the user equipment cannot reach any MEC at time t, i.e. Σ_m x_{k,m} = 0, it will compute its task locally, i.e. α_k = 0.
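The three feasibility constraints can be sketched as a checker over a candidate association. The names (`x[k][m]` association indicator, `f`/`w` per-pair compute and spectrum allocations, `F`/`W` per-MEC capacities) are illustrative assumptions, not the patent's notation:

```python
def feasible(x, f, w, F, W):
    """Check constraints: per-MEC compute (18a), per-MEC spectrum (18c),
    and single association per UE (18b). x, f, w are K x M lists;
    F, W are length-M capacity lists."""
    K, M = len(x), len(F)
    for m in range(M):
        if sum(x[k][m] * f[k][m] for k in range(K)) > F[m]:   # (18a)
            return False
        if sum(x[k][m] * w[k][m] for k in range(K)) > W[m]:   # (18c)
            return False
    return all(sum(row) <= 1 for row in x)                    # (18b)

ok = feasible([[1, 0], [0, 1]],
              [[1.0, 0.0], [0.0, 2.0]],
              [[10, 0], [0, 20]],
              F=[2.0, 2.0], W=[25, 25])
```

A candidate that over-commits any MEC's compute or spectrum, or associates one UE with two MECs, is rejected.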
To solve the problem: once the server selection x and the offloading decision α are given, problem (18) can be restated as a convex problem (19) over the remaining resource allocation variables, where φ_k is a weight, proportional to the data z_k s(d_k) of each user equipment k, scaled by a constant scale factor.
And S3, solving the objective function by using a preset reinforcement learning unloading model to obtain an action strategy corresponding to the minimum objective function.
In the reinforcement learning method, the agent learns the optimal strategy by interacting with the environment. Such an optimal strategy is achieved by the agent recursively correcting its errors after each interaction. The time-varying channel conditions between the mobile device and the MEC server, the available energy of the mobile device, the computational effort, and the computational power of the different MEC servers are all dynamically changing. The goal is then to determine whether the mobile user offloads its computing tasks to the MEC servers or executes locally, and based on the current state, which MEC servers and their resources are best suited to service the user's computing task requests. Formally, each action taken by an agent is defined by a probability and is associated with an agent's policy. When an agent interacts with the environment based on some action, it may be rewarded and change its state. Thus, the goal of each agent is to implement an optimal strategy by which the total reward can be maximized.
In the framework, the agent learns its optimal control strategy, thereby minimizing the computational cost in terms of total delay, as defined in (17).
As an optional implementation manner of the invention, the training mode of the reinforcement learning unloading model specifically comprises the following steps:
(1) Acquiring the channel condition state of a wireless network in a target area, the actual available energy of user equipment and the assignable computing power resource of an MEC server to construct a state vector for obtaining the environment state;
Specifically, the network state of user equipment k at time slot t can be described by the state G_k(t) of the random channel variable g_k, the state F_k(t) of the random variable f_k (the allocatable computing power), and the state E_k(t) of the random variable e_k (the available energy). Thus, the state vector can be described as χ_k(t) = (G_k(t), F_k(t), E_k(t)).
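The three observed quantities named in step (1) — channel condition, actual available energy of the UE, and allocatable MEC computing power — can be packed into a flat state vector for the agent. The field names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class UEState:
    channel_gain: float   # channel condition state of the wireless network
    energy: float         # actual available energy of user equipment k
    mec_cpu: tuple        # allocatable computing power of each MEC server

    def as_vector(self):
        """Flatten the observation into the state vector fed to the agent."""
        return (self.channel_gain, self.energy, *self.mec_cpu)

s = UEState(channel_gain=0.8, energy=5.0, mec_cpu=(2.4, 2.6))
```

Each decision slot t produces one such vector per user equipment, and the agent aggregates them into the system state.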
(2) Generating corresponding actions according to the constructed state vector of the environmental state and the reinforcement learning strategy;
Specifically, at the beginning of time slot t, the agent decides the action y_k(t) of user equipment k according to the fixed control strategy Φ, i.e. y_k(t) = Φ(χ_k(t)).
(3) Formulating a control strategy according to a state vector and actions of the environmental state; the control strategy comprises a fixed task unloading strategy and a server selection strategy;
Specifically, the joint control action is y(t) = (Φ^(α)(χ(t)), Φ^(x)(χ(t))), determined according to Φ after observing the network state χ_k(t) at the beginning of each decision slot t, where Φ^(α) and Φ^(x) are the fixed task offloading policy and the server selection policy, respectively.
(4) Executing a control strategy, and calculating the obtained rewards according to feedback of the executed environment;
Specifically, after taking an action, the agent receives a reward or penalty from the environment; the negative reward is regarded as the estimated total aggregate delay cost. If a task does not meet its computation deadline, the task is considered unsuccessful and a penalty value p is added to the reward, where p is updated according to the task's deadline value.
(5) Calculating based on the environmental state, the action and the obtained rewards to obtain an updated Q function;
Specifically, in the Q-learning method, the Q function is updated at the beginning of each duration. Based on the network state χ(t), the action y(t), the received computation cost D(χ(t), y(t)), and the resulting network state χ(t+1) of the next duration t+1, the agent updates the Q function as follows:

Q(χ(t), y(t)) ← Q(χ(t), y(t)) + δ_t [ D(χ(t), y(t)) + γ min_{y'} Q(χ(t+1), y') − Q(χ(t), y(t)) ]    (20)

where γ is the discount coefficient and δ_t ∈ [0,1] is a time-varying learning rate.
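Because the "reward" here is a delay cost to be minimized, the tabular Q-learning step takes a minimum over next-state actions. A minimal sketch with illustrative state/action encodings:

```python
from collections import defaultdict

def q_update(Q, state, action, cost, next_state, actions, lr=0.1, gamma=0.9):
    """One Q-learning step for a cost-minimising agent:
    Q(s,a) <- Q(s,a) + lr * (cost + gamma * min_a' Q(s',a') - Q(s,a))."""
    target = cost + gamma * min(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += lr * (target - Q[(state, action)])
    return Q[(state, action)]

Q = defaultdict(float)  # unseen (state, action) pairs start at zero
q_update(Q, "s0", "offload", cost=0.3, next_state="s1",
         actions=("local", "offload"))
```

With all entries initialized to zero, the first update moves Q("s0", "offload") to lr × cost = 0.03.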
More specifically, to mitigate the cost of tabular Q-learning, a Deep Q Network (DQN) is employed to estimate the Q function online. In the DQN, Q(χ, y) is approximated by Q(χ, y, θ), i.e. Q(χ, y) ≈ Q(χ, y, θ), where θ is the weight set of the DQN.
(6) According to the updated Q function, a loss function is set, the gradient is calculated, and the model with the minimum loss function is taken as the final reinforcement learning offloading model:

L(θ) = E[ ( D(χ(t), y(t)) + γ min_{y'} Q(χ(t+1), y'; θ⁻) − Q(χ(t), y(t); θ) )² ]

Specifically, a transition is a tuple consisting of four parts: the current state χ(t), the current action y(t), the current reward D(χ(t), y(t)) and the next state χ(t+1). Denoting ζ(t) = (χ(t), y(t), D(χ(t), y(t)), χ(t+1)), the transition is stored in an experience replay pool of finite size U at the end of each duration t.
More specifically, let ψ(t) = {ζ(t−U+1), …, ζ(t)} denote the memory pool. At each time t the agent samples experience at random, i.e. draws a mini-batch of S transitions from ψ(t), to train the DQN in the direction that minimizes the loss function and to compute its gradient.
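The finite-size pool ψ(t) and the mini-batch sampling can be sketched with a bounded deque. The names (`ReplayPool`, `capacity_U`, `batch_S`) are illustrative:

```python
import random
from collections import deque

class ReplayPool:
    """Finite pool of transitions zeta = (state, action, cost, next_state)."""
    def __init__(self, capacity_U):
        self.pool = deque(maxlen=capacity_U)  # oldest transitions drop out

    def store(self, transition):
        self.pool.append(transition)

    def sample(self, batch_S):
        """Uniformly sample a mini-batch of S transitions for one DQN step."""
        return random.sample(list(self.pool), min(batch_S, len(self.pool)))

pool = ReplayPool(capacity_U=100)
for t in range(150):                 # overflow exercises the maxlen eviction
    pool.store((t, "offload", 0.1, t + 1))
```

Because the deque is bounded by U, only the most recent U transitions remain available for sampling, matching the sliding window ψ(t).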
The DQN is provided with a separate target set of parameters θ⁻(t) for evaluating the action value. Specifically, the target term min_{y'} Q(χ(t+1), y'; θ⁻(t)) is calculated based on this separate target DQN, while Q(χ, y; θ(t)) is calculated based on the DQN in training. The target DQN is a copy of an earlier DQN; in the next duration, the weight set of the target DQN, θ⁻(t+1), is updated to the weight set of the DQN at duration t, i.e. θ(t).
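The periodic copy of the training weights θ into the frozen target weights θ⁻ can be sketched without any deep learning framework, treating the weight sets as plain dicts; the gradient step is a stand-in, and all names are illustrative:

```python
def train_step(q_weights, target_weights, step, sync_every=1):
    """Placeholder gradient step followed by the target-DQN sync rule:
    every `sync_every` durations, theta_minus <- theta."""
    q_weights = {k: v - 0.01 for k, v in q_weights.items()}  # stand-in update
    if step % sync_every == 0:
        target_weights = dict(q_weights)  # copy, not alias: frozen snapshot
    return q_weights, target_weights

theta = {"w1": 1.0}
theta_minus = dict(theta)
theta, theta_minus = train_step(theta, theta_minus, step=1)
```

The copy (rather than an alias) is what keeps the target term stable between syncs; between sync points only θ moves while θ⁻ stays fixed.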
And S4, executing an action strategy to finish the unloading of the power task.
In order to test the performance of the edge-computing-based server selection and cooperative offloading method provided by the embodiment of the invention, three simulation schemes are designed for comparison. Server calculation: the tasks of all user equipments are offloaded to the MEC server or the cloud server. Local calculation: the tasks of all user equipments are executed locally on the user equipment. Random calculation: the tasks of all user equipments are randomly either executed locally on the user equipment or offloaded to the MEC server and the cloud server. The specific simulation process is as follows:
First, the system model is initialized. A wireless network consisting of 5 base stations (BS) is considered, each connected to an MEC server, together with a set of 10 user equipments, each of which can connect to any MEC server within range. At each instant t, there is a computation task on each user equipment that needs to be calculated.
In the process of selecting an edge server, the computing power resources, bandwidth resources and channel state conditions that the edge server can allocate are comprehensively considered. The user equipment decides according to whether its available energy and available computing power resources can meet the task deadline requirement; if not, the task is offloaded to an edge server for execution. The edge server then decides, according to its own computing power resources, whether to cooperate with other edge servers or with the cloud server so as to minimize the time used. The system model is shown in fig. 3.
In order to check the performance of the method for selecting cooperative unloading by the server based on the edge calculation, which is proposed by the invention, the method is simulated and compared with the method based on the server calculation, the local calculation and the random calculation under the same environment.
The system simulation environment parameters are set as follows:
1) A wireless network of 5 base stations (BS) and 10 user equipments; each user equipment can be connected to any MEC server within range, and each user equipment has a computation task to be calculated;
2) The task data of the user equipment are randomly generated within the range of 1MB-10MB, the task calculation deadline is randomly generated within the range of 0.1s-1s, and the working intensity of the task is uniformly distributed within the range from 452.5cycles/bit to 737.5 cycles/bit;
3) For each user equipment, the computing resources are randomly generated in the range from 0.5 GHz to 1.0 GHz, the local computing energy per cycle of each user equipment is set equal to 10^(-8) J, and the available energy of the user equipment is uniformly distributed in the range of 1 joule/sec to 10 joules/sec;
4) The computing resources of the MEC server are assumed to be in the range of 2.3 GHz to 2.8 GHz (24 cores / 48 hyper-threads);
5) The transmitting power of the terminal equipment is 1W;
6) All MECs share the same bandwidth and are randomly generated from within 25-32 MHz;
7) The transmission rate between the edge servers is randomly generated within the range of 20-25 Mbps;
8) The transmission rate between the cloud server and the edge server is randomly generated in the range of 50-120 Mbps.
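The parameter ranges listed in 1)-8) can be instantiated as a random scenario generator. This is a sketch of the environment setup only; the variable names are illustrative and the units follow the list above:

```python
import random

def make_scenario(seed=0, n_bs=5, n_ue=10):
    """Draw one simulation scenario from the parameter ranges in the text."""
    rng = random.Random(seed)
    ues = [{
        "data_MB": rng.uniform(1, 10),            # task data, 1-10 MB
        "deadline_s": rng.uniform(0.1, 1.0),      # task computation deadline
        "intensity": rng.uniform(452.5, 737.5),   # work intensity, cycles/bit
        "cpu_GHz": rng.uniform(0.5, 1.0),         # local computing resource
        "energy_Jps": rng.uniform(1, 10),         # available energy, J/s
        "tx_power_W": 1.0,                        # terminal transmit power
    } for _ in range(n_ue)]
    mecs = [{
        "cpu_GHz": rng.uniform(2.3, 2.8),         # MEC computing resource
        "bw_MHz": rng.uniform(25, 32),            # shared bandwidth
        "mec_link_Mbps": rng.uniform(20, 25),     # inter-MEC transmission rate
        "cloud_link_Mbps": rng.uniform(50, 120),  # MEC-cloud transmission rate
    } for _ in range(n_bs)]
    return ues, mecs

ues, mecs = make_scenario()
```

Fixing the seed makes a run reproducible, which is useful when comparing the four schemes under identical task arrivals.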
The simulation results are shown in fig. 4. It can be seen from fig. 4 that after 2000 episodes the delay of the method of the present invention is smaller than that of the other three comparison methods.
From the simulation results it can be concluded that the deep Q learning method learns a task scheduling strategy, and the edge-computing-based server selection and cooperative offloading method performs joint server selection and cooperative offloading in the multi-access edge wireless network. The numerical results show that the DRL-based algorithm outperforms the other calculation methods in terms of delay and can improve system performance.
Example 2
As shown in fig. 5, a cloud-edge collaboration-based power task offloading device includes:
the environment determining module is used for determining the environment state of the MEC wireless network in the target area;
the function construction module is used for constructing an objective function by taking the minimum total delay time of power task unloading as a target based on the environmental state;
the solution module is used for solving the objective function by using a preset reinforcement learning unloading model to obtain an action strategy corresponding to the minimum of the objective function;
and the execution module is used for executing the action strategy and completing the unloading of the power task.
Example 3
As shown in fig. 6, the present invention further provides an electronic device 100 for implementing a cloud-edge collaboration-based power task offloading method; the electronic device 100 comprises a memory 101, at least one processor 102, a computer program 103 stored in the memory 101 and executable on the at least one processor 102, and at least one communication bus 104. The memory 101 may be used to store a computer program 103, and the processor 102 implements a cloud-edge collaboration-based power task offloading method step of embodiment 1 by running or executing the computer program stored in the memory 101 and invoking data stored in the memory 101.
The memory 101 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like; the storage data area may store data (such as audio data) created according to the use of the electronic device 100, and the like. In addition, the memory 101 may include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card), at least one disk storage device, a Flash memory device, or other non-volatile solid state storage device.
The at least one processor 102 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. The processor 102 may be a microprocessor or the processor 102 may be any conventional processor or the like, the processor 102 being a control center of the electronic device 100, the various interfaces and lines being utilized to connect various portions of the overall electronic device 100.
The memory 101 in the electronic device 100 stores a plurality of instructions to implement a cloud-edge collaboration-based power task offloading method, and the processor 102 may execute the plurality of instructions to implement:
determining the environmental state of the MEC wireless network in the target area;
based on the environment state, constructing an objective function by taking the minimum total delay time of power task unloading as a target;
solving the objective function by using a preset reinforcement learning unloading model to obtain an action strategy corresponding to the minimum of the objective function;
and executing an action strategy to finish the unloading of the power task.
Example 4
The modules/units integrated with the electronic device 100 may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as a stand alone product. Based on such understanding, the present invention may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the steps of each method embodiment described above may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, executable files or in some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, and a Read-Only Memory (ROM).
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In the description of the present specification, the descriptions of the terms "one embodiment," "example," "specific example," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Finally, it should be noted that: the above embodiments are only for illustrating the technical aspects of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the invention without departing from the spirit and scope of the invention, which is intended to be covered by the claims.

Claims (10)

1. The electric power task unloading method based on cloud edge cooperation is characterized by comprising the following steps of:
determining the environmental state of the MEC wireless network in the target area;
constructing an objective function by taking the minimum total delay time of power task unloading as a target based on the environmental state;
solving the objective function by using a preset reinforcement learning unloading model to obtain an action strategy corresponding to the minimum objective function;
and executing the action strategy to finish the unloading of the power task.
2. The cloud edge collaboration-based power task offloading method of claim 1, wherein in the step of determining an environmental state of the MEC wireless network in the target area, the environmental state specifically includes: channel condition status, actual available energy of the user equipment, and computing power resources allocable by the MEC server.
3. The cloud edge collaboration-based power task offloading method as claimed in claim 1, wherein in the step of constructing an objective function with the minimum total power task offloading delay time as the target, the objective function is as follows:

min_{x,α,ω,P} Σ_k [ (1 − α_k) D_k^{loc} + α_k D_k^{off} ]

wherein α_k represents the task offloading decision of user equipment k, α_k = 1 means that the task is offloaded to the MEC server, and α_k = 0 means that the task is computed locally; D_k^{loc} represents the total local execution time of task T_k on user equipment k; D_k^{off} represents the total offloading and computation delay of the task offloading of user equipment k.
4. The cloud edge collaboration-based power task offloading method of claim 3, wherein the total offloading and computation delay D_k^{off} of the task offloading of user equipment k is calculated by the following formula:

D_k^{off} = T_k^{m} if T_k^{m} ≤ τ_k; T_k^{m,n} if T_k^{m} > τ_k and T_k^{m,n} ≤ τ_k; T_k^{cloud} otherwise

wherein T_k^{m} represents the total execution time of task T_k offloaded by user equipment k on MEC m; T_k^{m,n} represents the total execution time of the task offloaded by user equipment k to MECs m and n; T_k^{cloud} represents the total execution time of task T_k offloaded by user equipment k in the cloud; τ_k represents the task computation deadline.
5. The cloud edge collaboration-based power task offloading method of claim 4, wherein the total execution time T_k^{m} of task T_k offloaded by user equipment k on MEC m is as follows:

T_k^{m} = t_{k,m}^{tx} + t_{k,m}^{exe}

the total execution time of the task offloaded by user equipment k to MECs m and n is:

T_k^{m,n} = t_{k,m}^{tx} + t_{m,n}^{fwd} + t_{k,n}^{exe}

the total execution time of task T_k offloaded by user equipment k in the cloud is:

T_k^{cloud} = t_{k,m}^{tx} + t_{m,c}^{fwd} + t_c^{exe}

In the above, t_{k,m}^{tx} represents the transmission delay for offloading the task from user equipment k to MEC m; t_{k,m}^{exe} represents the computation delay of task T_k on MEC m; t_{m,n}^{fwd} represents the forwarding delay between MEC m and MEC n; t_{k,n}^{exe} represents the execution delay of task T_k on MEC n; t_{m,c}^{fwd} represents the forwarding delay between MEC m and the cloud server; t_c^{exe} is the execution delay on the cloud server.
6. The cloud edge collaboration-based power task offloading method of claim 3, wherein the total local execution time D_k^{loc} of task T_k on user equipment k is expressed as:

D_k^{loc} = W_k + L_k

wherein W_k is the average waiting latency of task T_k; L_k represents the execution delay of task T_k at user equipment k.
7. The cloud edge collaboration-based power task offloading method of claim 1, wherein in the step of solving the objective function using a pre-set reinforcement learning offloading model, a training manner of the reinforcement learning offloading model specifically includes the steps of:
acquiring the channel condition state of a wireless network in a target area, the actual available energy of user equipment and the assignable computing power resource of an MEC server to construct a state vector for obtaining the environment state;
generating corresponding actions according to the constructed state vector of the environmental state and the reinforcement learning strategy;
formulating a control strategy according to the state vector of the environmental state and the action; wherein the control strategy comprises a fixed task offloading and server selection strategy;
executing the control strategy, and calculating the obtained rewards according to feedback of the executed environment;
calculating based on the environmental state, the action and the obtained rewards to obtain an updated Q function;
according to the updated Q function, a loss function is set, gradient is calculated, and a model with the minimum loss function is taken as a final reinforcement learning unloading model.
8. Electric power task uninstallation device based on cloud limit is cooperated, characterized in that includes:
the environment determining module is used for determining the environment state of the MEC wireless network in the target area;
the function construction module is used for constructing an objective function by taking the minimum total delay time of power task unloading as a target based on the environmental state;
the solving module is used for solving the objective function by using a preset reinforcement learning unloading model to obtain an action strategy corresponding to the minimum of the objective function;
and the execution module is used for executing the action strategy and finishing the unloading of the electric power task.
9. An electronic device comprising a processor and a memory, the processor configured to execute a computer program stored in the memory to implement the cloud-edge collaboration-based power task offload method of any of claims 1-7.
10. A computer readable storage medium storing at least one instruction that when executed by a processor implements a cloud-edge collaboration-based power task offload method as claimed in any one of claims 1 to 7.
CN202310080643.0A 2023-01-17 2023-01-17 Cloud edge cooperation-based power task unloading method, device, equipment and medium Pending CN116390160A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310080643.0A CN116390160A (en) 2023-01-17 2023-01-17 Cloud edge cooperation-based power task unloading method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN116390160A 2023-07-04


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116647880A (en) * 2023-07-26 2023-08-25 国网冀北电力有限公司 Base station cooperation edge computing and unloading method and device for differentiated power service
CN116647880B (en) * 2023-07-26 2023-10-13 国网冀北电力有限公司 Base station cooperation edge computing and unloading method and device for differentiated power service


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination