CN113950066B - Single-server partial computation offloading method, system and device in a mobile edge environment - Google Patents


Info

Publication number
CN113950066B
Authority
CN
China
Legal status: Active
Application number
CN202111060966.0A
Other languages
Chinese (zh)
Other versions
CN113950066A (en)
Inventor
安玲玲
张星雨
廖鹏
单颖欣
岳佳豪
马晓亮
王泉
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202111060966.0A priority Critical patent/CN113950066B/en
Publication of CN113950066A publication Critical patent/CN113950066A/en
Application granted granted Critical
Publication of CN113950066B publication Critical patent/CN113950066B/en

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04W — WIRELESS COMMUNICATION NETWORKS
    • H04W16/00 Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
    • H04W16/22 Traffic simulation tools or models
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/08 Load balancing or load distribution
    • H04W28/09 Management thereof
    • H04W28/0925 Management thereof using policies
    • H04W28/16 Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • H04W28/18 Negotiating wireless communication parameters
    • H04W28/20 Negotiating bandwidth
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks


Abstract

The invention belongs to the technical field of task offloading and resource allocation in mobile edge computing, and discloses a method, system and device for partial computation offloading on a single server in a mobile edge environment. The method comprises: building a scenario with a single base station, multiple user terminals and a single MEC server, designing a network communication model, and sending each user's task input data to the MEC server over the base-station uplink; building a task execution module in which local processing and edge offloading run in parallel; formulating the total cost of the MEC system in the single-cell multi-mobile-user scenario from network communication delay and task execution energy consumption; and jointly optimizing delay and energy consumption with a deep deterministic policy gradient algorithm. The invention can dynamically make a reasonable task offloading decision and allocate bandwidth and computing resources to each user in each time slice, effectively reducing the weighted cost of total task-processing delay and total mobile-device energy consumption.

Description

Single-server partial computation offloading method, system and device in a mobile edge environment
Technical Field
The invention belongs to the technical field of task offloading and resource allocation in mobile edge computing, and particularly relates to a method, system and device for partial computation offloading on a single server in a mobile edge environment.
Background
With the continued spread of Internet-of-Things technology and the rapid development of mobile communications, a large number of smart mobile devices have emerged. Massive numbers of smart devices and applications generate many computation-intensive tasks, data traffic grows explosively, and higher demands are placed on the data-computing capability and battery endurance of mobile terminal devices. Mobile Edge Computing (MEC) enables a terminal device to complete computing tasks with low delay and low energy consumption: a mobile user offloads tasks to an edge base station for processing, and the computing power and energy reserves of the edge server effectively reduce the delay and energy consumption of the local device. The basic idea of edge computing is to offload the computing tasks generated on a mobile device to the network edge rather than to the cloud, meeting the low-delay requirements of computation-intensive applications such as real-time online games and augmented reality. Different task offloading schemes have a large impact on task completion delay and mobile-device energy consumption. How to make the most reasonable task offloading decision and resource allocation scheme according to the terminal device and its surrounding environment is therefore a difficult open problem in computation offloading research, and many task offloading strategies based on different algorithms and optimization objectives have been proposed. According to their optimization targets, computation offloading strategies can be divided into the following three types:
the first category of strategies is mainly to reduce task processing delays. Bi and Zhang et al consider a multi-user MEC system supported by Wireless power transmission in "Computing Rate multiplication for Wireless Powered Mobile-Edge Computing with Binary Computing Offloading" (IEEE Transactions on Wireless Communications,2018,17 (6): 4177-4190), study a Binary Computing Offloading strategy in a multi-user MEC network, and solve the problems of Computing Rate weighting and Maximization of all Wireless devices in each time range by using an alternating direction multiplier algorithm, which has the disadvantages: when computing offloading is performed, energy consumption at one end of the mobile terminal device cannot be considered, the terminal device may have a situation that an offloading policy cannot operate normally due to insufficient electric energy, the mobile device is limited in terms of computing resources, and data is generally processed at the expense of high latency and high device energy consumption.
The second type of strategy mainly aims to reduce device energy consumption. In "Multi-User Offloading Game Strategy in OFDMA Mobile Cloud Computing System" (IEEE Transactions on Vehicular Technology, 2019, 68(12): 12190-12201), Kuang and Shi et al. propose an offloading game mechanism that meets the energy-saving requirement by offloading tasks for each device in an orthogonal frequency-division multiple-access communication system using a game-theoretic approach. Its shortcoming: mobile devices must handle more and more compute-intensive tasks, such as high-definition live video and face recognition, but due to limited computing resources and battery endurance they may not be able to process all tasks locally with low latency.
The third type of strategy trades off delay against energy consumption. Considering a system architecture with multiple access points, an edge server and mobile devices, and an application scenario in which a task can either run on the local device or be offloaded to a server or a remote cloud, an offloading decision and resource management scheme based on the SARSA algorithm in reinforcement learning has been designed to reduce the system's cost in energy consumption and computation delay. Its disadvantage: in an actual offloading process, different systems may have performance requirements beyond delay and energy consumption alone, and making the most reasonable task offloading decision and resource allocation scheme according to the terminal device and its environment remains a difficult open problem in computation offloading research.
Through the above analysis, the problems and defects of the prior art are as follows:
(1) Existing computation offloading strategies aimed at reducing task-processing delay do not account for the energy consumption on the mobile-terminal side during offloading, so the total task execution delay of the model is longer and the total energy consumption of the mobile devices is higher.
(2) Existing computation offloading strategies aimed at reducing device energy consumption ignore that users would rather minimize, or at least balance, the combined time and energy consumption of the system so as to reduce its overall cost.
(3) In existing strategies that trade off delay against energy consumption, different systems may have further performance requirements during actual offloading beyond delay and energy consumption alone.
The difficulty in solving the above problems and defects: task data transmission requires sufficient wireless network bandwidth, and the edge server's computing resources are also limited, so making the most reasonable task offloading decision and resource allocation scheme according to the terminal device and its environment is a pressing open problem in computation offloading research. Limited device resources conflict with high-performance task-processing demands, and as the scale of task offloading keeps growing, the power consumed in executing tasks rises sharply, seriously affecting the benefit of the MEC system.
The significance of solving these problems and defects: in the more complex setting of a single MEC server serving multiple mobile devices, a partial computation offloading scheme that takes the total task-execution delay and the total mobile-device energy consumption as a joint optimization objective is urgently needed.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a method, system and device for partial computation offloading on a single server in a mobile edge environment.
The invention is realized as follows. A single-server partial computation offloading method in a mobile edge environment comprises the following steps:
Step one, construct an application scenario with a single base station, multiple user terminals and a single MEC server, supporting offloading decisions for mobile-terminal tasks;
Step two, design a network communication model and send each user's task input data to the MEC server over the base-station uplink, so that network bandwidth resources are allocated reasonably;
Step three, build a task execution module in which local processing and edge offloading run in parallel, improving the responsiveness of the application service;
Step four, model the objective from network communication delay and task execution energy consumption, formulating the total cost of the MEC system in the single-base-station multi-user scenario;
Step five, dynamically optimize the task offloading and resource allocation strategy with the deep deterministic policy gradient method, achieving overall performance optimization over delay and energy consumption.
Further, in step one, constructing the application scenario of a single MEC server with multiple end users includes the following steps:
1) Model a single-MEC scenario with multiple mobile users in a cell: build a system network architecture consisting of a single base station, multiple user terminals and one shared high-performance MEC server, and denote the set of mobile devices as:
N = {1, 2, ...}
Each user device n generates a divisible, computation-intensive task:
$A_n = (D_n, L_n, T_n^{\max})$
where $D_n$ is the size of the data uploaded for task $A_n$, $L_n$ is the amount of computing resources required to execute computing task $A_n$, and $T_n^{\max}$ is the maximum tolerable delay for processing task $A_n$;
2) Compute the offloading decision vector. The fraction of task $A_n$ on user device n that is offloaded to the MEC server is denoted $x_n \in [0, 1]$, $n \in N$, so the fraction executed on the local device n is $1 - x_n$. The locally executed subtask and the part offloaded to the MEC server run in parallel, which reduces the total task-processing delay and shortens the service response time;
the final computation offloading decision vector for all user tasks can be expressed as:
$x = \{x_1, x_2, \ldots, x_{|N|}\}$;
3) Set up a buffer queue of tasks awaiting execution. Suppose the time of the whole MEC system is divided into slices of length $\tau_0$. In view of the limited resources of mobile users, each user terminal n is given a first-in-first-out buffer queue $I_n$ of tasks awaiting execution, with $K_n$ the total amount of computation still to be executed in $I_n$. At the start of each new time slice t+1, the backlog $K_n$ of mobile terminal n is dynamically updated as:
$K_n(t+1) = \max\{K_n(t) + (1 - x_n(t))L_n(t) - C_n^{loc}(t),\ 0\}$
where $K_n(t+1)$ is the amount of work waiting on terminal n's buffer queue in the (t+1)-th time slice, $x_n(t)$ is the local-versus-edge offloading decision variable for task $A_n(t)$ on the nth user terminal in slice t, $L_n(t)$ is the amount of computing resources required to execute $A_n(t)$, and $C_n^{loc}(t)$ is the amount of work mobile user n executes locally during slice t;
4) Address data transmission and computation at the server side during task offloading: each mobile user's task information is first forwarded to the MEC server through the base station, and the MEC server then allocates the corresponding computing resources to execute the task; the model must therefore also consider a real-time allocation scheme for bandwidth and computing resources.
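The task model and backlog update of steps 1)-3) can be sketched as follows. This is an illustrative Python sketch: the field names and the clamp-at-zero form of the queue update are our reading of the description above, not code from the patent.

```python
from dataclasses import dataclass

@dataclass
class Task:
    data_bits: float    # D_n: amount of data uploaded for the task
    cpu_cycles: float   # L_n: computing resources needed to execute it
    deadline: float     # maximum tolerable delay for the task

def queue_update(K_t, x_t, L_t, local_done):
    """Backlog update K_n(t+1): carry over the old backlog, add the locally
    kept share (1 - x_n(t)) * L_n(t) of the new task, subtract the work
    finished locally this slice, and clamp at zero."""
    return max(K_t + (1.0 - x_t) * L_t - local_done, 0.0)

task = Task(data_bits=2e6, cpu_cycles=1e9, deadline=0.5)
# 60% of the new task is offloaded; 4e8 cycles were finished locally
K_next = queue_update(K_t=5e8, x_t=0.6, L_t=task.cpu_cycles, local_done=4e8)
```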
Further, in step two, the design of the network communication model includes the following steps:
Define the uplink network bandwidth of the base station as a fixed value W. All mobile users in the cell share this bandwidth, and in each time slice the system makes a reasonable bandwidth allocation decision $B = \{B_1, B_2, \ldots, B_{|N|}\}$, where $B_n \in [0, 1]$ is the proportion of bandwidth the base station allocates to mobile user n in slice t, subject to
$\sum_{n \in N} B_n \le 1$
Then, by the Shannon formula, when multiple terminals in the cell simultaneously offload tasks to the MEC server, the uplink task transmission rate $r_n$ between user n and the server can be expressed as:
$r_n = B_n W \log_2\left(1 + \frac{P_n g_n}{N_0 B_n W}\right)$
where $P_n$ is the transmission power of the nth user device, $g_n$ is the gain of the wireless transmission channel between user n and the base station in slice t, and $N_0$ is the power spectral density of the white Gaussian channel noise.
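The Shannon-rate computation above reduces to a few lines; a minimal sketch with illustrative parameter values (the numbers and names are ours, not the patent's):

```python
import math

def uplink_rate(B_n, W, P_n, g_n, N0):
    """Uplink rate r_n = B_n*W * log2(1 + P_n*g_n / (N0*B_n*W))."""
    bw = B_n * W                 # bandwidth share actually granted (Hz)
    snr = P_n * g_n / (N0 * bw)  # received signal-to-noise ratio
    return bw * math.log2(1.0 + snr)

# e.g. a 30% share of a 20 MHz uplink at 0.1 W transmit power
r = uplink_rate(B_n=0.3, W=20e6, P_n=0.1, g_n=1e-6, N0=1e-13)
```

As expected, the rate grows with the channel gain $g_n$, so a user with a better channel drains its upload queue faster for the same bandwidth share.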
Further, in step three, the construction of the parallel task execution model includes the following steps:
Classify the computation offloading model by the task offloading decision $x = \{x_1, x_2, \ldots, x_{|N|}\}$: each mobile terminal can process part of its task locally and offload the rest to the MEC server for execution. The computation offloading model of a task is thus divided into a local execution model and an edge offloading model, executed simultaneously in parallel, which reduces task-processing delay and shortens service feedback time;
Local execution model: the mobile device processes the task data and obtains the execution result with its own computing power, involving two kinds of overhead, the local execution delay $T_n^{loc}$ and the device execution energy $E_n^{loc}$. Define the local CPU frequency of the nth mobile terminal as $f_n^{loc}$; the computation delay of the locally executed part of the task can then be expressed as:
$T_n^{comp} = \frac{(1 - x_n)L_n}{f_n^{loc}}$
where $(1 - x_n)L_n$ is the number of CPU cycles required by the locally executed part of task $A_n$. In addition, for each user terminal the locally processed subtask must also account for the waiting time of the current slice in the local queue of tasks awaiting execution:
$T_n^{wait} = \frac{K_n(t)}{f_n^{loc}}$
Therefore, the total delay for the nth user terminal to process its subtask locally is defined as:
$T_n^{loc} = T_n^{wait} + T_n^{comp}$
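The local-delay accounting above can be sketched in a few lines (an illustrative sketch; the variable names and numeric values are ours):

```python
def local_delay(x_n, L_n, K_n, f_loc):
    """Total local delay = queue waiting time + computation time."""
    t_wait = K_n / f_loc                 # cycles already waiting in the queue
    t_comp = (1.0 - x_n) * L_n / f_loc   # cycles of the locally kept part
    return t_wait + t_comp

# half the task kept locally, 5e8 cycles queued, 1 GHz local CPU
t_loc = local_delay(x_n=0.5, L_n=1e9, K_n=5e8, f_loc=1e9)
```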
Edge offloading model: processing a subtask at the edge server generally takes three steps. First, the mobile user transmits the task-related data to the base station over the wireless link; then the base station forwards the subtask offloaded by mobile user n to the MEC server, which allocates computing resources to it, the computing-resource allocation decision being:
$F = \{F_1, F_2, \ldots, F_n, \ldots, F_{|N|}\}$
subject to the condition
$\sum_{n \in N} F_n \le 1$
Finally, the MEC server feeds the execution result back to the mobile device. The returned result is usually very small, far smaller than the uploaded task data, and the downlink rate of the wireless network is very high, so the delay and energy cost of returning the result to the user are neglected. The transmission delay required for the nth user terminal to send its offloaded task data to the base station is defined as:
$T_n^{tr} = \frac{x_n D_n}{r_n}$
and the energy consumed transmitting the corresponding offloaded data $x_n D_n$ is:
$E_n^{tr} = P_n T_n^{tr} = \frac{P_n x_n D_n}{r_n}$
where $P_n$ is the uplink transmission power of the nth mobile user. After the data transmission completes, the computation time of task $A_n$'s offloaded part on the edge server is:
$T_n^{exec} = \frac{x_n L_n}{F_n f_C}$
where $F_n$ is the proportion of computing resources the MEC server allocates to the nth mobile user and $f_C$ is the CPU computing capacity of the MEC server. The total processing delay of the part offloaded to the MEC server is the sum of its transmission and execution delays:
$T_n^{off} = T_n^{tr} + T_n^{exec}$
In summary, for time slice t, since the user's local device and the server in the MEC system can execute the task in parallel, the total processing delay of task $A_n$ of mobile user n in the cell is:
$T_n = \max\{T_n^{loc}, T_n^{off}\}$
and the total energy cost required to complete task $A_n$ is:
$E_n = E_n^{loc} + E_n^{tr}$
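Putting the edge-offloading formulas together with the parallel-execution rule (total delay is the maximum of the two branches; device energy is the local-execution energy plus the radio-transmission energy), a sketch with assumed numbers (all names and values here are illustrative, not the patent's):

```python
def task_totals(x_n, D_n, L_n, r_n, F_n, f_C, t_local, E_local, P_n):
    """Per-task totals for one time slice."""
    t_tr = x_n * D_n / r_n             # upload delay of the offloaded data
    t_exec = x_n * L_n / (F_n * f_C)   # execution delay on the MEC server
    T_n = max(t_local, t_tr + t_exec)  # branches run in parallel -> max
    E_n = E_local + P_n * t_tr         # device energy: local CPU + transmit
    return T_n, E_n

T, E = task_totals(x_n=0.5, D_n=2e6, L_n=1e9, r_n=1e7, F_n=0.2, f_C=1e10,
                   t_local=1.0, E_local=0.3, P_n=0.1)
```

With these numbers the offloaded branch finishes in 0.35 s, so the local branch (1.0 s) dominates the parallel delay.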
further, in the fourth step, the MEC system total cost problem planning includes the following steps:
aiming at the problems of task part unloading and network bandwidth and MEC computing resource distribution, a weighting factor omega of time delay and energy consumption is introduced 1 And omega 2 For adjusting the weight ratio of time and energy consumption costs according to user-specific preferences, and 12 =1, and the objective function is formulated as follows:
Figure GDA0003862356550000074
as shown in the above equation, according to the task execution model,
Figure GDA0003862356550000075
and
Figure GDA0003862356550000076
adopting task unloading decision x = { x) under t time slices respectively 1 ,x 2 ,...,x |N| Bandwidth and computational resource allocation policy B = { B = } 1 ,B 2 ,...,B n ,...,B |N| And F = { F = } 1 ,F 2 ,...,F n ,...,F |N| Task processing delay and energy consumption of the electronic device; to pair
Figure GDA0003862356550000077
Maximizing aims to reduce task A as much as possible n ToThe utility maximization of the whole MEC system model is realized by the weighted sum of the feed time and the energy consumption of the local mobile equipment;
the constraint conditions in C1-C3 respectively represent that the unloading proportion allocated to all mobile users in the cell is not more than 1, and the sum of the allocated bandwidth resources and the proportion of the calculated resources is less than or equal to 1; c4 then represents task A n The time delay required for the local execution part and the offload to the server-side part must not exceed its tolerable maximum deadline
Figure GDA0003862356550000081
And C5 and C6 ensure that the time and energy cost required to employ the present computational offload scheme is no greater than the time delay and energy consumption of the full local execution.
Further, in step five, the joint optimization of delay and energy consumption includes the following steps:
The next action decision is given based on the current state; the quadruple $M = (S, A, P_{ss',a}, R_{s,a})$ describes this process, where S is a finite set of states, A a finite set of actions, $s \in S$ the system state in the current time slice, $s' \in S$ the next state of the system, $a \in A$ the selected action, $P_{ss',a}$ the probability of transitioning from the current state s to the new state s' when action a is performed, and $R_{s,a}$ the immediate reward obtained when state s transitions to s' after executing a.
In addition, the discount factor $\gamma \in [0, 1]$ measures the value of rewards in future time slices, i.e., the influence on the total reward of actions taken in later slices gradually decays. The weighted sum G(t) of all reward values obtained by a sequence of actions starting from time slice t in the Markov decision process is described as:
$G(t) = \sum_{i=0}^{\infty} \gamma^i R(t+i+1)$
where $\gamma^i R(t+i+1)$ is the value, discounted back to slice t, of the reward obtained in slice t+i+1;
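The discounted return is easy to compute over a finite trace of rewards (a sketch: the patent's G(t) sums to infinity, here truncated to the rewards actually observed):

```python
def discounted_return(rewards, gamma):
    """G(t) = sum_i gamma**i * R_i over a finite reward trace."""
    return sum((gamma ** i) * r for i, r in enumerate(rewards))

G = discounted_return([1.0, 1.0, 1.0], gamma=0.5)  # 1 + 0.5 + 0.25
```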
Design of the three function elements: based on the system model, for the application scenario of a single MEC server with multiple terminal users, the state, action and reward function are designed as follows:
State: the state space must first contain all the information in the environment and fully reflect its change in every time slice. The system state is therefore defined as:
$s(t) = [D(t), L(t), K(t), r(t)]$
composed of four parts: the data volume D(t) of tasks newly arriving at mobile users in the cell in the current slice t, the required amount of computing resources L(t), the backlog K(t) on the task buffer queues, and the data transmission rate r(t) between the mobile users and the server;
Action: based on the newly arrived task data volume D(t), the required computing resources L(t), and the queue backlog K(t) of each user in state s(t), the Agent makes, for each task $A_n$, an offloading-proportion decision x(t), a per-user bandwidth allocation ratio B(t), and a server computing-resource allocation ratio F(t), i.e.:
$a(t) = [x_1(t), x_2(t), \ldots, x_{|N|}(t), B_1(t), B_2(t), \ldots, B_{|N|}(t), F_1(t), F_2(t), \ldots, F_{|N|}(t)]$;
Reward function: the immediate reward the system obtains in slice t is set to the objective-function value above, i.e.:
$R(t) = -\sum_{n \in N}\left(\omega_1 T_n(t) + \omega_2 E_n(t)\right)$
The larger the reward R(t) obtained by executing action a(t), the smaller the weighted sum of the time cost $T_n(t)$ and energy cost $E_n(t)$ of executing all user tasks in the current slice t;
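The reward function above, as a sketch (the weights and cost values are illustrative):

```python
def reward(delays, energies, w1, w2):
    """R(t) = -(sum over users of w1*T_n(t) + w2*E_n(t)); a larger reward
    means a smaller weighted delay/energy cost. Requires w1 + w2 = 1."""
    assert abs(w1 + w2 - 1.0) < 1e-9
    return -sum(w1 * T + w2 * E for T, E in zip(delays, energies))

# two users: delays 1.0 s and 0.5 s, energies 0.3 J and 0.2 J, equal weights
R = reward(delays=[1.0, 0.5], energies=[0.3, 0.2], w1=0.5, w2=0.5)
```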
Optimization with the deep deterministic policy gradient (DDPG) algorithm: the DDPG algorithm dynamically solves the task offloading decision and resource allocation scheme in each time slice so as to optimize the objective function and minimize the weighted total cost of delay and energy consumption;
the DDPG network structure has an Actor module and a Critic module and comprises four neural networks: the Actor current network, the Actor target network, the Critic current network, and the Critic target network;
the Actor module selects an action and hands it to the Agent for execution, while the Critic module evaluates the Q value from the state s(t) and action a(t); an experience replay unit stores the state-transition samples obtained from interaction with the environment for later sampling;
the objective of the Critic module in the DDPG algorithm is the temporal-difference error (TD-error), which represents the gap between the current and expected action values; the Critic network's loss function is defined as the mean squared TD-error:
$L(\theta_Q) = \frac{1}{m}\sum\left[R(t) + \gamma Q'(s(t+1), a(t+1), \theta_{Q'}) - Q(s(t), a(t), \theta_Q)\right]^2$
where m is the number of state-transition samples {s(t), a(t), R(t), s(t+1)} randomly drawn from the experience replay unit, R(t) is the reward obtained by executing action a(t) in slice t, $Q'(s(t+1), a(t+1), \theta_{Q'})$ is the value the Critic target network assigns to the state-action pair of the next slice t+1, $\theta_Q$ and $\theta_{Q'}$ are the weight parameters of the Critic current and target networks respectively, $Q(s(t), a(t), \theta_Q)$ is the Critic current network's value for the state s(t) and action a(t) of the current slice t, and $\gamma$ is the discount factor;
the Actor module in the DDPG algorithm updates its network parameters by gradient descent, with the goal of selecting the action $a(t) = \mu(s(t), \theta_\mu)$ that maximizes the evaluated value; its loss gradient is:
$\nabla_{\theta_\mu} J \approx \frac{1}{m}\sum \nabla_a Q(s(t), a, \theta_Q)\big|_{a=\mu(s(t))}\,\nabla_{\theta_\mu}\mu(s(t), \theta_\mu)$
where $\theta_\mu$ is the weight parameter of the Actor current network;
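The Critic's mean-squared TD-error can be illustrated with stub critics in place of the neural networks (a sketch: `q_current` and `q_target` are toy scalar stand-ins, not the patent's networks):

```python
def critic_loss(batch, gamma, q_current, q_target):
    """Mean squared TD-error over m sampled transitions
    (s, a, r, s_next, a_next)."""
    total = 0.0
    for s, a, r, s_next, a_next in batch:
        y = r + gamma * q_target(s_next, a_next)  # TD target
        total += (y - q_current(s, a)) ** 2
    return total / len(batch)

# toy critics: Q(s, a) = s + a, on scalar states/actions
batch = [(0.0, 1.0, 0.5, 1.0, 0.0), (1.0, 0.0, 0.2, 0.0, 1.0)]
L = critic_loss(batch, gamma=0.9,
                q_current=lambda s, a: s + a,
                q_target=lambda s, a: s + a)
```

In a real DDPG implementation the target networks are updated softly toward the current ones, which is what keeps the TD target y stable during training.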
the DDPG-based multi-user single-server joint task unloading and resource allocation steps are as follows:
(1) input task request set { A 1 ,A 2 ,...,A |N| }, uplink network bandwidth W and MEC server computing capacity f C The experience playback unit M is initialized, and the current network mu (s (t), theta) of the Actor is initialized randomly μ ) And criticic current network Q (s (t), a (t), θ Q ) Weight of theta μ And theta Q
(2) Randomly initializing m state transition data for action detection, and receiving an initial state s (0);
(3) generating an action a (t) = μ (s (t), θ, according to the current strategy and probing noise loop traversal μ )+noise(t);
(4) Performing the offloading decision and resource allocation of action a (t) and obtaining a reward R (t) and a status s (t + 1) of a next time slice;
(5) randomly selecting M state conversion tuples { s (t), a (t), R (t), s (t + 1) } from an empirical playback unit M;
(6) updating the target networks of Critic and Actor by using a soft updating mode based on a loss function
(7) Repeating the step (3) to the step (6) for T times, wherein T is the number of time slices;
(8) repeating the step (2) to the step (7) for E times, wherein E is the number of epsilon;
(9) and outputting a task unloading decision x, a bandwidth resource allocation scheme B and a computing resource allocation scheme F.
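The numbered steps above have the following structural shape. This is a toy skeleton, not the patent's implementation: `act` stands in for the Actor forward pass, `learn` for the Critic/Actor updates of steps (5)-(6), and `step_env` for the MEC system; all names and values are illustrative.

```python
import random

def step_env(state, action):
    # toy environment: the reward is the negative mean action value,
    # standing in for the negative weighted delay/energy cost
    r = -sum(action) / len(action)
    return r, [s * 0.9 for s in state]

def ddpg_loop(num_users, episodes, slices, act, learn):
    """Per-episode, per-slice loop: act -> step -> store -> sample -> update."""
    replay = []                                   # experience replay unit M
    for _ in range(episodes):                     # step (8): E episodes
        state = [0.0] * (4 * num_users)           # step (2): initial s(0)
        for _t in range(slices):                  # step (7): T time slices
            noise = random.gauss(0.0, 0.01)       # step (3): exploration
            action = [min(max(v + noise, 0.0), 1.0) for v in act(state)]
            reward, next_state = step_env(state, action)   # step (4)
            replay.append((state, action, reward, next_state))
            batch = random.sample(replay, min(len(replay), 8))
            learn(batch)                          # steps (5)-(6)
            state = next_state
    return replay

# 2 users -> action = [x_1, x_2, B_1, B_2, F_1, F_2], state has 4 parts/user
trace = ddpg_loop(num_users=2, episodes=1, slices=3,
                  act=lambda s: [0.5] * 6, learn=lambda b: None)
```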
Another object of the present invention is to provide a single-server partial computation offloading system in a mobile edge environment that implements the above method, the system comprising:
an MEC service scenario construction module, for abstracting an application scenario in which a single base station and an edge server are deployed and surrounded by multiple mobile users in a single cell;
a network communication module, which assumes that the single base station deployed in the cell provides wireless network service to the users, and sends each user's task input data to the MEC server over the base-station uplink;
a task execution module, which divides the computation offloading model of a task into a local execution model and an edge offloading model and executes the local part and the edge part simultaneously in parallel, reducing task-processing delay and shortening service feedback time;
an MEC system total-cost formulation module, which, to improve the quality of experience of every mobile user in the cell, measures the performance of the computation offloading model in terms of both delay and energy consumption and formulates the total cost of the MEC system in the single-cell multi-mobile-user scenario from the network communication and task execution models;
and a joint delay-energy optimization module, which solves the problem in each time slice, realizing joint optimization of delay and energy consumption over all time slices in the multi-mobile-user MEC scenario.
It is a further object of the invention to provide a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
establishing a scene with a single base station, multiple user terminals and a single MEC server, designing a network communication model, and sending each user's task input data to the MEC server over the base-station uplink;
building a task execution model in which local processing and edge offloading run in parallel, formulating the total-cost problem of the MEC system in the single-cell multi-mobile-user scene from the network communication delay and the task execution energy consumption, and jointly optimizing delay and energy consumption by means of a deep deterministic policy gradient algorithm.
It is another object of the present invention to provide a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
establishing a scene with a single base station, multiple user terminals and a single MEC server, designing a network communication model, and sending each user's task input data to the MEC server over the base-station uplink;
building a task execution model in which local processing and edge offloading run in parallel, formulating the total-cost problem of the MEC system in the single-cell multi-mobile-user scene from the network communication delay and the task execution energy consumption, and jointly optimizing delay and energy consumption by means of a deep deterministic policy gradient algorithm.
The invention provides a single-server partial computation offloading strategy in a mobile edge environment, mainly applied to the task offloading and resource allocation problems of mobile edge computing, and mainly addressing task execution delay and device energy consumption in a multi-user single-MEC-server application scene. For this scene and task type, a dynamic partial task offloading and resource allocation algorithm based on the deep deterministic policy gradient algorithm is designed, finally realizing a partial computation offloading model whose optimization target is to minimize the total delay of all task computation and the total energy consumption of the terminal devices. Numerical simulation experiments compare the DDPG-based partial computation offloading strategy with a DQN-based partial computation offloading strategy; the comparative study shows that the DDPG-based strategy performs best and can effectively reduce the long-term weighted total cost of delay and energy consumption of the whole multi-user single-server MEC system.
Compared with the prior art, the invention also has the following advantages: under partial offloading with multiple mobile devices and a single MEC server, a problem planning model of the MEC system is constructed with the total task execution delay and the total mobile-device energy consumption as joint optimization indexes, the deep deterministic policy gradient (DDPG) algorithm from deep reinforcement learning theory is applied to the computation offloading problem, and a DDPG-based partial computation offloading scheme is provided. The strategy allows a task to be completed jointly on the mobile device and the single MEC server, can dynamically make a reasonable task offloading decision and allocate bandwidth and computing resources for each user in each time slice, and effectively reduces the weighted cost of both the total task processing delay and the total mobile-device energy consumption.
Drawings
Fig. 1 is a flowchart of a single-server partial computation offload method in a mobile edge environment according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a single-server part computation offload policy implementation process in a mobile edge environment according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of a single-server partial computing offload system in a mobile edge environment according to an embodiment of the present invention;
in fig. 3: 1. an MEC service scene construction module; 2. a network communication module; 3. a task execution module; 4. an MEC system total cost problem planning module; 5. and a time delay and energy consumption combined optimization module.
Fig. 4 is a diagram of a deep deterministic policy gradient network architecture provided by an embodiment of the present invention.
Fig. 5 is a diagrammatic view of the influence of the computing power of the MEC server on the overall system benefit in the simulation experiment result of the embodiment of the present invention.
Fig. 6 is a diagrammatic view of the influence of the number of mobile devices on the total system benefit in the simulation experiment result of the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Aiming at problems such as intensive real-time computing tasks, weak mobile-terminal computing power and low core-network operating efficiency in the current Internet of Things and 5G network fields, the invention provides a single-server partial computation offloading method, system and device in a mobile edge environment, oriented to application scenes such as virtual-reality games, wearable devices and smart homes; the invention is described in detail below with reference to the drawings.
Those skilled in the art can also implement the method for offloading computation of a single server portion in a mobile edge environment by using other steps, and the method for offloading computation of a single server portion in a mobile edge environment provided by the present invention in fig. 1 is only one specific embodiment.
As shown in fig. 1, the single server part computation offload method in the mobile edge environment provided by the present invention includes the following steps:
s001, constructing an application scene of a single base station and a single MEC server of a multi-user terminal, and realizing unloading decision of a mobile terminal task;
s002, designing a network communication model, sending each user task input data to an MEC server through a base station uplink, and reasonably distributing network bandwidth resources;
s003, a local processing and edge unloading parallel task execution module is built, and the feedback efficiency of the application service is improved;
s004, performing target modeling according to network communication time delay and task execution energy consumption, and realizing problem planning of the total cost of the MEC system in a single-base-station multi-user scene;
and S005, dynamically optimizing a task unloading and resource allocation strategy by using a depth certainty strategy gradient method, and realizing overall performance optimization based on time delay and energy consumption.
As shown in fig. 2, the method for offloading computation of a single server portion in a mobile edge environment provided by the present invention specifically includes the following steps:
1) And (3) application scene description, and single base station and multi-user terminal single MEC service scenes are constructed. The method is based on the practical life application of the Internet of things, and a partial computation unloading model for the application scene of a single MEC server of a multi-terminal user is constructed.
2) A network communication model is constructed. Each user task input data is sent to the MEC server via the base station uplink. A reasonable network bandwidth resource allocation decision should be made for the system at each time slice.
3) And constructing a task execution model. The computation offloading model of a task is divided into a local execution model and an edge offloading model, and local processing and edge offloading are executed in parallel, thereby reducing the task processing delay and greatly shortening the service response time.
4) And planning the total cost of the MEC system. And measuring the performance of the calculation unloading model from two aspects of time delay and energy consumption, and performing problem planning on the overall cost of the MEC system in the scene of single cell and multiple mobile users according to the network communication and task execution model.
5) And (4) jointly optimizing time delay and energy consumption. A combined dynamic task unloading and resource allocation strategy is designed based on a deep deterministic strategy gradient method, and an optimal compromise scheme capable of simultaneously reducing time delay and energy consumption is found according to changes of a real-time environment.
In step 1) provided by the embodiment of the present invention, constructing an application scenario of a single MEC server for multiple end users comprises the following steps:
1.1 Single MEC scenario modeling of multiple mobile users in a cell
Constructing a system network architecture consisting of a single base station, a plurality of user terminals and a public high-performance MEC server, and denoting the set of mobile devices as:
N = {1, 2, ..., |N|};
and each user device n will generate a divisible compute-intensive task:
A_n = (D_n, L_n, T_n^max);
wherein D_n represents the size of the data amount uploaded for task A_n, L_n represents the amount of computing resources required to execute computing task A_n, and T_n^max is the maximum tolerable delay requirement for processing task A_n.
1.2 Compute offload decision vectors
The proportion of task A_n on user device n that is offloaded to the MEC server is denoted x_n ∈ [0, 1], n ∈ N, so the proportion of the task executed on the local device n is 1 − x_n. The locally executed subtask and the subtask computation offloaded to the MEC server are executed in parallel, reducing the total task processing delay and shortening the service response time;
the final computational offload decision vector x for all user tasks can be expressed as:
x = {x_1, x_2, ..., x_|N|};
1.3 Set up pending execution task buffer queue
Suppose the whole MEC system time is divided into a plurality of time slices of length τ_0. Aiming at the limited resources of mobile users, a buffer queue I_n of pending tasks, executed in first-in-first-out order, is set up for each user terminal n; K_n is the total amount of pending task computation in queue I_n. At the start of each new time slice t + 1, the pending computation amount K_n of mobile terminal n is dynamically updated as:
K_n(t+1) = max(K_n(t) + (1 − x_n(t)) · L_n(t) − L_n^loc(t), 0);
wherein K_n(t+1) is the amount of pending tasks in the buffer queue of mobile terminal n at the (t+1)-th time slice, x_n(t) is the decision variable indicating local processing or edge offloading of task A_n(t) on the nth user terminal within time slice t, L_n(t) is the amount of computing resources required to execute task A_n(t), and L_n^loc(t) represents the amount of tasks mobile user n executes locally in time slice t;
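The dynamic backlog update above can be sketched in Python; the assumption that the queue is drained by the cycles the device executes locally per slice, floored at zero, and the function and parameter names, are illustrative rather than taken from the patent:

```python
def update_backlog(K_t, x_t, L_t, local_exec):
    """One-slice update of user n's pending-computation backlog K_n.

    K_t        : backlog (CPU cycles) at the start of slice t
    x_t        : fraction of the newly arrived task offloaded to the MEC server
    L_t        : CPU cycles required by the task arriving in slice t
    local_exec : cycles the device executes locally during one slice
    """
    # The locally retained share (1 - x_t) * L_t joins the queue; the device
    # drains up to `local_exec` cycles; the backlog never goes negative.
    return max(K_t + (1.0 - x_t) * L_t - local_exec, 0.0)
```

For instance, a backlog of 100 cycles plus a half-offloaded 200-cycle task, with 150 cycles executed locally, leaves 50 cycles pending for the next slice.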
1.4 To solve data transmission and computation problems
For data transmission and calculation problems during task unloading at a server side, the related task information of each mobile user needs to be forwarded to an MEC server through a base station, and then the MEC server allocates corresponding calculation resources to execute tasks. Therefore, the model also needs to consider a real-time allocation scheme of bandwidth and computing resources.
In step 2) provided by the embodiment of the present invention, the design of the network communication model includes the following steps:
defining the uplink network bandwidth resource of the base station as a fixed value W, all mobile users in the cell share the bandwidth resource of the base station, and a reasonable network bandwidth resource allocation decision B = {B_1, B_2, ..., B_|N|} is made for the system at each time slice. The proportion of bandwidth resources allocated by the base station to mobile user n in time slice t is B_n, B_n ∈ [0, 1], satisfying Σ_{n∈N} B_n ≤ 1.
Then, according to the Shannon formula, when multiple terminals in the cell simultaneously offload tasks to the MEC server, the uplink task transmission rate r_n between user n and the server can be expressed as:
r_n = B_n · W · log₂(1 + P_n · g_n / (N_0 · B_n · W));
wherein P_n is the transmission power of the nth user device, g_n represents the gain of the wireless transmission channel between user n and the base station in time slice t, and N_0 is the power spectral density of the Gaussian white channel noise.
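The Shannon-rate expression can be sketched as a small helper, assuming the noise power on user n's share of the band is N_0 · B_n · W as in the formula above (names are illustrative):

```python
import math

def uplink_rate(B_n, W, P_n, g_n, N0):
    """Shannon-capacity uplink rate for user n sharing the base-station band.

    B_n : fraction of the uplink bandwidth W allocated to user n
    W   : total uplink bandwidth (Hz)
    P_n : uplink transmission power of user n (W)
    g_n : wireless channel gain between user n and the base station
    N0  : power spectral density of the Gaussian white noise (W/Hz)
    """
    bw = B_n * W  # bandwidth share actually granted to user n
    # r_n = B_n * W * log2(1 + P_n * g_n / (N0 * B_n * W))
    return bw * math.log2(1.0 + P_n * g_n / (N0 * bw))
```

With the whole band granted to one user and a received SNR of 3, the rate is twice the bandwidth, as expected from log₂(4) = 2.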
In step 3) provided by the embodiment of the present invention, the construction of the parallel task execution model includes the following steps:
3.1 Classification of computational offload models
Depending on the task offloading decision x = {x_1, x_2, ..., x_|N|}, each mobile terminal can process its task locally and can also offload part of the task to be executed on the MEC server. Therefore, the computation offloading model of a task is divided into a local execution model and an edge offloading model, and local processing and edge offloading are executed in parallel, thereby reducing the task processing delay and greatly shortening the service response time.
3.2) Local execution model
The local execution model processes the task data and obtains the execution result through the computing capacity of the mobile device itself, and mainly involves two overheads: the local execution delay T_n^l and the device execution energy consumption E_n^l. Defining the local CPU frequency of the nth mobile terminal as f_n^l, the computation delay T_n^exe of the locally executed part of the task can be expressed as:
T_n^exe = (1 − x_n) · L_n / f_n^l;
wherein (1 − x_n) · L_n is the number of CPU cycles required by the locally executed part of task A_n. In addition, for each user terminal, the locally processed subtask should also account for the waiting time T_n^wait = K_n(t) / f_n^l of the current time slice in the local pending-task buffer queue. Therefore, the total delay for the nth user terminal to locally process its subtask is defined as:
T_n^l = T_n^wait + T_n^exe;
3.3) Edge offload model
Processing subtasks on the edge server side generally comprises three steps. First, the mobile user transmits the task-related data to the base station through a wireless link. When the base station forwards the subtask offloaded by mobile user n to the MEC server, the MEC server allocates computing resources for it, and the computing resource allocation decision is denoted:
F = {F_1, F_2, ..., F_n, ..., F_|N|};
satisfying the condition Σ_{n∈N} F_n ≤ 1.
And finally, the MEC server feeds the execution result back to the mobile device. Since the amount of data returned for a task execution result is usually very small, far lower than the uploaded task data, and the downlink rate of the wireless network is very high, the method ignores the delay and energy cost of returning the task result to the user. The transmission delay T_n^tr required for the nth user terminal to transmit the offloaded task data to the base station is defined as:
T_n^tr = x_n · D_n / r_n;
the energy consumed in transmitting the corresponding offloaded data x_n · D_n is:
E_n^tr = P_n · T_n^tr = P_n · x_n · D_n / r_n;
wherein P_n is the uplink transmission power of the nth mobile user. After the data transmission is completed, the computation time of the offloaded part of task A_n on the edge server is:
T_n^mec,exe = x_n · L_n / (F_n · f^C);
wherein F_n is the proportion of computing resources allocated by the MEC server to the nth mobile user, and f^C is the CPU computing capacity of the MEC server. The total processing delay of a task offloaded to the MEC server is the sum of its transmission delay and execution delay, expressed as:
T_n^mec = T_n^tr + T_n^mec,exe;
In summary, for time slice t, since the user's local device and the server in the MEC system can execute the task in parallel, the total processing delay of task A_n of mobile user n in the cell is:
T_n = max(T_n^l, T_n^mec);
and the total energy consumption cost required to complete task A_n is:
E_n = E_n^l + E_n^tr.
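A sketch of the per-task delay and energy computation under the parallel local/edge execution model: the local CPU energy model E = κ · f² · cycles and the constant κ are assumptions for illustration (the patent leaves the local energy formula to an image), and all names are illustrative:

```python
def task_cost(x_n, D_n, L_n, r_n, f_loc, K_n, F_n, f_C, P_n, kappa=1e-27):
    """Per-task delay/energy under parallel local + edge execution (illustrative).

    Returns (T_n, E_n): total processing delay and device-side energy for task A_n.
    kappa is an assumed effective switched-capacitance constant for the local
    CPU energy model E = kappa * f^2 * cycles.
    """
    # Local branch: queue waiting time plus execution of the retained share.
    T_wait = K_n / f_loc
    T_exe = (1.0 - x_n) * L_n / f_loc
    T_local = T_wait + T_exe
    E_local = kappa * f_loc ** 2 * (1.0 - x_n) * L_n

    # Edge branch: upload the offloaded data, then execute on the server share.
    T_tr = x_n * D_n / r_n
    T_mec = T_tr + x_n * L_n / (F_n * f_C)
    E_tr = P_n * T_tr

    # The two branches run in parallel, so delay is their maximum;
    # device energy is local computation plus radio transmission.
    return max(T_local, T_mec), E_local + E_tr
```

Note that the returned delay is the maximum of the two branches, reflecting the parallel execution assumption, while energy is additive because the device pays for both its CPU and its radio.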
in step 4) provided by the embodiment of the invention, the overall cost problem planning of the MEC system comprises the following steps:
aiming at the problems of partial task offloading and allocation of network bandwidth and MEC computing resources, delay and energy weighting factors ω_1 and ω_2 are introduced to adjust the weight ratio of the time and energy consumption costs according to user-specific preferences, with ω_1 + ω_2 = 1, and the objective function is formulated as follows:
max_{x(t), B(t), F(t)}  − Σ_{n∈N} (ω_1 · T_n(t) + ω_2 · E_n(t)),  s.t. C1–C6;
as shown in the above equation, according to the task execution model, the task processing delay T_n(t) and energy consumption E_n(t) are reduced under time slice t by adopting the task offloading decision x = {x_1, x_2, ..., x_|N|} and the bandwidth and computing resource allocation policies B = {B_1, B_2, ..., B_|N|} and F = {F_1, F_2, ..., F_|N|}; maximizing the objective aims to reduce as much as possible the weighted sum of the feedback time of task A_n and the energy consumption of the local mobile device, realizing the maximum utility of the whole MEC system model.
The constraints C1–C3 respectively state that the offloading proportion assigned to every mobile user in the cell does not exceed 1, and that the sums of the allocated bandwidth-resource and computing-resource proportions are each less than or equal to 1; C4 states that the delay required by the locally executed part and the part offloaded to the server side of task A_n must not exceed its maximum tolerable deadline T_n^max; C5 and C6 ensure that the time and energy costs of adopting the present computation offloading scheme are no greater than the delay and energy consumption of fully local execution.
In step 5) provided by the embodiment of the present invention, the joint optimization of time delay and energy consumption includes the following steps:
5.1 ) next action decision
Based on the current state, the next action decision is given; specifically, this process can be described by the quadruple M = (S, A, P_{ss',a}, R_{s,a}).
wherein S is a finite set of states, A is a finite set of actions, s is the system state in the current time slice with s ∈ S, s' is the next state of the system with s' ∈ S, a is the selected action with a ∈ A, P_{ss',a} represents the probability of transitioning from the current state s to the next new state s' when performing action a, and R_{s,a} is the immediate reward obtained when the system transitions from state s to s' after performing action a.
In addition, the discount factor γ ∈ [0, 1] is used to weigh the reward value of future time slices, i.e. the impact of actions taken in later time slices on the overall reward value gradually decays. The weighted sum G(t) of all reward values resulting from the actions taken from time slice t onward in the Markov decision process is described as:
G(t) = Σ_{i=0}^{∞} γ^i · R(t + i + 1);
wherein γ^i · R(t + i + 1) is the value, discounted to time slice t, of the reward obtained in time slice t + i + 1.
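A minimal illustration of the discounted return G(t) over a finite reward sequence (function name is illustrative):

```python
def discounted_return(rewards, gamma):
    """Weighted sum G(t) of a reward sequence starting at slice t + 1.

    rewards[i] corresponds to R(t + i + 1); gamma in [0, 1] discounts
    rewards earned in later time slices.
    """
    return sum(gamma ** i * r for i, r in enumerate(rewards))
```

With γ = 0.5 and three unit rewards, G(t) = 1 + 0.5 + 0.25 = 1.75, showing how later rewards contribute progressively less.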
5.2 ) function three-factor design
Based on the system model, respectively designing the states, actions and reward functions as follows according to the application scene of the single MEC server of the multi-terminal user:
5.2.1 State): the state space firstly ensures that all information in the environment can be contained, and the change of the environment in each time slice is fully reflected; the present system state is therefore defined as:
s(t) = [D_1(t), D_2(t), ..., D_|N|(t), L_1(t), L_2(t), ..., L_|N|(t), K_1(t), K_2(t), ..., K_|N|(t), r_1(t), r_2(t), ..., r_|N|(t)];
the system state comprises four parts: the data amount D(t) of tasks arriving at the mobile users in the cell in the current time slice t, the number L(t) of required computing resources, the pending task amount K(t) in the task buffer queue, and the data transmission rate r(t) between the mobile users and the server;
5.2.2) Actions: according to the data amount D(t) of each user's newly arrived task in state s(t), the number L(t) of required computing resources, and the pending task amount K(t) in the task buffer queue, the Agent makes for each task A_n an offloading proportion decision x(t), a bandwidth resource allocation proportion B(t) for each mobile user, and a server computing resource allocation proportion F(t), i.e.:
a(t) = [x_1(t), x_2(t), ..., x_|N|(t), B_1(t), B_2(t), ..., B_|N|(t), F_1(t), F_2(t), ..., F_|N|(t)];
5.2.3) Reward function: the instantaneous reward obtained by the system in time slice t is set to the objective function value in the formula above, i.e.
R(t) = − Σ_{n∈N} (ω_1 · T_n(t) + ω_2 · E_n(t));
the greater the reward R(t) obtained by performing action a(t), the smaller the weighted sum of the time cost and the energy consumption cost of executing all user tasks in the current time slice t.
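A sketch of the instantaneous reward, assuming the sign convention that the reward is the negated weighted cost (consistent with "greater reward ↔ smaller weighted sum"); names and default weights are illustrative:

```python
def slice_reward(delays, energies, w1=0.5, w2=0.5):
    """Instantaneous reward for time slice t (sign convention assumed).

    delays[n], energies[n] are T_n(t) and E_n(t) for each user; the reward
    is the negated weighted total cost, so lower cost -> higher reward.
    """
    return -sum(w1 * t + w2 * e for t, e in zip(delays, energies))
```

For two users with delays [2, 4] and energies [1, 3] under equal weights, the weighted cost is 5.0 and the reward is −5.0.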
5.3) Optimization with the deep deterministic policy gradient (DDPG) algorithm
The deep deterministic policy gradient algorithm is used to dynamically solve the task offloading decision and resource allocation scheme in each time slice, so as to minimize the objective function and reduce the weighted total cost of delay and energy consumption.
5.3.1 Deep deterministic policy gradient network architecture
The DDPG network structure has two modules, Actor and Critic, and comprises four neural networks: the Actor current network, the Actor target network, the Critic current network, and the Critic target network. The deep deterministic policy gradient network architecture is shown in fig. 4.
The Actor network module is used to select an action and hand it to the Agent for execution, while the Critic module evaluates the Q value according to the state s(t) and the action a(t). The experience replay unit stores samples of state-transition data obtained from interaction with the environment for later sampling.
5.3.2 Depth deterministic policy gradient algorithm objective function
The objective function of the Critic module in the DDPG algorithm is the temporal-difference error (TD-error), which represents the gap between the current value estimate and the expected value; the loss function of the Critic network is defined as the square of the TD-error:
L(θ_Q) = (1/m) · Σ_{i=1}^{m} [R(t) + γ · Q'(s(t+1), a(t+1), θ_Q') − Q(s(t), a(t), θ_Q)]²;
where m is the number of state-transition samples {s(t), a(t), R(t), s(t+1)} randomly drawn from the experience replay unit, R(t) is the reward obtained by performing action a(t) in time slice t, Q'(s(t+1), a(t+1), θ_Q') is the value given by the Critic target network for the state–action pair at the next time slice t+1, θ_Q and θ_Q' are the weight parameters of the Critic current and target networks respectively, Q(s(t), a(t), θ_Q) is the value of state s(t) and action a(t) at the current time t judged by the Critic current network, and γ is the discount factor;
the Actor module in the DDPG algorithm updates parameters of the network in a gradient descending mode, and aims to select an action a (t) = mu (s (t), theta (t)) which can enable the evaluation value to be maximum as far as possible μ ) The loss function is:
Figure GDA0003862356550000211
wherein, theta μ Is the weight parameter of the Actor's current network;
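The Critic's TD target and mean-squared loss over a minibatch can be sketched numerically with NumPy; this is only an illustration of the loss value itself, since a real implementation would backpropagate the gradient through the network parameters:

```python
import numpy as np

def critic_loss(rewards, q_next, q_current, gamma):
    """Mean-squared TD-error over a minibatch of m transitions.

    rewards   : R(t) for each sampled transition
    q_next    : Q'(s(t+1), a(t+1); theta_Q') from the Critic target network
    q_current : Q(s(t), a(t); theta_Q) from the Critic current network
    """
    td_target = rewards + gamma * q_next   # bootstrapped target value
    td_error = td_target - q_current       # TD-error per sample
    return float(np.mean(td_error ** 2))   # squared error, averaged over m
```

When the current network already matches the bootstrapped targets, the loss is zero; otherwise it grows with the squared gap.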
the multi-user single-server joint task unloading and resource allocation method based on the DDPG comprises the following steps:
(1) Inputting the task request set {A_1, A_2, ..., A_N}, the uplink network bandwidth W and the MEC server computing capacity f^C; initializing the experience replay unit M, and randomly initializing the weights θ_μ and θ_Q of the Actor current network μ(s(t), θ_μ) and the Critic current network Q(s(t), a(t), θ_Q);
(2) Randomly initializing m state-transition data for action exploration, and receiving the initial state s(0);
(3) Looping over time slices, generating an action a(t) = μ(s(t), θ_μ) + noise(t) according to the current policy and exploration noise;
(4) Executing the unloading decision and resource allocation of the action a (t) and obtaining a report R (t) and a state s (t + 1) of the next time slice;
(5) randomly selecting M state conversion tuples { s (t), a (t), R (t), s (t + 1) } from an empirical playback unit M;
(6) updating the target networks of Critic and Actor by using a soft updating mode based on a loss function
(7) Repeating the step (3) to the step (6) for T times, wherein T is the number of time slices;
(8) repeating the step (2) to the step (7) for E times, wherein E is the number of episodes;
(9) and outputting a task unloading decision x, a bandwidth resource allocation scheme B and a computing resource allocation scheme F.
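Step (6)'s soft update of the Critic and Actor target networks can be sketched as a Polyak average; the mixing rate τ is an assumed value (the patent does not specify it), and weights are plain floats here for illustration:

```python
def soft_update(target_weights, current_weights, tau=0.005):
    """Polyak ('soft') update used in step (6): theta' <- tau*theta + (1-tau)*theta'.

    target_weights  : parameters of the target network (theta')
    current_weights : parameters of the current network (theta)
    tau             : small mixing rate, so targets track the current nets slowly
    """
    return [tau * w + (1.0 - tau) * w_t
            for w_t, w in zip(target_weights, current_weights)]
```

Keeping τ small makes the target networks change slowly, which stabilizes the bootstrapped TD targets during training.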
As shown in fig. 3, the single-server partial computation offload system in the mobile edge environment provided by the present invention specifically includes:
the MEC service scene constructing module 1 is used for abstracting an application scene that a single base station and an edge server are deployed around a plurality of mobile users in a single cell.
And the network communication module 2 is used for sending each user task input data to the MEC server through the uplink of the base station under the condition that the single base station deployed in the cell provides wireless network service for the users.
And the task execution module 3 is used for dividing the computation offloading model of a task into a local execution model and an edge offloading model, and executing the local processing part and the edge offloading part in parallel so as to reduce the task processing delay and greatly shorten the service response time.
And the MEC system total cost problem planning module 4 improves the experience quality of each mobile user in the cell, measures the performance of the calculation unloading model from two aspects of time delay and energy consumption, and performs problem planning on the MEC system total cost in a single-cell multi-mobile user scene according to the network communication and task execution model.
And the delay and energy consumption combined optimization module 5 is used for solving the problem in each time slice, so that the delay and energy consumption combined optimization under the scene of multiple mobile users and single server MEC under all the time slices is realized.
The technical effects of the present invention will be described in detail with reference to simulation experiments.
1. Experimental setup
In order to verify the performance of the proposed partial computation offloading algorithm based on the deep deterministic policy gradient, simulation experiments are carried out with Python 3.6; the integrated development environment is JetBrains PyCharm.
2. Content of the experiment
Because the proposed model differs from the models in existing edge-computing task offloading literature, in order to verify the performance of the proposed algorithm, the proposed partial computation offloading algorithm based on the deep deterministic policy gradient is compared with three baseline algorithms and a partial computation offloading algorithm based on a Deep Q-Network (DQN).
(1) Local execution (AL): the tasks generated by the mobile user in each time slice are all performed locally and there is no need to allocate bandwidth resources and computational resources at this time.
(2) Offload execution and Proportional allocation of resources (AOAPF): and unloading all tasks to the MEC server for execution and distributing bandwidth resources and computing resources according to the data size of each mobile user task and the required computing resource number.
(3) Offload execution and Random Fair (AOARF): the tasks are all offloaded to the MEC server execution and bandwidth resources and computing resources are randomly allocated.
(4) DQN-based partial computation offloading algorithm: it uses the same states, actions and reward functions as the DDPG-based partial computation offloading algorithm proposed by the present invention, but the action space of the DQN algorithm is discrete. For each mobile user n, the task offloading proportion decision α_n, the bandwidth resource allocation proportion decision B_n and the computing resource allocation proportion decision F_n are each discretized to level = 6 values in [0, 1]. Then, under the constraints Σ_{n∈N} B_n ≤ 1 and Σ_{n∈N} F_n ≤ 1, the action space selectable by each user is: α_{n,level} × B_{n,level} × F_{n,level}.
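The size of the discrete per-user action space can be illustrated as follows; taking the six level values as evenly spaced in [0, 1] is an assumption for illustration:

```python
from itertools import product

# level = 6 discrete choices per decision variable (assumed evenly spaced)
LEVELS = [i / 5 for i in range(6)]  # 0.0, 0.2, 0.4, 0.6, 0.8, 1.0

def per_user_action_space():
    """Enumerate one user's discrete (alpha_n, B_n, F_n) action combinations."""
    return list(product(LEVELS, LEVELS, LEVELS))

# 6 levels for each of the three decisions -> 6^3 = 216 joint actions per user,
# before the cross-user constraints sum(B_n) <= 1 and sum(F_n) <= 1 are applied.
```

The combinatorial growth of this discrete space with the number of users is precisely why the continuous-action DDPG formulation is attractive here.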
3. Experimental results and performance analysis
As shown in fig. 5, the proposed DDPG-based partial computation offloading algorithm is trained for 1000 episodes in the experiment, each episode in the learning process containing 1000 time slices, and the delay and energy weighting factors ω_1 and ω_2 are both set to 0.5. First, Table 1 shows the impact of the MEC server computing capacity f^C on the performance of the proposed partial computation offloading policy in the multi-user single-server MEC system. The experiments show that as the MEC server computing capacity f^C varies, the proposed DDPG-based computation offloading algorithm performs best.
TABLE 1 Effect of MEC Server computing capacity on Total System benefit
As shown in fig. 6, simulation experiments are performed with the number of terminal devices set to 2, 3, 4, 5 and 6 respectively, and Table 2 shows the trend of the total system benefit as the number of mobile devices increases. The experiments show that as the number of mobile users keeps increasing, the total system benefit of all five algorithms, DDPG, AL, AOAPF, AOARF and DQN, declines; the DDPG-based partial offloading strategy performs best and can effectively reduce the long-term weighted total cost of delay and energy consumption of the whole multi-user single-server MEC system, demonstrating its feasibility.
TABLE 2 Influence of the number of mobile devices on the total system benefit (tabular data reproduced as an image in the original)
It should be noted that embodiments of the present invention can be realized in hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. It will be appreciated by those skilled in the art that the apparatus and methods described above may be implemented using computer-executable instructions and/or embodied in processor control code, for example such code provided on a carrier medium such as a diskette, CD- or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The apparatus and its modules of the present invention may be implemented by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, or by software executed by various types of processors, or by a combination of hardware circuits and software, e.g., firmware.
The above description is only for the purpose of illustrating the embodiments of the present invention, and the scope of the present invention should not be limited thereto, and any modifications, equivalents and improvements made by those skilled in the art within the technical scope of the present invention as disclosed in the present invention should be covered by the scope of the present invention.

Claims (4)

1. A single-server partial computation offloading method in a mobile edge environment, characterized in that the method specifically comprises the following steps:
step one, constructing an application scenario with a single base station and a single MEC server serving multiple user terminals, to realize offloading decisions for user terminal tasks;
step two, designing a network communication model in which the task input data of each user terminal is sent to the MEC server through the base station uplink, and allocating network bandwidth resources;
step three, constructing a parallel task execution module for local processing and edge offloading, improving the feedback efficiency of application services;
step four, performing objective modeling according to network communication delay and task execution energy consumption, to formulate the total-cost problem of the MEC system in a single-base-station multi-user-terminal scenario;
step five, dynamically optimizing the task offloading and resource allocation policy with the deep deterministic policy gradient method, realizing joint performance optimization of delay and energy consumption;
in the first step, the construction of the multi-user-terminal single-MEC-server application scenario comprises:
1) modeling a single-MEC scenario with multiple user terminals in a cell, and constructing a system network architecture consisting of a single base station, multiple user terminals, and a public high-performance MEC server, the set of user terminal devices being expressed as:
N = {1, 2, ...}
and each user terminal n generating a divisible compute-intensive task:
A_n = (D_n, L_n, T_n^max)
wherein D_n represents the size of the uploaded data amount of task A_n, L_n represents the amount of computing resources required to execute computing task A_n, and T_n^max is the maximum tolerable delay requirement for processing task A_n;
2) calculating the offloading decision vector: the proportion of task A_n on user terminal n that is offloaded to the MEC server for execution is denoted x_n ∈ [0, 1], n ∈ N, so the proportion of the task executed locally on user terminal n is 1 − x_n; the two parts, local task execution and computation offloading to the MEC server, are executed in parallel, thereby reducing the total task processing delay and shortening the service response time;
the final computation offloading decision vector x for all user terminal tasks can be expressed as:
x = {x_1, x_2, ..., x_|N|};
3) setting a buffer queue of tasks waiting to be executed: suppose the time of the whole MEC system is divided into time slices of length τ_0; in view of the limited resources of user terminals, a buffer queue I_n of tasks waiting to be executed in first-in first-out order is set for each user terminal n, with K_n the total amount of task computation waiting to be executed in queue I_n; at the beginning of each new time slice t + 1, the total amount K_n of task computation waiting to be executed on user terminal n is dynamically updated as:
K_n(t+1) = max{K_n(t) + (1 − x_n(t))·L_n(t) − L_n^loc(t), 0}
wherein K_n(t+1) is the amount of tasks waiting to be executed in the buffer queue of user terminal n at the (t+1)-th time slice, x_n(t) is the decision variable for local processing or edge offloading of task A_n(t) on the n-th user terminal within time slice t, L_n(t) is the amount of computing resources required to execute task A_n(t), and L_n^loc(t) represents the amount of tasks executed locally by user terminal n within time slice t;
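As a sketch (the patent gives no code), the dynamic update of the waiting backlog in step 3) can be written as follows; clipping the queue at zero is an assumption, since a backlog cannot be negative:

```python
def update_queue(K_t, x_t, L_t, L_local_t):
    """Backlog of user n entering slice t+1: the locally kept share
    (1 - x_t) * L_t of the new task joins the queue, and the work the
    device finished locally this slice, L_local_t, leaves it.
    Clipping at zero is an assumption (the queue cannot go negative)."""
    return max(K_t + (1.0 - x_t) * L_t - L_local_t, 0.0)
```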
4) solving data transmission and computation when tasks are offloaded to the server side: the relevant task information of each user terminal is first forwarded to the MEC server through the base station, and the MEC server then allocates corresponding computing resources to execute the tasks;
in the second step, the design of the network communication model comprises:
defining the uplink network bandwidth resource of the base station as a fixed value W; all user terminals in the cell share the bandwidth resource of the base station, and in each time slice the system makes a network bandwidth resource allocation decision B = {B_1, B_2, ..., B_|N|}; the proportion of bandwidth resources allocated to user terminal n by the base station in time slice t is B_n, B_n ∈ [0, 1], satisfying:
Σ_{n∈N} B_n ≤ 1
then, according to the Shannon formula, when multiple terminals in the cell simultaneously offload tasks to the MEC server, the uplink task transmission rate r_n between user terminal n and the server can be expressed as:
r_n = B_n·W·log2(1 + P_n·g_n / (N_0·B_n·W))
wherein P_n is the transmission power of the n-th user terminal, g_n represents the gain of the wireless transmission channel between user terminal n and the base station in time slice t, and N_0 is the power spectral density of the Gaussian white channel noise;
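The Shannon-formula rate above can be sketched directly in Python (the function name and guard for a zero bandwidth share are illustrative):

```python
import math

def uplink_rate(B_n, W, P_n, g_n, N0):
    """Shannon-formula uplink rate of user n:
    r_n = B_n*W * log2(1 + P_n*g_n / (N0 * B_n*W))."""
    if B_n == 0.0:
        return 0.0          # no bandwidth allocated, no transmission
    bw = B_n * W            # user n's slice of the uplink bandwidth
    return bw * math.log2(1.0 + (P_n * g_n) / (N0 * bw))
```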
in the third step, the construction of the parallel task execution module comprises:
classifying the computation offloading model: according to the task offloading decision x = {x_1, x_2, ..., x_|N|}, each user terminal can process part of its task locally and offload the remaining part to the MEC server for execution; the computation offloading model of a task is thus divided into a local execution model and an edge offloading model, and the local processing part and the edge offloading part are executed simultaneously in parallel, thereby reducing the task processing delay and improving the service feedback time;
the local execution model: the user terminal processes the task data and obtains the execution result through its own computing power, mainly involving two parts of overhead, the local execution delay T_n^{l,exe} and the device execution energy consumption E_n^l; defining the local CPU computation frequency of the n-th user terminal as f_n^l, the computation delay T_n^{l,exe} of the locally executed part of the task can be expressed as:
T_n^{l,exe} = (1 − x_n)·L_n / f_n^l
wherein (1 − x_n)·L_n is the number of CPU cycles required by the locally executed part of task A_n; in addition, for each user terminal, the locally processed subtask must also account for the waiting time of the current time slice in the task buffer queue waiting for local execution:
T_n^{wait} = K_n(t) / f_n^l
therefore, the total delay for the n-th user terminal to process its subtask locally is defined as:
T_n^l = T_n^{l,exe} + T_n^{wait}
the edge offloading model: processing a subtask at the edge server side is divided into three steps; first, the user terminal transmits the task-related data to the base station through the wireless link; then, when the base station forwards the subtask offloaded by user terminal n to the MEC server, the MEC server allocates computing resources for it, the computing resource allocation decision being expressed as:
F = {F_1, F_2, ..., F_n, ..., F_|N|};
and satisfying the condition:
Σ_{n∈N} F_n ≤ 1
finally, the MEC server feeds the execution result back to the user terminal; since the amount of returned data of a task execution result is usually very small, far lower than the uploaded task data, and the downlink rate of the wireless network is very high, the delay and energy consumption of returning the task result to the user terminal are ignored; the transmission delay T_n^{tr} required in the process of the n-th user terminal transmitting the offloaded task data to the base station is defined as:
T_n^{tr} = x_n·D_n / r_n
the energy consumption for transmitting the corresponding offloaded data x_n·D_n is:
E_n^{tr} = P_n·T_n^{tr} = P_n·x_n·D_n / r_n
wherein P_n is the uplink transmission power of the n-th user terminal; after the data transmission is completed, the computation time of the offloaded part of task A_n on the edge server is:
T_n^{C} = x_n·L_n / (F_n·f_C)
wherein F_n is the proportion of computing resources allocated to the n-th user terminal by the MEC server, and f_C is the CPU computing capacity of the MEC server; the total processing delay of the part of the task offloaded to the MEC server side is the sum of its transmission delay and execution delay, expressed as:
T_n^{off} = T_n^{tr} + T_n^{C}
for time slice t, since the local user terminal and the server in the MEC system can execute tasks in parallel, the total processing delay of task A_n of user terminal n in the cell is:
T_n = max{T_n^l, T_n^{off}}
and the total energy consumption cost required to complete task A_n is:
E_n = E_n^l + E_n^{tr}
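The per-task cost model of the local and edge branches can be sketched as follows. The κ·f² dynamic-power model for local CPU energy is an assumption commonly used in MEC work; the extracted text does not show the patent's own local-energy expression:

```python
def task_cost(x_n, D_n, L_n, K_n, f_loc, r_n, P_n, F_n, f_C, kappa=1e-27):
    """Per-task delay and energy of the parallel local/edge split.
    kappa*f^2 energy per CPU cycle is an assumed local-energy model."""
    # local branch: drain the queued backlog, then run the kept share
    t_local = (K_n + (1.0 - x_n) * L_n) / f_loc
    # edge branch: upload, then remote execution on the allotted share
    t_tx = x_n * D_n / r_n
    t_edge = t_tx + x_n * L_n / (F_n * f_C)
    delay = max(t_local, t_edge)      # branches run in parallel
    energy = kappa * f_loc**2 * (1.0 - x_n) * L_n + P_n * t_tx
    return delay, energy
```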
in the fourth step, the total-cost problem planning of the MEC system comprises:
aiming at the joint problem of partial task offloading and the allocation of network bandwidth and MEC computing resources, weight factors ω_1 and ω_2 for delay and energy consumption are introduced to adjust the weight ratio of the time cost and the energy cost according to the specific preference of the user terminal, with ω_1 + ω_2 = 1, and the objective function is formulated as:
max_{x,B,F} Σ_{n∈N} [ω_1·(T_n^{l,all} − T_n(t)) + ω_2·(E_n^{l,all} − E_n(t))]  s.t. C1-C6
as shown in the above formula, according to the task execution model, T_n(t) and E_n(t) are respectively the task processing delay and energy consumption under the task offloading decision x = {x_1, x_2, ..., x_|N|}, the bandwidth resource allocation policy B = {B_1, B_2, ..., B_n, ..., B_|N|}, and the computing resource allocation policy F = {F_1, F_2, ..., F_n, ..., F_|N|} in time slice t; maximizing the objective aims to reduce as much as possible the weighted sum of the feedback time of task A_n and the energy consumption of the local user terminal, realizing the maximum utility of the whole MEC system model, where T_n^{l,all} indicates the delay of executing the entire task locally and E_n^{l,all} represents the energy consumption of executing the entire task locally;
the constraints C1-C3 respectively represent that the offloading proportion assigned to each user terminal in the cell is not more than 1, and that the sums of the proportions of allocated bandwidth resources and of allocated computing resources are each less than or equal to 1; C4 represents that the delay required by the locally executed part and the part offloaded to the server side of task A_n must not exceed its maximum tolerable deadline T_n^max; and C5 and C6 ensure that the time and energy costs required by the present computation offloading scheme are no greater than the delay and energy consumption of executing everything locally;
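A feasibility check for the proportion constraints C1-C3 described above can be sketched as follows (the function name and the small tolerance eps are illustrative):

```python
def feasible(x, B, F, eps=1e-9):
    """C1-C3: every offload ratio lies in [0, 1], and the bandwidth and
    compute shares each sum to at most 1 across all users."""
    return (all(0.0 <= xn <= 1.0 for xn in x)
            and sum(B) <= 1.0 + eps
            and sum(F) <= 1.0 + eps)
```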
in the fifth step, the joint optimization of delay and energy consumption comprises:
giving the next action decision based on the current state, and describing this process with the quadruple M = (S, A, P_{ss',a}, R_{s,a});
wherein S is a finite set of states, A is a finite set of actions, s is the system state in the current time slice with s ∈ S, s' is the next state of the system with s' ∈ S, a is the selected action with a ∈ A, P_{ss',a} represents the probability of transitioning from the current state s to the next new state s' when performing action a, and R_{s,a} is the immediate direct reward obtained when state s transitions to s' after action a is executed;
in addition, a discount factor γ ∈ [0, 1] is used to measure the reward value of future time slices, i.e., the effect of actions taken in later time slices on the total reward value gradually decays; the weighted sum G(t) of all reward values resulting from the set of actions taken from time slice t onwards in the Markov decision process is described as:
G(t) = Σ_{i=0}^{∞} γ^i·R(t + i + 1)
wherein γ^i·R(t + i + 1) is the value, discounted to time slice t, of the reward obtained at time slice t + i + 1;
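The discounted return G(t) above can be computed directly for a finite reward sequence (here rewards[i] plays the role of R(t+i+1)):

```python
def discounted_return(rewards, gamma=0.9):
    """G(t) = sum_i gamma^i * R(t+i+1); later rewards decay geometrically."""
    return sum(gamma ** i * r for i, r in enumerate(rewards))
```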
designing the three function elements: on the basis of the system model, the states, actions, and reward function are designed for the multi-user-terminal single-MEC-server application scenario as follows:
state: the state space must first contain all information in the environment and fully reflect the change of the environment in each time slice; the system state is therefore defined as:
s(t) = [D_1(t), D_2(t), ..., D_|N|(t), L_1(t), L_2(t), ..., L_|N|(t), K_1(t), K_2(t), ..., K_|N|(t), r_1(t), r_2(t), ..., r_|N|(t)]
the system state comprises four parts: the data amount D(t) of tasks newly arriving at the user terminals in the cell in the current time slice t, the amount L(t) of required computing resources, the task amount K(t) waiting to be executed in the task buffer queues, and the data transmission rate r(t) between the user terminals and the server;
action: according to the data amount D(t) of the newly arrived task of each user terminal in each time slice in state s(t), the amount L(t) of required computing resources, and the task amount K(t) waiting in the task buffer queue, the Agent makes for task A_n an offloading proportion decision x(t), a bandwidth resource proportion B(t) allocated to each user terminal, and a server computing resource allocation proportion F(t), i.e.:
a(t) = [x_1(t), x_2(t), ..., x_|N|(t), B_1(t), B_2(t), ..., B_|N|(t), F_1(t), F_2(t), ..., F_|N|(t)]
reward function: the immediate reward obtained by the system in time slice t is set to the objective function value given above, i.e., R(t) is the system utility obtained after executing action a(t); the larger the reward R(t) obtained by executing action a(t), the smaller the weighted sum of the time cost and the energy cost of executing all user terminal tasks in the current time slice t;
optimizing the deep deterministic policy gradient (DDPG) algorithm: the deep deterministic policy gradient algorithm is used to dynamically solve the task offloading decision and resource allocation scheme in each time slice, so as to minimize the objective function and reduce the weighted total cost of delay and energy consumption;
the DDPG network structure is provided with an Actor module and a Critic module and comprises four neural networks: the Actor current network, the Actor target network, the Critic current network, and the Critic target network;
the Actor network module is used to select an action and deliver it to the Agent for execution, and the Critic module is used to evaluate the Q value according to state s(t) and action a(t); the experience replay unit stores the state transition data samples obtained by interaction with the environment for later sampling;
the objective function of the Critic module in the DDPG algorithm is the temporal-difference error (TD-error), representing the difference between the current value estimate and the expected value, and the loss function of the Critic network is defined as the square of the TD-error:
L(θ^Q) = (1/m)·Σ [R(t) + γ·Q'(s(t+1), a(t+1), θ^{Q'}) − Q(s(t), a(t), θ^Q)]^2
wherein m is the number of state transition samples {s(t), a(t), R(t), s(t+1)} randomly drawn from the experience replay unit, R(t) is the reward resulting from performing action a(t) in time slice t, Q'(s(t+1), a(t+1), θ^{Q'}) represents the evaluation value given by the Critic target network for the state s(t+1) and action a(t+1) pair at the next time slice t+1, θ^Q and θ^{Q'} are the weight parameters of the Critic current network and target network respectively, Q(s(t), a(t), θ^Q) is the value of the state s(t) and action a(t) pair at the current time t judged by the Critic current network, and γ is the discount factor;
the Actor module in the DDPG algorithm updates its network parameters by gradient descent, aiming to select the action a(t) = μ(s(t), θ^μ) that maximizes the evaluation value as far as possible; its loss function is:
L(θ^μ) = −(1/m)·Σ Q(s(t), μ(s(t), θ^μ), θ^Q)
wherein θ^μ is the weight parameter of the Actor current network;
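The two DDPG loss functions can be sketched with plain callables standing in for the four networks (a minimal sketch, not the patent's implementation; in practice Q, Q_target, and mu_target would be neural networks):

```python
def critic_loss(batch, Q, Q_target, mu_target, gamma=0.99):
    """Mean squared TD-error over a minibatch of (s, a, r, s') tuples.
    Q, Q_target, mu_target are callables standing in for the Critic
    current/target networks and the Actor target network."""
    total = 0.0
    for s, a, r, s_next in batch:
        y = r + gamma * Q_target(s_next, mu_target(s_next))  # TD target
        total += (y - Q(s, a)) ** 2
    return total / len(batch)

def actor_loss(states, Q, mu):
    """The Actor ascends Q(s, mu(s)); minimizing the negative mean of
    the Critic's evaluation gives the same update direction."""
    return -sum(Q(s, mu(s)) for s in states) / len(states)
```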
the joint task offloading and resource allocation method based on DDPG comprises the following steps:
(1) input the task request set {A_1, A_2, ..., A_|N|}, the uplink network bandwidth W, and the MEC server computing capacity f_C; initialize the experience replay unit M, and randomly initialize the weights θ^μ and θ^Q of the Actor current network μ(s(t), θ^μ) and the Critic current network Q(s(t), a(t), θ^Q);
(2) randomly initialize m state transition data for action exploration, and receive the initial state s(0);
(3) in a loop, generate an action a(t) = μ(s(t), θ^μ) + noise(t) according to the current policy and the exploration noise;
(4) execute the offloading decision and resource allocation of action a(t), and obtain the reward R(t) and the state s(t+1) of the next time slice;
(5) randomly select m state transition tuples {s(t), a(t), R(t), s(t+1)} from the experience replay unit M;
(6) update the Critic and Actor target networks in a soft-update manner based on the loss functions;
(7) repeat steps (3) to (6) T times, wherein T is the number of time slices;
(8) repeat steps (2) to (7) E times, wherein E is the number of episodes;
(9) output the task offloading decision x, the bandwidth resource allocation scheme B, and the computing resource allocation scheme F.
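The soft update of the target networks in step (6) can be sketched as follows for a flat list of weights (the rate tau = 0.005 is an illustrative choice, not a value stated in the text):

```python
def soft_update(target_w, online_w, tau=0.005):
    """theta' <- tau * theta + (1 - tau) * theta', element-wise, so the
    target network slowly tracks the current (online) network."""
    return [tau * w + (1.0 - tau) * wt for w, wt in zip(online_w, target_w)]
```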
2. A single-server partial computation offloading system in a mobile edge environment for implementing the single-server partial computation offloading method in a mobile edge environment of claim 1, characterized in that the system comprises:
an MEC service scenario construction module, used for abstracting the application scenario in which a single base station and an edge server are deployed to serve multiple user terminals in a single cell;
a network communication module, used for the situation in which the single base station deployed in the cell provides wireless network service for the user terminals, sending the task input data of each user terminal to the MEC server through the base station uplink;
a task execution module, used for dividing the computation offloading model of a task into a local execution model and an edge offloading model, and executing the local processing part and the edge offloading part simultaneously in parallel, so as to reduce the task processing delay and improve the service feedback time;
an MEC system total-cost problem planning module, used for improving the quality of experience of each user terminal in the cell, measuring the performance of the computation offloading model in terms of both delay and energy consumption, and planning the total-cost problem of the MEC system in a single-cell multi-user-terminal scenario according to the network communication and task execution models;
and a delay and energy consumption joint optimization module, used for solving the above problem in each time slice, realizing the joint optimization of delay and energy consumption in the multi-user-terminal single-server MEC scenario over all time slices.
3. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the single-server partial computation offloading method in a mobile edge environment of claim 1.
4. A computer-readable storage medium storing a computer program that, when executed by a processor, causes the processor to perform the single-server partial computation offloading method in a mobile edge environment of claim 1.
CN202111060966.0A 2021-09-10 2021-09-10 Single server part calculation unloading method, system and equipment under mobile edge environment Active CN113950066B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111060966.0A CN113950066B (en) 2021-09-10 2021-09-10 Single server part calculation unloading method, system and equipment under mobile edge environment


Publications (2)

Publication Number Publication Date
CN113950066A CN113950066A (en) 2022-01-18
CN113950066B true CN113950066B (en) 2023-01-17

Family

ID=79328000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111060966.0A Active CN113950066B (en) 2021-09-10 2021-09-10 Single server part calculation unloading method, system and equipment under mobile edge environment

Country Status (1)

Country Link
CN (1) CN113950066B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114614878B (en) * 2022-02-14 2023-08-29 哈尔滨工业大学(深圳) Coding calculation distribution method based on matrix-vector multiplication task in star-to-ground network
CN114615705B (en) * 2022-03-11 2022-12-20 广东技术师范大学 Single-user resource allocation strategy method based on 5G network
CN114340016B (en) * 2022-03-16 2022-07-26 北京邮电大学 Power grid edge calculation unloading distribution method and system
CN114640675B (en) * 2022-03-21 2024-02-09 中国联合网络通信集团有限公司 Unloading strategy determining method and device, electronic equipment and storage medium
CN114786215B (en) * 2022-03-22 2023-10-20 国网浙江省电力有限公司信息通信分公司 Transmission and calculation joint optimization system and method for multi-base-station mobile edge calculation system
CN115002799B (en) * 2022-04-25 2024-04-12 燕山大学 Task unloading and resource allocation method for industrial hybrid network
CN114866548B (en) * 2022-04-26 2022-12-23 中南大学 Task unloading method based on mobile fog calculation
CN115002801B (en) * 2022-04-27 2024-04-16 燕山大学 Edge computing resource dynamic unloading method and device based on passive relay collaboration
CN114884949B (en) * 2022-05-07 2024-03-26 深圳泓越信息科技有限公司 Task unloading method for low-orbit satellite Internet of things based on MADDPG algorithm
CN115002123B (en) * 2022-05-25 2023-05-05 西南交通大学 System and method for rapidly adapting task offloading based on mobile edge computation
CN115002113B (en) * 2022-05-26 2023-08-01 南京邮电大学 Mobile base station edge computing power resource scheduling method, system and electronic equipment
CN114860345B (en) * 2022-05-31 2023-09-08 南京邮电大学 Calculation unloading method based on cache assistance in smart home scene
CN115022319A (en) * 2022-05-31 2022-09-06 浙江理工大学 DRL-based edge video target detection task unloading method and system
CN115334076A (en) * 2022-07-08 2022-11-11 电子科技大学 Service migration method and system of edge server and edge server equipment
CN115208894B (en) * 2022-07-26 2023-10-13 福州大学 Pricing and calculating unloading method based on Stackelberg game in mobile edge calculation
CN115623540B (en) * 2022-11-11 2023-10-03 南京邮电大学 Edge optimization unloading method for mobile equipment
CN115858048B (en) * 2023-03-03 2023-04-25 成都信息工程大学 Hybrid critical task oriented dynamic arrival edge unloading method
CN116321199A (en) * 2023-04-10 2023-06-23 南京邮电大学 Task unloading method, device and medium based on timing diagram and diagram matching theory
CN116489711A (en) * 2023-04-25 2023-07-25 北京交通大学 Task migration method of edge computing network based on deep reinforcement learning
CN116709428B (en) * 2023-08-04 2023-11-24 华东交通大学 Calculation unloading method and system based on mobile edge calculation
CN117499999B (en) * 2023-12-29 2024-04-12 四川华鲲振宇智能科技有限责任公司 Task unloading method based on edge calculation
CN117793805B (en) * 2024-02-27 2024-04-26 厦门宇树康信息技术有限公司 Dynamic user random access mobile edge computing resource allocation method and system
CN117834643B (en) * 2024-03-05 2024-05-03 南京邮电大学 Deep neural network collaborative reasoning method for industrial Internet of things

Citations (3)

Publication number Priority date Publication date Assignee Title
CN109951897A (en) * 2019-03-08 2019-06-28 东华大学 A kind of MEC discharging method under energy consumption and deferred constraint
CN111918339A (en) * 2020-07-17 2020-11-10 西安交通大学 AR task unloading and resource allocation method based on reinforcement learning in mobile edge network
CN112689296A (en) * 2020-12-14 2021-04-20 山东师范大学 Edge calculation and cache method and system in heterogeneous IoT network

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN110062026A (en) * 2019-03-15 2019-07-26 重庆邮电大学 Mobile edge calculations resources in network distribution and calculating unloading combined optimization scheme

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN109951897A (en) * 2019-03-08 2019-06-28 东华大学 A kind of MEC discharging method under energy consumption and deferred constraint
CN111918339A (en) * 2020-07-17 2020-11-10 西安交通大学 AR task unloading and resource allocation method based on reinforcement learning in mobile edge network
CN112689296A (en) * 2020-12-14 2021-04-20 山东师范大学 Edge calculation and cache method and system in heterogeneous IoT network

Non-Patent Citations (2)

Title
Joint Task Offloading and Resource Allocation via Proximal Policy Optimization for Mobile Edge Computing Network; Lingling An et al.; "2021 International Conference on Networking and Network Applications"; 20211031; entire document *
Lightweight task offloading optimization for multi-user mobile edge computing based on deep reinforcement learning; Zhang Wenxian, Du Yongwen; "Journal of Measurement Science and Instrumentation"; 20201119; entire document *

Also Published As

Publication number Publication date
CN113950066A (en) 2022-01-18

Similar Documents

Publication Publication Date Title
CN113950066B (en) Single server part calculation unloading method, system and equipment under mobile edge environment
CN113242568B (en) Task unloading and resource allocation method in uncertain network environment
CN113950103B (en) Multi-server complete computing unloading method and system under mobile edge environment
CN108920280B (en) Mobile edge computing task unloading method under single-user scene
CN110798849A (en) Computing resource allocation and task unloading method for ultra-dense network edge computing
CN113612843A (en) MEC task unloading and resource allocation method based on deep reinforcement learning
CN114340016B (en) Power grid edge calculation unloading distribution method and system
Shi et al. Toward energy-efficient federated learning over 5g+ mobile devices
CN113064671A (en) Multi-agent-based edge cloud extensible task unloading method
CN112988345A (en) Dependency task unloading method and device based on mobile edge calculation
CN114205353B (en) Calculation unloading method based on hybrid action space reinforcement learning algorithm
CN113568727A (en) Mobile edge calculation task allocation method based on deep reinforcement learning
CN113645637B (en) Method and device for unloading tasks of ultra-dense network, computer equipment and storage medium
CN113867843B (en) Mobile edge computing task unloading method based on deep reinforcement learning
CN113760511B (en) Vehicle edge calculation task unloading method based on depth certainty strategy
CN113993218A (en) Multi-agent DRL-based cooperative unloading and resource allocation method under MEC architecture
CN112799823A (en) Online dispatching and scheduling method and system for edge computing tasks
CN113626104A (en) Multi-objective optimization unloading strategy based on deep reinforcement learning under edge cloud architecture
Hu et al. Dynamic task offloading in MEC-enabled IoT networks: A hybrid DDPG-D3QN approach
CN116366576A (en) Method, device, equipment and medium for scheduling computing power network resources
CN117579701A (en) Mobile edge network computing and unloading method and system
CN117436485A (en) Multi-exit point end-edge-cloud cooperative system and method based on trade-off time delay and precision
CN112445617A (en) Load strategy selection method and system based on mobile edge calculation
CN113452625B (en) Deep reinforcement learning-based unloading scheduling and resource allocation method
CN113157344B (en) DRL-based energy consumption perception task unloading method in mobile edge computing environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant