CN113950066A - Single-server partial computation offloading method, system, and device in a mobile edge environment - Google Patents


Publication number: CN113950066A
Authority: CN (China)
Prior art keywords: task, user, mobile, server, offloading
Legal status: Granted (assumption; not a legal conclusion)
Application number: CN202111060966.0A
Other languages: Chinese (zh)
Other versions: CN113950066B (en)
Inventors
安玲玲
张星雨
廖鹏
单颖欣
岳佳豪
马晓亮
王泉
Current assignee: Xidian University
Original assignee: Xidian University
Events: application CN202111060966.0A filed by Xidian University; published as CN113950066A; application granted and published as CN113950066B. Legal status: Active.

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W16/00: Network planning, e.g. coverage or traffic planning tools; network deployment, e.g. resource partitioning or cell structures
    • H04W16/22: Traffic simulation tools or models
    • H04W28/00: Network traffic management; network resource management
    • H04W28/02: Traffic management, e.g. flow control or congestion control
    • H04W28/08: Load balancing or load distribution
    • H04W28/09: Management thereof
    • H04W28/0925: Management thereof using policies
    • H04W28/16: Central resource management; negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • H04W28/18: Negotiating wireless communication parameters
    • H04W28/20: Negotiating bandwidth
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in wireless communication networks


Abstract

The invention belongs to the technical field of task offloading and resource allocation in mobile edge computing, and discloses a method, system, and device for partial computation offloading to a single server in a mobile edge environment. The method comprises the following steps: a scenario with a single base station, multiple user terminals, and a single MEC server is constructed, a network communication model is designed, and each user's task input data is sent to the MEC server over the base station uplink; a task execution module in which local processing and edge offloading run in parallel is built, the total cost of the MEC system in the single-cell multi-mobile-user scenario is formulated as an optimization problem in terms of network communication delay and task execution energy consumption, and delay and energy consumption are jointly optimized by means of a deep deterministic policy gradient algorithm. The invention can dynamically make reasonable task offloading decisions and allocate bandwidth and computing resources to each user in each time slice, thereby effectively reducing the weighted cost of the total task processing delay and the total energy consumption of the mobile devices.

Description

Single-server partial computation offloading method, system, and device in a mobile edge environment
Technical Field
The invention belongs to the technical field of task offloading and resource allocation in mobile edge computing, and in particular relates to a method, system, and device for partial computation offloading to a single server in a mobile edge environment.
Background
At present, the Internet of Things is becoming ever more widespread and the field of mobile communication is developing rapidly, producing large numbers of intelligent mobile devices. These massive numbers of smart devices and applications generate many computation-intensive tasks, data traffic is growing explosively, and higher demands are placed on the data computing capacity and battery endurance of mobile terminal devices. Mobile Edge Computing (MEC) enables a terminal device to complete computing tasks with low delay and low energy consumption: a mobile user offloads tasks to an edge base station for processing, and the computing power and energy reserves of the edge server effectively reduce the delay and energy consumption of the local device. The basic idea of edge computing is to offload the computing tasks generated on a mobile device to the network edge instead of to the cloud, thereby meeting the low-delay requirements of computation-intensive applications such as real-time online games and augmented reality. Different task offloading schemes have a large impact on both task completion delay and mobile device energy consumption. Therefore, how to make the most reasonable task offloading decision and resource allocation scheme according to the terminal device and the surrounding environment is a difficult problem to be solved in current computation offloading research, and a large number of task offloading strategies based on different algorithms and optimization objectives have been proposed. According to the optimization target of the algorithm, computation offloading strategies can be divided into the following three types:
the first category of strategies is mainly to reduce task processing delays. Bi and Zhang et al consider a multi-user MEC system supported by Wireless power transmission in "Computing Rate Maximization for Wireless Power Mobile-Edge Computing with Binary Computing Offloading" (IEEE Transactions on Wireless Communications,2018,17 (6): 4177 and 4190), study a Binary Computing Offloading strategy in a multi-user MEC network, and solve the problems of Computing Rate weighting and Maximization of all Wireless devices in each time range by using an alternating direction multiplier algorithm, which has the following disadvantages: the energy consumption of one end of the mobile terminal device cannot be considered when computation offloading is performed, the terminal device may have a situation that an offloading policy cannot normally operate due to insufficient electric energy, the mobile device is limited in terms of computation resources, and data is generally processed at the expense of high latency and high device energy consumption.
The second category of strategies mainly aims to reduce device energy consumption. Kuang and Shi et al. propose an offloading game mechanism in "Multi-User Offloading Game Strategy in OFDMA Mobile Cloud Computing System" (IEEE Transactions on Vehicular Technology, 2019, 68(12): 12190-). The method has the following defects: mobile devices must handle more and more computation-intensive tasks, such as high-definition live video and face recognition, yet, owing to limited computing resources and battery endurance, these devices may not be able to process all tasks locally with low latency.
The third category of strategies trades off delay against energy consumption. T. Alfakih and M. Hassan et al. study this trade-off in "Task Offloading and Resource Allocation for Mobile Edge Computing by Deep Reinforcement Learning Based on SARSA" (IEEE Access, 2020, 8: 54074-). The method has the following defects: in the actual offloading process, different systems may have different performance requirements, not limited to delay and energy consumption alone, and how to make the most reasonable task offloading decision and resource allocation scheme according to the terminal device and the surrounding environment remains a pressing problem in computation offloading research.
Through the above analysis, the problems and defects of the prior art are as follows:
(1) Existing computation offloading strategies that aim to reduce task processing delay do not consider the energy consumption on the mobile terminal side during offloading, so the total task execution delay of the model is longer and the total energy consumption of the mobile device is higher.
(2) With existing computation offloading strategies that aim to reduce device energy consumption, users would rather minimize the sum of the system's time and energy consumption to reduce its overall cost, or strike a balance between the two.
(3) In existing computation offloading strategies that trade off delay against energy consumption, different systems may have different performance requirements in the actual offloading process, not limited to delay and energy consumption alone.
The difficulty in solving the above problems and defects is: task data transmission requires sufficient wireless network bandwidth, and the edge server also has limited computing resources, so making the most reasonable task offloading decision and resource allocation scheme according to the terminal device and the surrounding environment is a pressing problem in computation offloading research. Limited device resources conflict with high-performance task processing requirements, and as the scale of task offloading keeps growing, the power consumed by task execution rises sharply and seriously affects the benefit of the MEC system.
The significance of solving the above problems and defects is: in the more complex environment of a single MEC server with multiple mobile devices, a partial computation offloading scheme that takes the total task execution delay and the total energy consumption of the mobile devices as joint optimization indices is urgently needed.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a method, system, and device for partial computation offloading to a single server in a mobile edge environment.
The invention is realized in such a way that a single-server partial computation offloading method in a mobile edge environment comprises the following steps:
step one, constructing an application scenario with a single base station, multiple user terminals, and a single MEC server, to realize offloading decisions for mobile terminal tasks;
step two, designing a network communication model and sending each user's task input data to the MEC server over the base station uplink, to allocate network bandwidth resources reasonably;
step three, building a task execution module in which local processing and edge offloading run in parallel, to improve the feedback efficiency of application services;
step four, performing objective modeling based on network communication delay and task execution energy consumption, to formulate the total-cost problem of the MEC system in the single-base-station multi-user scenario;
step five, dynamically optimizing the task offloading and resource allocation strategy with a deep deterministic policy gradient method, to realize overall performance optimization based on delay and energy consumption.
Further, in step one, constructing the application scenario of a single MEC server with multiple terminal users includes the following steps:
1) modeling a single-MEC scenario with multiple mobile users in one cell, constructing a system network architecture consisting of a single base station, multiple user terminals, and one shared high-performance MEC server, with the set of mobile devices indexed as:
N = {1, 2, ...}
Each user device n generates a divisible computation-intensive task:
A_n = (D_n, L_n, T_n^max)
where D_n represents the size of the data uploaded for task A_n, L_n represents the amount of computing resources (CPU cycles) required to execute task A_n, and T_n^max is the maximum delay that processing task A_n can tolerate;
2) computing the offloading decision vector: the proportion of task A_n on user device n that is offloaded to the MEC server is denoted x_n ∈ [0,1], n ∈ N, so the proportion executed on the local device n is 1 - x_n; the locally executed subtask and the part computation-offloaded to the MEC server are executed in parallel, which reduces the total task processing delay and shortens the service response time;
the final computation offloading decision vector of all user tasks can be expressed as:
x = {x_1, x_2, ..., x_|N|};
3) setting a buffer queue for tasks waiting to be executed: suppose the time of the whole MEC system is divided into time slices of equal length τ_0, and, in view of the limited resources of mobile users, each user terminal n is given a first-in-first-out buffer queue I_n of tasks waiting to be executed, with K_n denoting the total amount of pending computation in queue I_n; at the start of each new time slice t+1, the pending computation amount K_n of mobile terminal n is dynamically updated as:
K_n(t+1) = max(K_n(t) + (1 - x_n(t)) · L_n(t) - L_n^loc(t), 0)
where K_n(t+1) is the amount of work pending in the buffer queue of mobile terminal n at time slice t+1, x_n(t) is the decision variable indicating how much of task A_n(t) on the nth user terminal is processed locally or offloaded to the edge within time slice t, L_n(t) is the amount of computing resources required to execute task A_n(t), and L_n^loc(t) represents the amount of work mobile user n executes locally within time slice t;
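The per-slice queue update can be sketched in Python. The function name, arguments, and the max(·, 0) floor that keeps the backlog non-negative are illustrative assumptions consistent with the surrounding definitions, not text from the patent:

```python
def update_queue(K_t: float, x_t: float, L_t: float, local_done_t: float) -> float:
    """Update the pending-computation amount K_n of one terminal for slice t+1.

    K_t         : work (CPU cycles) waiting in the queue at slice t
    x_t         : fraction of the newly arrived task offloaded to the MEC server
    L_t         : cycles required by the task that arrived in slice t
    local_done_t: cycles the device actually executed locally during slice t
    """
    # The locally retained share (1 - x_t) * L_t joins the queue;
    # finished local work leaves it; the backlog never goes negative.
    return max(K_t + (1.0 - x_t) * L_t - local_done_t, 0.0)

# A device with 5e8 cycles queued receives a 2e9-cycle task, offloads 70%,
# and completes 8e8 cycles locally during the slice.
K_next = update_queue(5e8, 0.7, 2e9, 8e8)
```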
4) solving data transmission and computation: for the data transmission and computation involved when tasks are offloaded to the server side, the information of each mobile user's task is first forwarded to the MEC server through the base station, and the MEC server then allocates corresponding computing resources to execute the task; the model must therefore also consider a real-time allocation scheme for bandwidth and computing resources.
Further, in step two, designing the network communication model includes the following steps:
define the uplink network bandwidth resource of the base station as a fixed value W; all mobile users in the cell share the bandwidth resource of the base station, and in each time slice the system makes a network bandwidth allocation decision B = {B_1, B_2, ..., B_|N|}; the proportion of bandwidth resources that the base station allocates to mobile user n in time slice t is B_n, with B_n ∈ [0,1] and
Σ_{n∈N} B_n ≤ 1
Then, according to the Shannon formula, when multiple terminals in the cell offload tasks to the MEC server simultaneously, the uplink task transmission rate r_n between user n and the server can be expressed as:
r_n = B_n · W · log_2(1 + P_n · g_n / (N_0 · B_n · W))
where P_n is the transmission power of the nth user device, g_n represents the gain of the wireless transmission channel between user n and the base station in time slice t, and N_0 is the power spectral density of the Gaussian white noise.
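The uplink rate expression can be checked numerically with a short sketch (the function and variable names are illustrative):

```python
import math

def uplink_rate(B_n: float, W: float, P_n: float, g_n: float, N0: float) -> float:
    """Shannon uplink rate r_n = B_n*W*log2(1 + P_n*g_n / (N0*B_n*W)).

    B_n : fraction of the uplink bandwidth allocated to user n (0 < B_n <= 1)
    W   : total uplink bandwidth in Hz
    P_n : transmit power of user n in W
    g_n : channel gain between user n and the base station
    N0  : noise power spectral density in W/Hz
    """
    bw = B_n * W                        # bandwidth actually granted to user n
    snr = P_n * g_n / (N0 * bw)         # signal-to-noise ratio on that band
    return bw * math.log2(1.0 + snr)    # capacity in bit/s

# e.g. 30% of a 10 MHz uplink, 0.1 W transmit power, illustrative channel values
r = uplink_rate(0.3, 10e6, 0.1, 1e-6, 1e-13)
```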
Further, in step three, constructing the parallel task execution model includes the following steps:
classification of the computation offloading model: according to the task offloading decision x = {x_1, x_2, ..., x_|N|}, each mobile terminal can process part of its task locally and offload the remainder to the MEC server for execution; the computation offloading model of a task is therefore divided into a local execution model and an edge offloading model, and the locally processed part and the edge-offloaded part are executed in parallel, which reduces the task processing delay and shortens the service feedback time;
local execution model: the local execution model processes the task data and obtains the execution result with the computing power of the mobile device itself, and mainly involves two kinds of overhead: the local execution delay T_n^loc and the device execution energy consumption E_n^loc. Define the local CPU frequency of the nth mobile terminal as f_n^loc; the computation delay t_n^comp of the locally executed part of the task can then be expressed as:
t_n^comp = (1 - x_n) · L_n / f_n^loc
where (1 - x_n) · L_n is the number of CPU cycles required by the locally executed part of task A_n. In addition, for each user terminal, the locally processed subtask must also account for the waiting time t_n^wait in the current time slice, i.e. the time spent in the buffer queue of tasks waiting for local execution:
t_n^wait = K_n(t) / f_n^loc
Therefore, the total delay of the nth user terminal for locally processing its subtask is defined as:
T_n^loc = t_n^wait + t_n^comp = (K_n(t) + (1 - x_n) · L_n) / f_n^loc
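The total local delay follows directly from the backlog and the retained share of the task; a minimal sketch with illustrative names:

```python
def local_delay(K_t: float, x_n: float, L_n: float, f_loc: float) -> float:
    """Total local delay T_n^loc = (K_n(t) + (1-x_n)*L_n) / f_n^loc.

    K_t  : cycles already queued for local execution at slice t
    x_n  : fraction of task A_n offloaded to the MEC server
    L_n  : cycles required by task A_n
    f_loc: local CPU frequency of terminal n (cycles per second)
    """
    wait = K_t / f_loc                  # t_n^wait: drain the backlog first
    comp = (1.0 - x_n) * L_n / f_loc    # t_n^comp: run the retained share
    return wait + comp

# 1e9 cycles queued, a 2e9-cycle task offloaded at 50%, a 1 GHz local CPU
T_loc = local_delay(1e9, 0.5, 2e9, 1e9)
```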
edge offloading model: processing a subtask at the edge server generally involves three steps. First, the mobile user transmits the task-related data to the base station over the wireless link; then, when the base station forwards the subtask offloaded by mobile user n to the MEC server, the MEC server allocates computing resources to it, and the computing resource allocation decision is expressed as:
F = {F_1, F_2, ..., F_n, ..., F_|N|}
satisfying
Σ_{n∈N} F_n ≤ 1
Finally, the MEC server feeds the execution result back to the mobile device; because the amount of returned result data is usually very small, far less than the uploaded task data, and the downlink rate of the wireless network is very high, the delay and energy cost of returning task results to users are ignored. The transmission delay t_n^tra required for the nth user terminal to transmit the offloaded task data to the base station is defined as:
t_n^tra = x_n · D_n / r_n
The energy consumed transmitting the corresponding offloaded data x_n · D_n is:
E_n^tra = P_n · t_n^tra = P_n · x_n · D_n / r_n
where P_n is the uplink transmission power of the nth mobile user. After the data transmission is complete, the computation time of the offloaded part of task A_n on the edge server is:
t_n^exe = x_n · L_n / (F_n · f_C)
where F_n is the proportion of computing resources the MEC server allocates to the nth mobile user and f_C is the CPU computing capacity of the MEC server. The total processing delay of the part offloaded to the MEC server is the sum of its transmission delay and execution delay, expressed as:
T_n^off = t_n^tra + t_n^exe
In summary, for time slice t, since the user's local device and the server in the MEC system execute the task in parallel, the total processing delay of task A_n of mobile user n in the cell is:
T_n = max(T_n^loc, T_n^off)
and the total energy cost required to complete task A_n is:
E_n = E_n^loc + E_n^tra
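The offloaded-side quantities and the parallel combination T_n = max(T_n^loc, T_n^off), E_n = E_n^loc + E_n^tra can be sketched as follows. The local delay T_loc and local execution energy E_loc are taken as inputs, since the patent computes the former separately and does not give a closed form for the latter here; all names are illustrative:

```python
def task_cost(x_n: float, D_n: float, L_n: float, r_n: float,
              P_n: float, F_n: float, f_C: float,
              T_loc: float, E_loc: float) -> tuple:
    """Per-task totals for one slice under the parallel execution model."""
    t_tra = x_n * D_n / r_n             # uplink transmission delay t_n^tra
    e_tra = P_n * t_tra                 # transmission energy E_n^tra
    t_exe = x_n * L_n / (F_n * f_C)     # server-side execution delay t_n^exe
    T_off = t_tra + t_exe               # T_n^off: offloaded-part delay
    # Local and offloaded parts run in parallel, so total delay is the max;
    # energies on the device side simply add.
    return max(T_loc, T_off), E_loc + e_tra

# 50% of a 1 MB / 1e9-cycle task, 1 Mbit/s uplink, 0.1 W radio,
# 25% of a 10 GHz server, with local delay 1.0 s and local energy 0.2 J
T_n, E_n = task_cost(0.5, 1e6, 1e9, 1e6, 0.1, 0.25, 1e10, 1.0, 0.2)
```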
Further, in step four, the problem formulation for the total cost of the MEC system includes the following steps:
for the problems of partial task offloading and of allocating network bandwidth and MEC computing resources, weighting factors ω_1 and ω_2 for delay and energy consumption are introduced to adjust the relative weight of the time cost and the energy cost according to user preferences, with ω_1 + ω_2 = 1, and the objective function is formulated as follows:
max_{x,B,F}  -Σ_{n∈N} (ω_1 · T_n(t) + ω_2 · E_n(t))    s.t. C1-C6
As shown in the above equation, and according to the task execution model, T_n(t) and E_n(t) are determined by the task offloading decision x = {x_1, x_2, ..., x_|N|}, the bandwidth allocation policy B = {B_1, B_2, ..., B_n, ..., B_|N|}, and the computing resource allocation policy F = {F_1, F_2, ..., F_n, ..., F_|N|} adopted in time slice t, which together reduce the task processing delay and energy consumption; maximizing the objective reduces, as far as possible, the weighted sum of the feedback time of task A_n and the energy consumption of the local mobile device, thereby maximizing the utility of the whole MEC system model;
constraints C1-C3 state, respectively, that the offloading proportion assigned to each mobile user in the cell does not exceed 1, and that the sums of the allocated bandwidth-resource proportions and of the computing-resource proportions are each no greater than 1; C4 states that the delay required by the locally executed part of task A_n and by the part offloaded to the server must not exceed its maximum tolerable deadline T_n^max; and C5 and C6 ensure that the time and energy costs required by the present computation offloading scheme are no greater than the delay and energy consumption of fully local execution.
Further, in step five, the joint optimization of delay and energy consumption includes the following steps:
the next action decision is given on the basis of the current state, and the quadruple M = (S, A, P_{ss',a}, R_{s,a}) is used to describe this process;
where S is a finite set of states, A is a finite set of actions, s is the system state of the current time slice with s ∈ S, s' is the next system state with s' ∈ S, a is the selected action with a ∈ A, P_{ss',a} represents the probability of transitioning from the current state s to the next new state s' when action a is performed, and R_{s,a} is the immediate reward obtained by going from state s to s' after executing action a;
in addition, the discount factor γ ∈ [0,1] measures the value of the rewards of future time slices, i.e. the influence on the total reward of actions taken in later time slices gradually decays; the weighted sum G(t) of all reward values resulting from the sequence of actions chosen from time slice t onward in the Markov decision process is described as:
G(t) = Σ_{i=0}^{∞} γ^i · R(t+i+1)
where γ^i · R(t+i+1) is the value, discounted to time slice t, of the reward obtained in time slice t+i+1;
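The discounted return G(t), truncated to a finite episode tail, can be evaluated with a one-liner (function name illustrative):

```python
def discounted_return(rewards, gamma: float) -> float:
    """G(t) = sum_i gamma^i * R(t+i+1) over a finite tail of rewards.

    rewards: the sequence R(t+1), R(t+2), ... observed from slice t onward
    gamma  : discount factor in [0, 1]
    """
    # Each later reward is attenuated by one more factor of gamma.
    return sum((gamma ** i) * r for i, r in enumerate(rewards))

# Three unit rewards discounted at gamma = 0.5: 1 + 0.5 + 0.25
G = discounted_return([1.0, 1.0, 1.0], 0.5)
```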
design of the three elements: on the basis of the system model, and for the application scenario of a single MEC server with multiple terminal users, the state, action, and reward function are designed as follows:
state: the state space must first ensure that all information in the environment is contained, fully reflecting the change of the environment in each time slice; the system state is therefore defined as:
s(t) = [D_1(t), D_2(t), ..., D_|N|(t), L_1(t), L_2(t), ..., L_|N|(t), K_1(t), K_2(t), ..., K_|N|(t), r_1(t), r_2(t), ..., r_|N|(t)]
the system state comprises four parts: the data volume D(t) of the tasks arriving at the mobile users in the cell in the current time slice t, the amount of computing resources required L(t), the amount of work K(t) waiting in the task buffer queues, and the data transmission rate r(t) between each mobile user and the server;
action: according to the data volume D(t) of each user's newly arrived task in state s(t), the required amount of computing resources L(t), and the pending work K(t) in the task buffer queue, the agent makes, for each task A_n, an offloading proportion decision x(t), a bandwidth resource allocation ratio B(t) for each mobile user, and a server computing-resource allocation ratio F(t), i.e.:
a(t) = [x_1(t), x_2(t), ..., x_|N|(t), B_1(t), B_2(t), ..., B_|N|(t), F_1(t), F_2(t), ..., F_|N|(t)]
reward function: the instantaneous reward the system obtains in time slice t is set to the value of the objective function above, namely:
R(t) = -Σ_{n∈N} (ω_1 · T_n(t) + ω_2 · E_n(t))
so the larger the reward R(t) obtained by executing action a(t), the smaller the weighted sum of the time cost T_n(t) and the energy cost E_n(t) of executing all user tasks in the current time slice t;
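A minimal sketch of this reward, the negative weighted cost over all users; the per-user lists and function name are illustrative assumptions:

```python
def reward(delays, energies, w1: float = 0.5, w2: float = 0.5) -> float:
    """Instantaneous reward R(t) = -sum_n (w1*T_n(t) + w2*E_n(t)).

    delays  : per-user total processing delays T_n(t) for the slice
    energies: per-user total energy costs E_n(t) for the slice
    """
    assert abs(w1 + w2 - 1.0) < 1e-9    # the weights must sum to 1
    # Negating the weighted cost makes maximizing R(t) equivalent to
    # minimizing the weighted delay-plus-energy sum over all users.
    return -sum(w1 * t + w2 * e for t, e in zip(delays, energies))

# Two users: delays 1.0 s and 2.0 s, energies 0.5 J each, equal weights
R_t = reward([1.0, 2.0], [0.5, 0.5])
```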
optimization with the deep deterministic policy gradient (DDPG) algorithm: the DDPG algorithm dynamically solves the task offloading decision and resource allocation scheme in each time slice so as to minimize the objective, reducing the weighted total cost of delay and energy consumption;
the DDPG network structure has an Actor module and a Critic module and comprises four neural networks: the Actor current network, the Actor target network, the Critic current network, and the Critic target network;
the Actor module selects an action and hands it to the agent for execution, while the Critic module evaluates the Q value from the state s(t) and action a(t); the experience replay unit stores the state-transition samples obtained by interacting with the environment for later sampling;
the objective of the Critic module in the DDPG algorithm is the temporal-difference error (TD-error), which represents the gap between the current value estimate and the expected value, and the loss function of the Critic network is defined as the square of the TD-error:
L(θ^Q) = (1/m) · Σ [R(t) + γ · Q'(s(t+1), a(t+1), θ^{Q'}) - Q(s(t), a(t), θ^Q)]²
where m is the number of state-transition samples {s(t), a(t), R(t), s(t+1)} randomly drawn from the experience replay unit, R(t) is the reward resulting from performing action a(t) in time slice t, Q'(s(t+1), a(t+1), θ^{Q'}) represents the evaluation value that the Critic target network gives to the state s(t+1) and action a(t+1) pair of the next time slice t+1, θ^Q and θ^{Q'} are the weight parameters of the Critic current and target networks, Q(s(t), a(t), θ^Q) is the value that the Critic current network assigns to the current state s(t) and action a(t), and γ is the discount factor;
the Actor module in the DDPG algorithm updates its network parameters by gradient descent, aiming to select the action a(t) = μ(s(t), θ^μ) that maximizes the evaluated value as far as possible, with loss function:
L(θ^μ) = -(1/m) · Σ Q(s(t), μ(s(t), θ^μ), θ^Q)
where θ^μ is the weight parameter of the Actor current network;
the multi-user single-server joint task unloading and resource allocation method based on the DDPG comprises the following steps:
input task request set { A1,A2,...,A|N|}, uplink network bandwidth W and MEC server computing capacity fCInitializing the experience playback unit M and simultaneously randomly initializing the current network mu (s (t), theta) of the Actorμ) And Critic Current network Q (s (t), a (t), θQ) Weight of thetaμAnd thetaQ
Randomly initializing m state transition data for action detection, and receiving an initial state s (0);
traversing and generating action a (t) mu (s (t) and theta according to the current strategy and the detection noiseμ)+noise(t);
Executing the unloading decision and resource allocation of the action a (t) and obtaining a report R (t) and a state s (t +1) of the next time slice;
randomly selecting M state conversion tuples { s (t), a (t), R (t), s (t +1) }fromthe empirical playback unit M;
updating Critic and Actor target network by using soft updating mode based on loss function
Seventhly, repeating the step III to the step III for T times, wherein T is the number of time slices;
eighthly, repeating the step (E) and the step (C), wherein E is the number of epicode;
and ninthly, outputting a task unloading decision x, a bandwidth resource allocation scheme B and a computing resource allocation scheme F.
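The loop above can be sketched as a minimal training skeleton. The replay unit and soft target update are generic DDPG machinery; the linear stand-in "Actor", the toy reward, and the random transitions are illustrative assumptions replacing the patent's neural networks and MEC environment:

```python
import random
from collections import deque

class ReplayBuffer:
    """Experience replay unit M storing (s, a, R, s') transitions."""
    def __init__(self, capacity=10000):
        self.buf = deque(maxlen=capacity)
    def push(self, s, a, r, s_next):
        self.buf.append((s, a, r, s_next))
    def sample(self, m):
        return random.sample(self.buf, m)   # step 5: draw m random tuples
    def __len__(self):
        return len(self.buf)

def soft_update(theta_target, theta_online, tau=0.005):
    """Soft target update theta' <- tau*theta + (1-tau)*theta' (step 6),
    applied element-wise to a flat list of weights."""
    return [tau * w + (1.0 - tau) * wt
            for wt, w in zip(theta_target, theta_online)]

def actor(theta, s, noise=0.0):
    """Stand-in linear 'Actor': one action component per weight (step 3)."""
    return [w * sum(s) + noise for w in theta]

# --- toy run of steps 2-7 for a single episode ---
random.seed(0)
theta_mu = [0.1, -0.2, 0.3]                # Actor current weights (3 action dims)
theta_mu_target = list(theta_mu)           # Actor target starts as a copy
M = ReplayBuffer()

s = [random.random() for _ in range(4)]    # initial state s(0)
for t in range(50):                        # T = 50 time slices
    a = actor(theta_mu, s, noise=0.1 * random.random())   # a = mu(s) + noise
    r = -sum(abs(x) for x in a)            # stand-in reward R(t) (step 4)
    s_next = [random.random() for _ in range(4)]          # environment transition
    M.push(s, a, r, s_next)
    if len(M) >= 8:
        batch = M.sample(8)                # step 5: sample m = 8 tuples
        # (gradient updates of the Critic and Actor would go here)
        theta_mu_target = soft_update(theta_mu_target, theta_mu)
    s = s_next
```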
Another object of the present invention is to provide a single-server partial computation offloading system in a mobile edge environment for implementing the above single-server partial computation offloading method, the system comprising:
an MEC service scenario construction module, used to abstract the application scenario in which a single base station and an edge server are deployed around multiple mobile users in a single cell;
a network communication module, used to model the case in which the single base station deployed in the cell provides wireless network service to the users, sending each user's task input data to the MEC server over the base station uplink;
a task execution module, used to divide the computation offloading model of a task into a local execution model and an edge offloading model, executing the locally processed part and the edge-offloaded part in parallel so as to reduce the task processing delay and shorten the service feedback time;
an MEC system total-cost problem formulation module, used to improve the quality of experience of every mobile user in the cell, measuring the performance of the computation offloading model in terms of both delay and energy consumption, and formulating the total cost of the MEC system in the single-cell multi-mobile-user scenario according to the network communication and task execution models;
a delay and energy-consumption joint optimization module, used to solve the problem in each time slice, thereby realizing the joint optimization of delay and energy consumption in the multi-mobile-user MEC scenario over all time slices.
It is a further object of the invention to provide a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
a single base station and multi-user terminal single MEC server scene is established, a network communication model is designed, and each user task input data is sent to the MEC server by depending on a base station uplink;
a local processing and edge unloading parallel task execution module is built, problem planning is carried out on the total cost of the MEC system under the single-cell multi-mobile-user scene by means of network communication delay and task execution energy consumption, and delay and energy consumption are jointly optimized by means of a depth deterministic gradient algorithm.
It is another object of the present invention to provide a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
a scene with a single base station, multiple user terminals, and a single MEC server is established, a network communication model is designed, and each user's task input data is sent to the MEC server over the base station uplink;
a parallel task execution module combining local processing and edge offloading is built, the total cost of the MEC system in the single-cell multi-mobile-user scene is formulated as an optimization problem in terms of network communication delay and task execution energy consumption, and delay and energy consumption are jointly optimized with a deep deterministic policy gradient algorithm.
The invention provides a single-server partial computation offloading strategy in a mobile edge environment, applied mainly to task offloading and resource allocation in mobile edge computing, and chiefly addresses task execution delay and device energy consumption in the multi-user single-MEC-server application scenario. For this scenario and task type, a dynamic partial task offloading and resource allocation algorithm based on the deep deterministic policy gradient is designed, finally realizing a partial computation offloading model whose optimization targets are minimizing the total delay of all task computation and the total energy consumption of the terminal devices. Numerical simulation experiments compare the DDPG-based partial computation offloading strategy with a DQN-based one; the comparison shows that the DDPG-based strategy performs best and can effectively reduce the long-term weighted total overhead of delay and energy consumption of the whole multi-user single-server MEC system.
Compared with the prior art, the invention also has the following advantages: under partial offloading with multiple mobile devices and a single MEC server, a problem planning model of the MEC system is constructed with the total task execution delay and the total mobile device energy consumption as joint optimization indexes; the deep deterministic policy gradient (DDPG) algorithm from deep reinforcement learning is applied to the computation offloading problem, and a DDPG-based partial computation offloading scheme is proposed. The strategy allows task execution to be completed on both the mobile devices and the single MEC server, can dynamically make reasonable task offloading decisions and allocate bandwidth and computing resources to each user in each time slice, and effectively reduces the weighted cost in both total task processing delay and total mobile device energy consumption.
Drawings
Fig. 1 is a flowchart of a single-server partial computation offload method in a mobile edge environment according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a single-server partial computation offload policy implementation process in a mobile edge environment according to an embodiment of the present invention.
FIG. 3 is a block diagram of a single server component computing offload system in a mobile edge environment according to an embodiment of the present invention;
in fig. 3: 1. an MEC service scene construction module; 2. a network communication module; 3. a task execution module; 4. an MEC system total cost problem planning module; 5. and a time delay and energy consumption combined optimization module.
Fig. 4 is a diagram of a deep deterministic policy gradient network architecture provided by an embodiment of the present invention.
Fig. 5 is a plot of the influence of the MEC server's computing capacity on the total system benefit in the simulation experiment results of the embodiment of the present invention.
Fig. 6 is a plot of the influence of the number of mobile devices on the total system benefit in the simulation experiment results of the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Aiming at problems such as intensive real-time computing tasks, weak mobile-terminal computing power, and low core-network operating efficiency in the current Internet of Things and 5G network fields, the invention provides a single-server partial computation offloading method, system, and device in a mobile edge environment, oriented to application scenarios such as virtual reality games, wearable devices, and smart homes. The invention is described in detail below with reference to the drawings.
Those skilled in the art may also implement the single-server partial computation offloading method in a mobile edge environment using other steps; the method of fig. 1 provided by the present invention is merely one specific embodiment.
As shown in fig. 1, the single server part computation offload method in the mobile edge environment provided by the present invention includes the following steps:
S001, constructing an application scenario with a single base station, multiple user terminals, and a single MEC server, realizing offloading decisions for mobile terminal tasks;
S002, designing a network communication model, sending each user's task input data to the MEC server through the base station uplink, and allocating network bandwidth resources reasonably;
S003, building a parallel task execution module combining local processing and edge offloading, improving the feedback efficiency of the application service;
S004, modeling the objective in terms of network communication delay and task execution energy consumption, formulating the total cost of the MEC system in the single-base-station multi-user scene as an optimization problem;
and S005, dynamically optimizing the task offloading and resource allocation strategy with the deep deterministic policy gradient method, achieving overall performance optimization in terms of delay and energy consumption.
As shown in fig. 2, the method for offloading computation of a single server portion in a mobile edge environment provided by the present invention specifically includes the following steps:
1) Application scenario description: construct a single-base-station, multi-user-terminal, single-MEC-server service scene. From the perspective of practical Internet of Things applications, build a partial computation offloading model for the application scenario of a single MEC server serving multiple end users.
2) Construct a network communication model. Each user's task input data is sent to the MEC server via the base station uplink, and a reasonable network bandwidth resource allocation decision is made for the system in each time slice.
3) Construct a task execution model. The computation offloading model of a task is divided into a local execution model and an edge offloading model, with local processing and edge offloading executed simultaneously in parallel, reducing the task processing delay and greatly shortening the service feedback time.
4) Plan the total cost of the MEC system. Measure the performance of the computation offloading model in terms of both delay and energy consumption, and formulate the total cost of the MEC system in the single-cell multi-mobile-user scene as an optimization problem according to the network communication and task execution models.
5) Jointly optimize delay and energy consumption. Design a combined dynamic task offloading and resource allocation strategy based on the deep deterministic policy gradient method, finding an optimal trade-off that reduces delay and energy consumption simultaneously as the real-time environment changes.
In step 1) provided by the embodiment of the present invention, constructing an application scenario of a single MEC server for multiple end users comprises the following steps:
1.1) Single-MEC scenario modeling for multiple mobile users in a cell
Construct a system network architecture consisting of a single base station, a plurality of user terminals, and a public high-performance MEC server, with the set of mobile devices indexed as:
N = {1, 2, ..., |N|};
and each user device n generates a divisible compute-intensive task:
A_n = (D_n, L_n, T_n^max);
where D_n denotes the size of the data uploaded for task A_n, L_n denotes the amount of computing resources required to execute computing task A_n, and T_n^max is the maximum delay tolerable for processing task A_n.
1.2) Computation offloading decision vector
The proportion of task A_n on user device n that is offloaded to the MEC server is denoted x_n ∈ [0, 1], n ∈ N, so the proportion executed on the local device n is 1 - x_n. The locally executed subtask and the part offloaded to the MEC server run in parallel, which reduces the total task processing delay and shortens the service response time;
the final computation offloading decision vector x for all user tasks can be expressed as:
x = {x_1, x_2, ..., x_|N|};
1.3) Setting a buffer queue of waiting tasks
Suppose the whole MEC system time is divided into time slices of equal length τ_0. In view of the limited resources of mobile users, each user terminal n is given a buffer queue I_n of tasks to be executed, served in first-in first-out order, where K_n is the total amount of computation awaiting execution in queue I_n. At the start of each new time slice t + 1, the to-be-executed computation amount K_n of mobile terminal n is dynamically updated as:
K_n(t+1) = max{K_n(t) + (1 - x_n(t)) L_n(t) - τ_0 f_n^l, 0};
where K_n(t+1) is the amount of work awaiting execution in the buffer queue of mobile terminal n in the (t+1)-th time slice, x_n(t) is the decision variable indicating the local-processing/edge-offloading split of task A_n(t) on the nth user terminal in time slice t, L_n(t) is the amount of computing resources required to execute task A_n(t), and τ_0 f_n^l represents the amount of work mobile user n executes locally in time slice t (f_n^l being its local CPU frequency);
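The queue update above can be sketched as follows; the function and variable names are illustrative, and the assumption (taken from the surrounding definitions) is that the amount of work executed locally in one slice equals the slice length τ_0 times the local CPU frequency:

```python
def update_backlog(K_n, x_n, L_n, tau0, f_local):
    """Dynamic update of the to-be-executed computation amount K_n of user n.

    The locally kept share (1 - x_n) * L_n of the new task joins the queue,
    while up to tau0 * f_local cycles are worked off during one time slice;
    the backlog never drops below zero.
    """
    return max(K_n + (1.0 - x_n) * L_n - tau0 * f_local, 0.0)
```

For example, a backlog of 100 cycles with a half-offloaded 100-cycle task and a drain rate of 60 cycles per slice leaves a backlog of 90.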
1.4) Data transmission and computation at the server side
For data transmission and computation during task offloading at the server side, each mobile user's task-related information must be forwarded to the MEC server through the base station, after which the MEC server allocates corresponding computing resources to execute the task. The model therefore also needs a real-time allocation scheme for bandwidth and computing resources.
In step 2) provided by the embodiment of the present invention, the design of the network communication model includes the following steps:
Define the uplink network bandwidth resource of the base station as a fixed value W. All mobile users in the cell share this bandwidth resource, and a reasonable network bandwidth allocation decision B = {B_1, B_2, ..., B_|N|} is made for the system in each time slice. The proportion of bandwidth the base station allocates to mobile user n in time slice t is B_n ∈ [0, 1], satisfying
Σ_{n∈N} B_n ≤ 1;
Then, according to the Shannon formula, when multiple terminals in the cell offload tasks to the MEC server simultaneously, the uplink task transmission rate r_n(t) between user n and the server can be expressed as:
r_n(t) = B_n W log_2(1 + P_n g_n / (N_0 B_n W));
where P_n is the transmission power of the nth user device, g_n represents the gain of the wireless transmission channel between user n and the base station in time slice t, and N_0 is the power spectral density of the Gaussian white channel noise.
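A minimal sketch of this rate computation follows; treating the noise power as N_0 times the allocated bandwidth share B_n·W is an assumption of this reconstruction, and the names are illustrative:

```python
import math

def uplink_rate(B_n, W, P_n, g_n, N0):
    """Shannon-formula uplink rate of user n: the user transmits over its
    bandwidth share B_n * W, and the noise power scales with that share."""
    bandwidth = B_n * W
    return bandwidth * math.log2(1.0 + P_n * g_n / (N0 * bandwidth))
```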
In step 3) provided by the embodiment of the present invention, the construction of the parallel task execution model includes the following steps:
3.1) Computation offloading model classification
According to the task offloading decision x = {x_1, x_2, ..., x_|N|}, each mobile terminal can both process a task locally and offload part of the task to the MEC server for execution. The computation offloading model of a task is therefore divided into a local execution model and an edge offloading model, with local processing and edge offloading executed simultaneously in parallel, reducing the task processing delay and greatly shortening the service feedback time.
3.2) Local execution model
The local execution model processes the task data and obtains the execution result using the mobile device's own computing capacity, and mainly involves two kinds of overhead: local execution delay and device execution energy consumption. Define the local CPU computation frequency of the nth mobile terminal as f_n^l; the computation delay t_n^l of the locally executed part of the task can be expressed as:
t_n^l = (1 - x_n) L_n / f_n^l;
where (1 - x_n) L_n is the number of CPU cycles required to execute the local part of task A_n. Furthermore, for each user terminal, the local subtask must also account for the waiting time on the buffer queue of tasks awaiting local execution in the current slot:
t_n^w = K_n(t) / f_n^l;
Therefore, the total delay of the nth user terminal for locally processing its subtask is defined as:
T_n^l = t_n^w + t_n^l;
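A sketch of the local branch's delay, combining the queue waiting time with the execution time of the locally kept share (function and variable names are illustrative):

```python
def local_delay(K_n, x_n, L_n, f_local):
    """Total local-processing delay of user n: waiting time for the queued
    backlog K_n plus execution time of the kept share (1 - x_n) * L_n,
    both at the local CPU frequency f_local."""
    t_wait = K_n / f_local
    t_exec = (1.0 - x_n) * L_n / f_local
    return t_wait + t_exec
```

With a 50-cycle backlog, a half-kept 100-cycle task, and a 10-cycle-per-unit-time CPU, the delay is 5 + 5 = 10 time units.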
3.3) Edge offloading model
Processing a subtask at the edge server side generally involves three steps. First, the mobile user transmits the task-related data to the base station over a wireless link. When the base station forwards the subtask offloaded by mobile user n to the MEC server, the MEC server allocates computing resources to it; the computing resource allocation decision is expressed as:
F = {F_1, F_2, ..., F_n, ..., F_|N|};
satisfying the condition
Σ_{n∈N} F_n ≤ 1;
Finally, the MEC server feeds the execution result back to the mobile device. Because the amount of returned result data is usually very small, far below the uploaded task data, and the downlink rate of the wireless network is very high, the method ignores the delay and energy cost of returning the task result to the user. The transmission delay required for the nth user terminal to transmit the offloaded task data to the base station is defined as:
t_n^tr = x_n D_n / r_n;
and the energy consumed transmitting the corresponding offloaded data x_n D_n is:
E_n^tr = P_n t_n^tr = P_n x_n D_n / r_n;
where P_n is the uplink transmission power of the nth mobile user. After the data transmission completes, the computation time of the offloaded part of task A_n on the edge server is:
t_n^C = x_n L_n / (F_n f_C);
where F_n is the proportion of computing resources the MEC server allocates to the nth mobile user and f_C is the CPU computing capacity of the MEC server. The total processing delay of the part of the task offloaded to the MEC server is the sum of its transmission delay and execution delay, expressed as:
T_n^off = t_n^tr + t_n^C;
In summary, for time slice t, since the user's local device and the server in the MEC system can execute the task in parallel, the total processing delay of task A_n of mobile user n in the cell is:
T_n(t) = max{T_n^l(t), T_n^off(t)};
and the total energy cost required to complete task A_n is:
E_n(t) = E_n^l(t) + E_n^tr(t);
where E_n^l(t) denotes the energy the device consumes executing the local part.
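Because the two branches run in parallel, the per-task delay is the delay of the slower branch; a hedged sketch follows, with illustrative names and the transmission energy kept as a separate helper:

```python
def total_delay(x_n, D_n, L_n, K_n, f_local, r_n, F_n, f_C):
    """Per-task processing delay: max of the local branch (queue wait plus
    local execution) and the offload branch (upload plus server execution)."""
    t_local = (K_n + (1.0 - x_n) * L_n) / f_local
    t_offload = x_n * D_n / r_n + x_n * L_n / (F_n * f_C)
    return max(t_local, t_offload)

def transmission_energy(x_n, D_n, r_n, P_n):
    """Device-side energy spent uploading the offloaded data x_n * D_n."""
    return P_n * x_n * D_n / r_n
```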
in step 4) provided by the embodiment of the present invention, the total cost problem planning of the MEC system includes the following steps:
For the problems of partial task offloading and the allocation of network bandwidth and MEC computing resources, weighting factors ω_1 and ω_2 for delay and energy consumption are introduced to adjust the relative weight of time cost and energy cost according to user preferences, with ω_1 + ω_2 = 1, and the objective function is formulated as follows:
max_{x,B,F} −Σ_{n∈N} (ω_1 T_n(t) + ω_2 E_n(t)), s.t. C1–C6;
As shown above, according to the task execution model, T_n(t) and E_n(t) are determined by the task offloading decision x = {x_1, x_2, ..., x_|N|}, the bandwidth allocation policy B = {B_1, B_2, ..., B_n, ..., B_|N|}, and the computing resource allocation policy F = {F_1, F_2, ..., F_n, ..., F_|N|} adopted in time slice t. Maximizing the objective reduces, as far as possible, the weighted sum of the feedback time of task A_n and the energy consumption of the local mobile device, achieving maximum utility of the whole MEC system model.
The constraints C1–C3 state, respectively, that the offloading proportion assigned to each mobile user in the cell does not exceed 1, and that the sums of the allocated bandwidth and computing resource proportions are each at most 1; C4 states that the delay required by the locally executed part and the part offloaded to the server side of task A_n must not exceed its maximum tolerable deadline T_n^max; and C5 and C6 ensure that the time and energy cost of the adopted computation offloading scheme are no greater than the delay and energy consumption of fully local execution.
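With per-user delays T_n and energies E_n in hand, the system utility being maximized can be sketched as follows (the default weights shown match the equal split used later in the experiments; the function name is illustrative):

```python
def system_utility(delays, energies, w1=0.5, w2=0.5):
    """Negative weighted total cost over all users; maximizing this utility
    is equivalent to minimizing the delay/energy weighted sum."""
    return -sum(w1 * T + w2 * E for T, E in zip(delays, energies))
```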
In step 5) provided by the embodiment of the present invention, the joint optimization of the time delay and the energy consumption includes the following steps:
5.1) Next-action decision
The next action decision is given based on the current state; specifically, this process can be described by the quadruple M = (S, A, P_{ss',a}, R_{s,a}).
Here S is a finite set of states, A is a finite set of actions, s ∈ S is the system state in the current time slice, s' ∈ S is the next state of the system, a ∈ A is the selected action, P_{ss',a} represents the probability of transitioning from the current state s to the new state s' when performing action a, and R_{s,a} is the immediate reward obtained for the transition from s to s' after performing action a.
In addition, a discount factor γ ∈ [0, 1] is used to measure the value of rewards in future time slices: the influence on the total reward of actions taken in later time slices decays gradually. In the Markov decision process, the weighted sum G(t) of all rewards resulting from a sequence of actions starting at time slice t is described as:
G(t) = Σ_{i=0}^{∞} γ^i R(t + i + 1);
where γ^i R(t + i + 1) is the value, expressed at time slice t, of the reward obtained at time slice t + i + 1.
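For a finite trace of rewards, the discounted return above reduces to the following sketch:

```python
def discounted_return(rewards, gamma):
    """G(t) = sum over i of gamma**i * R(t + i + 1) for a finite reward
    trace, so rewards[i] is the reward received i + 1 slices after t."""
    return sum(gamma ** i * r for i, r in enumerate(rewards))
```

For instance, three unit rewards discounted by γ = 0.5 sum to 1 + 0.5 + 0.25 = 1.75.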
5.2) Design of the three function elements
Based on the system model, the state, action, and reward functions for the multi-end-user single-MEC-server application scenario are designed as follows:
5.2.1) State: the state space must first contain all information in the environment and fully reflect its change in each time slice; the system state is therefore defined as:
s(t) = [D_1(t), D_2(t), ..., D_|N|(t), L_1(t), L_2(t), ..., L_|N|(t), K_1(t), K_2(t), ..., K_|N|(t), r_1(t), r_2(t), ..., r_|N|(t)];
The system state comprises four parts: the data volume D(t) of tasks arriving at the mobile users in the cell in the current time slice t, the required number of computing resources L(t), the task volume K(t) awaiting execution in the task buffer queues, and the data transmission rate r(t) between the mobile users and the server;
5.2.2) Action: according to the data volume D(t) of each user's newly arrived task in each time slice, the required number of computing resources L(t), and the task volume K(t) awaiting execution in the buffer queues in state s(t), the Agent makes, for each task A_n, an offloading proportion decision x(t), a bandwidth resource allocation proportion B(t) for each mobile user, and a server computing resource allocation proportion F(t), i.e.:
a(t) = [x_1(t), x_2(t), ..., x_|N|(t), B_1(t), B_2(t), ..., B_|N|(t), F_1(t), F_2(t), ..., F_|N|(t)];
5.2.3) Reward function: the instantaneous reward the system obtains in time slice t is set to the objective function value above, i.e. R(t) = −Σ_{n∈N} (ω_1 T_n(t) + ω_2 E_n(t)); the larger the reward R(t) obtained by executing action a(t), the smaller the weighted sum of the time cost and energy cost of executing all user tasks in the current time slice t.
5.3) Deep Deterministic Policy Gradient (DDPG) algorithm optimization
A deep deterministic policy gradient algorithm is used to dynamically solve the task offloading decision and resource allocation scheme in each time slice, so as to minimize the objective function and reduce the weighted total cost of delay and energy consumption.
5.3.1) deep deterministic policy gradient network architecture
The DDPG network structure has two modules, Actor and Critic, and comprises four neural networks: the Actor current network, the Actor target network, the Critic current network, and the Critic target network. The deep deterministic policy gradient network architecture is shown in fig. 4.
The Actor network module selects an action and hands it to the Agent for execution, and the Critic module evaluates the Q value based on the state s(t) and action a(t). The experience replay unit stores samples of state transition data obtained from interaction with the environment for later sampling.
5.3.2) Objective functions of the deep deterministic policy gradient algorithm
The objective function of the Critic module in the DDPG algorithm is the temporal-difference error (TD-error), which represents the gap between the current valuation and the target valuation; the loss function of the Critic network is defined as the mean square of the TD-error:
L(θ^Q) = (1/m) Σ_t [R(t) + γ Q'(s(t+1), a(t+1), θ^{Q'}) − Q(s(t), a(t), θ^Q)]^2;
where m is the number of state transition samples {s(t), a(t), R(t), s(t+1)} randomly drawn from the experience replay unit, R(t) is the reward obtained by performing action a(t) in time slice t, Q'(s(t+1), a(t+1), θ^{Q'}) represents the valuation the Critic target network gives the state-action pair (s(t+1), a(t+1)) of the next time slice t + 1, θ^Q and θ^{Q'} are the weight parameters of the current and target networks, Q(s(t), a(t), θ^Q) is the valuation of the current state s(t) and action a(t) given by the Critic current network, and γ is the discount factor;
The Actor module in the DDPG algorithm updates its network parameters by gradient descent, aiming to select, as far as possible, the action a(t) = μ(s(t), θ^μ) that maximizes the valuation; its loss function is:
L(θ^μ) = −(1/m) Σ_t Q(s(t), μ(s(t), θ^μ), θ^Q);
where θ^μ is the weight parameter of the Actor current network;
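The two losses can be sketched numerically over a minibatch; here `q_next` stands in for the target network's valuation Q' and `q_now` for the current network's Q, so this is a sketch of the loss arithmetic only, not the full network update:

```python
import numpy as np

def critic_loss(rewards, q_next, q_now, gamma):
    """Mean squared TD-error over a minibatch of m samples: the target is
    y = R(t) + gamma * Q'(s(t+1), a(t+1)); the loss is mean((y - Q)^2)."""
    td_target = rewards + gamma * q_next
    return float(np.mean((td_target - q_now) ** 2))

def actor_loss(q_values):
    """The Actor ascends the Critic's valuation, i.e. it minimizes -mean(Q)."""
    return float(-np.mean(q_values))
```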
The DDPG-based multi-user single-server joint task offloading and resource allocation method comprises the following steps:
(1) input the task request set {A_1, A_2, ..., A_|N|}, the uplink network bandwidth W, and the MEC server computing capacity f_C; initialize the experience replay unit M, and randomly initialize the weights θ^μ and θ^Q of the Actor current network μ(s(t), θ^μ) and the Critic current network Q(s(t), a(t), θ^Q);
(2) randomly generate m state transition samples for exploration, and receive the initial state s(0);
(3) generate an action a(t) = μ(s(t), θ^μ) + noise(t) according to the current policy and the exploration noise;
(4) execute the offloading decision and resource allocation of action a(t), obtaining the reward R(t) and the state s(t+1) of the next time slice;
(5) randomly select m state transition tuples {s(t), a(t), R(t), s(t+1)} from the experience replay unit M;
(6) update the Critic and Actor target networks in soft-update fashion based on the loss functions;
(7) repeat steps (3) to (6) T times, where T is the number of time slices;
(8) repeat steps (2) to (7) E times, where E is the number of episodes;
(9) output the task offloading decision x, the bandwidth resource allocation scheme B, and the computing resource allocation scheme F.
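The steps above amount to the following loop skeleton; `env`, `actor`, `critic`, and all their methods are placeholders for this sketch, not a real API:

```python
import random

def train_ddpg(env, actor, critic, episodes, T, m, noise):
    """Skeleton of the DDPG training loop sketched above: per episode,
    roll out T time slices, store transitions in the replay unit, and
    update the networks from a random minibatch each slice."""
    replay = []                       # experience replay unit M
    for _ in range(episodes):         # outer loop over E episodes
        s = env.reset()               # initial state s(0)
        for _ in range(T):            # inner loop over T time slices
            a = actor.act(s) + noise()            # policy + exploration noise
            s_next, reward = env.step(a)          # reward R(t), state s(t+1)
            replay.append((s, a, reward, s_next))
            batch = random.sample(replay, min(m, len(replay)))
            critic.update(batch)                  # loss-based updates
            actor.update(batch)
            critic.soft_update_target()           # soft target updates
            actor.soft_update_target()
            s = s_next
    return actor                      # trained offloading policy
```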
As shown in fig. 3, the single-server partial computation offload system in the mobile edge environment provided by the present invention specifically includes:
The MEC service scene construction module 1 is used for abstracting the application scenario in which a single base station and an edge server are deployed around a plurality of mobile users in a single cell.
The network communication module 2 is used for sending each user's task input data to the MEC server through the base station uplink, under the condition that the single base station deployed in the cell provides wireless network service for the users.
The task execution module 3 is used for dividing the computation offloading model of a task into a local execution model and an edge offloading model, executing the local processing part and the edge offloading part simultaneously in parallel so as to reduce the task processing delay and greatly shorten the service feedback time.
The MEC system total cost problem planning module 4 improves the quality of experience of each mobile user in the cell, measures the performance of the computation offloading model in terms of both delay and energy consumption, and formulates the total cost of the MEC system in the single-cell multi-mobile-user scene as an optimization problem according to the network communication and task execution models.
The delay and energy consumption joint optimization module 5 is used for solving the planned problem in each time slice, realizing joint optimization of delay and energy consumption in the multi-mobile-user single-server MEC scene across all time slices.
The technical effects of the present invention will be described in detail with reference to simulation experiments.
1. Experimental setup
To verify the performance of the proposed partial computation offloading algorithm based on the deep deterministic policy gradient, simulation experiments are conducted in Python 3.6, with JetBrains PyCharm as the integrated development environment.
2. Content of the experiment
Because the proposed model differs from those in the existing edge computing task offloading literature, to verify the performance of the proposed algorithm, the DDPG-based partial computation offloading algorithm is compared with three baseline algorithms and a partial computation offloading algorithm based on a Deep Q-Network (DQN).
(1) Local execution (AL): the tasks generated by each mobile user in each time slice are all executed locally, so no bandwidth or computing resources need to be allocated.
(2) Offload execution with proportional resource allocation (AOAPF): all tasks are offloaded to the MEC server for execution, and bandwidth and computing resources are allocated in proportion to each mobile user's task data size and required computing resources.
(3) Offload execution with random allocation (AOARF): all tasks are offloaded to the MEC server for execution, and bandwidth and computing resources are allocated at random.
(4) DQN-based partial computation offloading algorithm: uses the same states, actions, and reward functions as the proposed DDPG-based partial computation offloading algorithm, but the action space of the DQN algorithm is discrete. For each mobile user n, the task offloading proportion decision α_n, the bandwidth resource allocation proportion decision B_n, and the computing resource allocation proportion decision F_n are each discretized into 6 levels, denoted α_{n,level}, B_{n,level}, and F_{n,level} respectively. Then, under the constraints Σ_{n∈N} B_n ≤ 1 and Σ_{n∈N} F_n ≤ 1, the action space selectable by each user is: α_{n,level} × B_{n,level} × F_{n,level}.
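Under a 6-level discretization, each user's joint action grid can be enumerated as below; the specific level values are assumed for illustration, since the text fixes only the number of levels:

```python
from itertools import product

def dqn_action_grid(levels=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """Per-user discrete action space of the DQN baseline: the Cartesian
    product of the offloading-proportion, bandwidth-share, and
    computing-share grids, each with 6 levels (6**3 = 216 joint actions
    before the sum constraints are applied)."""
    return list(product(levels, repeat=3))
```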
3. Experimental results and performance analysis
As shown in FIG. 5, the proposed DDPG-based partial computation offloading algorithm is trained for 1000 episodes, each episode containing 1000 time slices, with the delay and energy weighting factors ω_1 and ω_2 both set to 0.5. First, Table 1 shows the influence of the MEC server computing capacity f_C on the performance of the partial computation offloading policies in the proposed multi-user single-server MEC system. The experiments show that as the MEC server computing capacity f_C grows, the proposed DDPG-based computation offloading algorithm performs best.
TABLE 1 Effect of MEC Server computing capacity on Total System benefit
As shown in fig. 6, simulation experiments are run with the number of terminal devices set to 2, 3, 4, 5, and 6, and table 2 shows how the total system benefit varies as the number of mobile devices increases. The experiments show that as the number of mobile users keeps growing, the total system benefit of all five algorithms (DDPG, AL, AOAPF, AOARF, and DQN) trends downward; the DDPG-based partial offloading strategy performs best, effectively reducing the long-term weighted total overhead of delay and energy consumption of the whole multi-user single-server MEC system, demonstrating its feasibility.
TABLE 2 influence of number of mobile devices on the overall benefit of the system
It should be noted that the embodiments of the present invention can be realized by hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the apparatus and methods described above may be implemented using computer-executable instructions and/or embodied in processor control code, such code being provided on a carrier medium such as a disk, CD- or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The apparatus and its modules of the present invention may be implemented by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, programmable hardware devices such as field-programmable gate arrays and programmable logic devices, by software executed by various types of processors, or by a combination of hardware circuits and software, such as firmware.
The above description is only for the purpose of illustrating the present invention and is not intended to limit its scope; all modifications, equivalents and improvements that fall within the spirit and scope of the invention as defined by the appended claims are intended to be covered.

Claims (9)

1. A single server part calculation unloading method in a mobile edge environment, characterized in that the single server part calculation unloading method in the mobile edge environment specifically comprises the following steps:
step one, constructing an application scenario with a single base station and a single MEC server serving multiple user terminals, to realize unloading decisions for mobile terminal tasks;
step two, designing a network communication model, and sending each user's task input data to the MEC server through the base station uplink, so as to reasonably allocate network bandwidth resources;
step three, building a parallel task execution module for local processing and edge unloading, to improve the feedback efficiency of the application service;
step four, performing objective modeling according to the network communication delay and the task execution energy consumption, to realize problem planning of the total cost of the MEC system in the single-base-station multi-user scenario;
and step five, dynamically optimizing the task unloading and resource allocation strategy using the deep deterministic policy gradient method, realizing overall performance optimization based on delay and energy consumption.
2. The single-server partial computation offload method in mobile edge environment of claim 1, wherein in the first step, constructing a single-MEC server application scenario for multiple end-users comprises the following steps:
1) modeling a single-MEC scenario with a plurality of mobile users in a cell, constructing a system network architecture consisting of a single base station, a plurality of user terminals and a shared high-performance MEC server, the set of mobile terminal device indices being expressed as:
N = {1, 2, ...}
and each user device n generating a divisible compute-intensive task:
A_n = (D_n, L_n, T_n^max)
wherein D_n represents the size of the data amount uploaded for task A_n, L_n represents the amount of computing resources required to execute computing task A_n, and T_n^max is the maximum tolerable delay requirement for processing task A_n;
2) computing the unloading decision vector: the execution proportion of task A_n on user equipment n that is unloaded to the MEC server is denoted x_n ∈ [0, 1], n ∈ N, and the proportion of the task executed on the local device n is 1 - x_n; the two parts, local execution of the subtask and calculation unloading to the MEC server, are executed in parallel, thereby reducing the total task processing delay and improving the service response time;
the final computational offload decision vector x for all user tasks can be expressed as:
x={x1,x2,...,x|N|};
3) setting a buffer queue for tasks waiting to be executed: assuming that the time of the whole MEC system is divided into a plurality of time slices of equal length τ_0, and, in view of the limited resources of mobile users, a buffer queue I_n of tasks to be executed, served in first-in first-out order, is set for each user terminal n, with K_n being the total amount of task computation waiting to be executed on queue I_n; whenever a new time slice t+1 starts, the total amount of task computation K_n to be executed on the mobile terminal is dynamically updated as:
K_n(t+1) = K_n(t) + (1 - x_n(t)) · L_n(t) - K_n^loc(t)
wherein K_n(t+1) is the amount of tasks to be executed on the buffer queue of mobile terminal n in the (t+1)-th time slice, x_n(t) is the decision variable indicating, within time slice t, the proportion of task A_n(t) processed locally or unloaded to the edge, L_n(t) is the amount of computing resources required to execute task A_n(t), and K_n^loc(t) represents the amount of tasks executed locally by mobile user n in time slice t;
4) server-side data transmission and computation: for the data transmission and computation involved when tasks are unloaded at the server side, the information of each mobile user's task is first forwarded to the MEC server through the base station, and the MEC server then allocates corresponding computing resources to execute the task.
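The per-time-slice queue update of claim 2 can be sketched as follows; this is an illustrative Python sketch, and the names `update_queue` and `local_done` are assumptions, not terms from the patent.

```python
# Sketch of the claim-2 buffer-queue update:
# K_n(t+1) = K_n(t) + (1 - x_n(t)) * L_n(t) - K_n^loc(t),
# clamped at zero since a backlog cannot be negative (an added assumption).

def update_queue(K_n: float, x_n: float, L_n: float, local_done: float) -> float:
    """Return the backlog on user n's FIFO queue at the next time slice.

    K_n        -- computation waiting on queue I_n at slice t
    x_n        -- fraction of task A_n unloaded to the MEC server (0..1)
    L_n        -- computing resources required by the newly arrived task
    local_done -- queued computation finished locally during slice t
    """
    remaining = K_n + (1.0 - x_n) * L_n - local_done
    return max(remaining, 0.0)
```

For example, with a backlog of 100, an arriving task of 50 that is 60% unloaded, and 30 units completed locally, the next backlog is 100 + 20 - 30 = 90.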
3. The method for offloading computation of a single server portion in a mobile edge environment of claim 1, wherein in the second step, the design of the network communication model comprises the steps of:
defining the uplink network bandwidth resource of the base station as a fixed value W, all mobile users in the cell share the bandwidth resource of the base station, and the system makes a network bandwidth resource allocation decision B = {B_1, B_2, ..., B_|N|} at each time slice; the proportion of bandwidth resources allocated by the base station to mobile user n in time slice t is B_n, with B_n ∈ [0, 1] and
Σ_{n∈N} B_n ≤ 1
then, according to the Shannon formula, when multiple terminals in the cell unload tasks to the MEC server simultaneously, the uplink task transmission rate r_n between user n and the server can be expressed as:
r_n(t) = B_n(t) · W · log_2(1 + P_n · g_n(t) / (N_0 · B_n(t) · W))
wherein P_n is the transmission power of the nth user equipment, g_n(t) represents the gain of the wireless transmission channel between user n and the base station in time slice t, and N_0 is the power spectral density of the Gaussian white channel noise.
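The Shannon-rate model of claim 3 can be sketched in Python; the placement of the allocated bandwidth B_n·W inside the noise term follows the usual MEC formulation and is an assumption here, as is the function name `uplink_rate`.

```python
import math

# Sketch of the claim-3 uplink: user n receives fraction B_n of bandwidth W
# and its Shannon rate is r_n = B_n*W*log2(1 + P_n*g_n / (N0*B_n*W)).

def uplink_rate(B_n: float, W: float, P_n: float, g_n: float, N0: float) -> float:
    """Shannon capacity (bit/s) of user n's share of the shared uplink."""
    bandwidth = B_n * W                 # Hz allocated to user n
    snr = P_n * g_n / (N0 * bandwidth)  # received signal-to-noise ratio
    return bandwidth * math.log2(1.0 + snr)
```

With B_n = 1, W = 1 MHz and an SNR of 1, the rate is 1 MHz · log2(2) = 10^6 bit/s, which matches the intuition that capacity scales with allocated bandwidth.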
4. The partial computation offload method for single server in mobile edge environment according to claim 1, wherein in the third step, the construction of the parallel task execution model includes the following steps:
calculation unloading model classification: according to the task unloading decision x = {x_1, x_2, ..., x_|N|}, each mobile terminal can process a task locally and can also unload part of the task to the MEC server for execution; the calculation unloading model of a task is divided into a local execution model and an edge unloading model, with local processing and edge unloading executed simultaneously in parallel, thereby reducing the task processing delay and improving the service feedback time;
a local execution model, in which the task data are processed and the execution result obtained using the computing power of the mobile device itself, mainly involving two parts of overhead: the local execution delay T_n^l and the device execution energy consumption E_n^l; defining the local CPU computation frequency of the nth mobile terminal as f_n^l, the computation delay T_n^{l,c} of the locally executed part of the task can be expressed as:
T_n^{l,c} = (1 - x_n) · L_n / f_n^l
wherein (1 - x_n) · L_n is the number of CPU cycles required by the locally executed part of task A_n; in addition, for each user terminal, the locally processed subtask must also account for the waiting delay, in the current time slice, caused by the queue of tasks waiting to be executed locally:
T_n^{wait} = K_n(t) / f_n^l
therefore, the total delay for the nth user terminal to locally process its subtask is defined as:
T_n^l = T_n^{wait} + T_n^{l,c} = (K_n(t) + (1 - x_n) · L_n) / f_n^l
in the edge unloading model, processing a subtask at the edge server side generally requires three steps: first, the mobile user transmits the task-related data to the base station through a wireless link; then, when the base station forwards the subtask unloaded by mobile user n to the MEC server, the MEC server allocates computing resources for it, the computing resource allocation decision being expressed as:
F = {F_1, F_2, ..., F_n, ..., F_|N|}
which satisfies the condition
Σ_{n∈N} F_n ≤ 1
finally, the MEC server feeds the execution result back to the mobile device; since the data volume of the returned task execution result is usually very small, far lower than the uploaded task data, and the downlink rate of the wireless network is very high, the delay and energy consumption of returning the task result to the user are ignored; the transmission delay T_n^{tr} required for the nth user terminal to transmit the unloading task data to the base station is defined as:
T_n^{tr} = x_n · D_n / r_n
the energy consumption for transmitting the corresponding unloaded data x_n · D_n is:
E_n^{tr} = P_n · T_n^{tr} = P_n · x_n · D_n / r_n
wherein P_n is the uplink transmission power of the nth mobile user; after the data transmission is completed, the computation time of the unloaded portion of task A_n on the edge server is:
T_n^{exec} = x_n · L_n / (F_n · f^C)
wherein F_n is the proportion of computing resources allocated by the MEC server to the nth mobile user, and f^C is the CPU computing capacity of the MEC server; the total processing delay of the task unloaded to the MEC server is the sum of its transmission delay and execution delay:
T_n^{off} = T_n^{tr} + T_n^{exec}
in summary, for time slice t, since the user's local device and the server in the MEC system can execute the task in parallel, the total processing delay of task A_n of mobile user n in the cell is:
T_n = max(T_n^l, T_n^{off})
and the total energy consumption cost required to complete task A_n is:
E_n = E_n^l + E_n^{tr}.
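The claim-4 delay and energy model can be sketched as one function; this is a non-authoritative sketch, and the effective switched capacitance `kappa` used for the local compute energy is a common modeling assumption, not a term stated in the claim.

```python
# Sketch of the claim-4 cost model: local and unloaded parts run in parallel,
# so total delay is the max of the two branches; total energy is local compute
# energy plus radio transmission energy.

def task_cost(x_n, D_n, L_n, K_n, f_local, r_n, F_n, f_mec, P_n, kappa=1e-27):
    # local branch: queue wait + compute delay for the (1 - x_n) share
    t_local = (K_n + (1.0 - x_n) * L_n) / f_local
    # unload branch: upload delay + execution delay on the MEC server
    t_up = x_n * D_n / r_n
    t_exec = x_n * L_n / (F_n * f_mec)
    t_offload = t_up + t_exec
    delay = max(t_local, t_offload)                       # parallel execution
    energy = kappa * f_local**2 * (1.0 - x_n) * L_n + P_n * t_up
    return delay, energy
```

For instance, with half the task unloaded, a 10^9 Hz local CPU, a 10^6 bit/s uplink and half of a 10 GHz server, the unload branch (0.6 s) dominates the local branch (0.5 s).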
5. the method for offloading partial computation of a server in a mobile edge environment of claim 1, wherein in the fourth step, the MEC system total cost problem planning comprises the following steps:
aiming at the problems of partial task unloading and allocation of network bandwidth and MEC computing resources, weighting factors ω_1 and ω_2 for delay and energy consumption are introduced, used for adjusting the weight ratio of the time and energy consumption costs according to user-specific preferences, with ω_1 + ω_2 = 1, and the objective function is formulated as follows:
max_{x,B,F}  - Σ_{n∈N} (ω_1 · T_n + ω_2 · E_n)
s.t.
C1: 0 ≤ x_n ≤ 1, ∀n ∈ N
C2: Σ_{n∈N} B_n ≤ 1
C3: Σ_{n∈N} F_n ≤ 1
C4: T_n ≤ T_n^max, ∀n ∈ N
C5: T_n ≤ T_n^loc,full, ∀n ∈ N
C6: E_n ≤ E_n^loc,full, ∀n ∈ N
where T_n^loc,full and E_n^loc,full denote the delay and energy consumption of executing task A_n entirely locally;
as shown in the above formulation, according to the task execution model, T_n and E_n are determined by the task unloading decision x = {x_1, x_2, ..., x_|N|}, the bandwidth allocation policy B = {B_1, B_2, ..., B_n, ..., B_|N|} and the computing resource allocation policy F = {F_1, F_2, ..., F_n, ..., F_|N|} in time slice t; maximizing the objective aims to reduce as much as possible the weighted sum of the feedback time of task A_n and the energy consumption of the local mobile device, realizing utility maximization of the whole MEC system model;
the constraints C1-C3 respectively state that the unloading proportion allocated to each mobile user in the cell does not exceed 1, and that the sums of the allocated bandwidth resource proportions and of the computing resource proportions are each no greater than 1; C4 states that the delay required by the locally executed part and the part unloaded to the server side of task A_n must not exceed its maximum tolerable deadline T_n^max; and C5 and C6 ensure that the time and energy costs required by the present calculation unloading scheme are no greater than the delay and energy consumption of fully local execution.
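The weighted objective and the feasibility checks C1-C6 of claim 5 can be sketched as below; the function names and the vector-of-users representation are illustrative assumptions.

```python
# Sketch of the claim-5 objective (weighted delay/energy cost, lower is
# better) and a feasibility test of constraints C1-C6 over all users.

def system_cost(delays, energies, w1=0.5, w2=0.5):
    """Sum over users of omega1*T_n + omega2*E_n, with omega1 + omega2 = 1."""
    assert abs(w1 + w2 - 1.0) < 1e-9
    return sum(w1 * t + w2 * e for t, e in zip(delays, energies))

def feasible(x, B, F, delays, t_max, t_local, energies, e_local):
    """Check C1-C6 for per-user vectors of decisions, delays and energies."""
    return (all(0.0 <= xi <= 1.0 for xi in x)                      # C1
            and sum(B) <= 1.0 and sum(F) <= 1.0                    # C2, C3
            and all(t <= tm for t, tm in zip(delays, t_max))       # C4
            and all(t <= tl for t, tl in zip(delays, t_local))     # C5
            and all(e <= el for e, el in zip(energies, e_local)))  # C6
```

Maximizing the patent's objective is equivalent to minimizing `system_cost` subject to `feasible` returning True.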
6. The method for offloading computation of a single server portion in a mobile edge environment of claim 1, wherein in the fifth step, the joint optimization of latency and energy consumption comprises the following steps:
giving the next action decision based on the current state, the process is described by the quadruple M = (S, A, P_{ss',a}, R_{s,a});
wherein S is a finite set of states, A is a finite set of actions, s is the system state in the current time slice with s ∈ S, s' is the next system state with s' ∈ S, a is the selected action with a ∈ A, P_{ss',a} represents the probability of transitioning from the current state s to the next new state s' when performing action a, and R_{s,a} is the immediate direct reward obtained when state s transitions to s' after action a is executed;
in addition, a discount factor γ ∈ [0, 1] is used to weigh the reward value of future time slices, i.e., the influence on the total reward of actions taken in later time slices gradually decays; the weighted sum G(t) of all reward values resulting from a sequence of actions selected from time slice t in the Markov decision process is described as:
G(t) = Σ_{i=0}^{∞} γ^i · R(t + i + 1)
wherein γ^i · R(t + i + 1) is the value, discounted to time slice t, of the reward obtained at time slice t + i + 1;
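The discounted return above can be sketched in one line of Python for a finite reward sequence; the reward numbers in the example are purely illustrative.

```python
# Sketch of the discounted return G(t) = sum_i gamma^i * R(t+i+1) from the
# claim-6 Markov decision process, over a finite list of future rewards.

def discounted_return(rewards, gamma=0.9):
    """Weighted sum of future rewards; slice i is decayed by gamma^i."""
    return sum(gamma**i * r for i, r in enumerate(rewards))
```

For example, three unit rewards with γ = 0.5 give 1 + 0.5 + 0.25 = 1.75.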
design of the three function elements: on the basis of the system model and according to the application scenario of a single MEC server with multiple terminal users, the state, action and reward functions are respectively designed as follows:
the state: the state space must first ensure that all information in the environment is contained, fully reflecting the change of the environment in each time slice; the system state is therefore defined as:
s(t)=[D1(t),D2(t),...,D|N|(t),L1(t),L2(t),...,L|N|(t),K1(t),K2(t),...,K|N|(t),r1(t),r2(t),...,r|N|(t)];
the system state comprises four parts, namely data volume D (t) of tasks reached by the mobile user in a cell on a current time slice t, required computing resource number L (t), task volume K (t) to be executed on a task cache queue and data transmission rate r (t) between the mobile user and a server;
the action: according to the data volume D(t) of each user's newly arrived task in each time slice in state s(t), the required number of computing resources L(t), and the task volume K(t) to be executed on the task cache queue, the Agent makes, for each task A_n, an unloading proportion decision x(t), a bandwidth resource allocation ratio B(t) for each mobile user, and a server computing resource allocation ratio F(t), i.e.:
a(t)=[x1(t),x2(t),...,x|N|(t),B1(t),B2(t),...,B|N|(t),F1(t),F2(t),...,F|N|(t)];
the reward function: the instantaneous reward obtained by the system in time slice t is set to the objective function value in the above formula, i.e.
R(t) = - Σ_{n∈N} (ω_1 · T_n(t) + ω_2 · E_n(t))
so that the larger the reward R(t) obtained by executing action a(t), the smaller the weighted sum of the time cost and the energy consumption cost of executing all user tasks in the current time slice t;
optimization with the deep deterministic policy gradient (DDPG) algorithm: the DDPG algorithm is used to dynamically solve the task unloading decision and resource allocation scheme in each time slice, so as to minimize the objective function and reduce the weighted total cost of delay and energy consumption;
the DDPG network structure has Actor and Critic modules and comprises four neural networks: the Actor current network, the Actor target network, the Critic current network and the Critic target network;
the Actor module is used to select an action and deliver it to the Agent for execution, and the Critic module evaluates the Q value according to state s(t) and action a(t); the experience replay unit stores the state transition data samples obtained by interaction with the environment for later sampling;
the objective function of the Critic module in the DDPG algorithm is the temporal-difference error (TD-error), representing the difference between the current and expected action values, and the loss function of the Critic network is defined as the square of the TD-error:
L(θ^Q) = (1/m) · Σ_t (R(t) + γ · Q'(s(t+1), a(t+1), θ^Q') - Q(s(t), a(t), θ^Q))^2
wherein m is the number of state transition samples {s(t), a(t), R(t), s(t+1)} randomly drawn from the experience replay unit, R(t) is the reward obtained by performing action a(t) in time slice t, Q'(s(t+1), a(t+1), θ^Q') represents the evaluation value given by the Critic target network for the state s(t+1) and action a(t+1) pair at the next time slice t+1, θ^Q and θ^Q' are the weight parameters of the current and target networks, Q(s(t), a(t), θ^Q) is the value of the corresponding state s(t) and action a(t) at the current time t judged by the Critic current network, and γ is the discount factor;
the Actor module in the DDPG algorithm updates its network parameters by gradient descent, aiming to select, as far as possible, the action a(t) = μ(s(t), θ^μ) that maximizes the evaluation value; its loss function is:
J(θ^μ) = -(1/m) · Σ_t Q(s(t), μ(s(t), θ^μ), θ^Q)
wherein θ^μ is the weight parameter of the Actor current network;
the multi-user single-server joint task unloading and resource allocation method based on DDPG comprises the following steps:
1) inputting the task request set {A_1, A_2, ..., A_|N|}, the uplink network bandwidth W and the MEC server computing capacity f^C; initializing the experience replay unit M, and randomly initializing the weights θ^μ and θ^Q of the Actor current network μ(s(t), θ^μ) and the Critic current network Q(s(t), a(t), θ^Q);
2) randomly initializing m state transition data for action exploration, and receiving the initial state s(0);
3) generating an action a(t) = μ(s(t), θ^μ) + noise(t) according to the current policy and the exploration noise;
4) executing the unloading decision and resource allocation of action a(t), and obtaining the reward R(t) and the state s(t+1) of the next time slice;
5) randomly sampling m state transition tuples {s(t), a(t), R(t), s(t+1)} from the experience replay unit M;
6) updating the Critic and Actor target networks in a soft-update manner based on the loss functions;
7) repeating steps 3) to 6) T times, where T is the number of time slices;
8) repeating steps 2) to 7) E times, where E is the number of episodes;
9) outputting the task unloading decision x, the bandwidth resource allocation scheme B and the computing resource allocation scheme F.
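The "soft update" of step 6) can be sketched as the standard exponential blend of current into target weights; the blend rate `tau` (e.g. 0.005) is an assumed hyperparameter, and plain lists stand in for network weight tensors.

```python
# Sketch of the DDPG soft update used in step 6:
# theta_target <- tau * theta_current + (1 - tau) * theta_target

def soft_update(current, target, tau=0.005):
    """Return target weights nudged toward the current-network weights."""
    return [tau * c + (1.0 - tau) * t for c, t in zip(current, target)]
```

A small tau keeps the target networks slowly moving, which stabilizes the TD-targets used by the Critic loss.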
7. A single server part computation offload system in a mobile edge environment for implementing the single server part computation offload method in the mobile edge environment according to any one of claims 1 to 6, wherein the single server part computation offload system in the mobile edge environment comprises:
the MEC service scenario construction module, for abstracting the application scenario in which a single base station and an edge server are deployed around a plurality of mobile users in a single cell;
the network communication module, for the case in which the single base station deployed in the cell provides wireless network service for the users, sending each user's task input data to the MEC server through the base station uplink;
the task execution module, for dividing the calculation unloading model of a task into a local execution model and an edge unloading model, the local processing part and the edge unloading part being executed simultaneously in parallel to reduce the task processing delay and improve the service feedback time;
the MEC system total cost problem planning module, for improving the quality of experience of each mobile user in the cell, measuring the performance of the calculation unloading model in terms of both delay and energy consumption, and planning the total cost problem of the MEC system in the single-cell multi-mobile-user scenario according to the network communication and task execution models;
and the delay and energy consumption joint optimization module, for solving the problem in each time slice, realizing joint optimization of delay and energy consumption in the multi-mobile-user MEC scenario over all time slices.
8. A computer device, characterized in that the computer device comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to carry out the steps of:
constructing a single-base-station, multi-user-terminal, single-MEC-server scenario, designing a network communication model, and sending each user's task input data to the MEC server through the base station uplink;
building a local processing and edge unloading parallel task execution module, planning the total cost problem of the MEC system in the single-cell multi-mobile-user scenario in terms of network communication delay and task execution energy consumption, and jointly optimizing delay and energy consumption by means of a deep deterministic policy gradient algorithm.
9. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
constructing a single-base-station, multi-user-terminal, single-MEC-server scenario, designing a network communication model, and sending each user's task input data to the MEC server through the base station uplink;
building a local processing and edge unloading parallel task execution module, planning the total cost problem of the MEC system in the single-cell multi-mobile-user scenario in terms of network communication delay and task execution energy consumption, and jointly optimizing delay and energy consumption by means of a deep deterministic policy gradient algorithm.
CN202111060966.0A 2021-09-10 2021-09-10 Single server part calculation unloading method, system and equipment under mobile edge environment Active CN113950066B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111060966.0A CN113950066B (en) 2021-09-10 2021-09-10 Single server part calculation unloading method, system and equipment under mobile edge environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111060966.0A CN113950066B (en) 2021-09-10 2021-09-10 Single server part calculation unloading method, system and equipment under mobile edge environment

Publications (2)

Publication Number Publication Date
CN113950066A true CN113950066A (en) 2022-01-18
CN113950066B CN113950066B (en) 2023-01-17

Family

ID=79328000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111060966.0A Active CN113950066B (en) 2021-09-10 2021-09-10 Single server part calculation unloading method, system and equipment under mobile edge environment

Country Status (1)

Country Link
CN (1) CN113950066B (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114340016A (en) * 2022-03-16 2022-04-12 北京邮电大学 Power grid edge calculation unloading distribution method and system
CN114615705A (en) * 2022-03-11 2022-06-10 广东技术师范大学 Single user resource allocation strategy method based on 5G network
CN114614878A (en) * 2022-02-14 2022-06-10 哈尔滨工业大学(深圳) Matrix-vector multiplication task-based coding calculation allocation method in satellite-ground network
CN114640675A (en) * 2022-03-21 2022-06-17 中国联合网络通信集团有限公司 Unloading strategy determination method and device, electronic equipment and storage medium
CN114786215A (en) * 2022-03-22 2022-07-22 国网浙江省电力有限公司信息通信分公司 Transmission calculation joint optimization system and method for multi-base-station mobile edge calculation system
CN114866548A (en) * 2022-04-26 2022-08-05 中南大学 Task unloading method based on mobile fog calculation
CN114860345A (en) * 2022-05-31 2022-08-05 南京邮电大学 Cache-assisted calculation unloading method in smart home scene
CN114884949A (en) * 2022-05-07 2022-08-09 重庆邮电大学 Low-orbit satellite Internet of things task unloading method based on MADDPG algorithm
CN115002123A (en) * 2022-05-25 2022-09-02 西南交通大学 Fast adaptive task unloading system and method based on mobile edge calculation
CN115002113A (en) * 2022-05-26 2022-09-02 南京邮电大学 Mobile base station edge computing power resource scheduling method, system and electronic equipment
CN115002799A (en) * 2022-04-25 2022-09-02 燕山大学 Task unloading and resource allocation method for industrial hybrid network
CN115002801A (en) * 2022-04-27 2022-09-02 燕山大学 Method and device for dynamically unloading edge computing resources based on passive relay cooperation
CN115208894A (en) * 2022-07-26 2022-10-18 福州大学 Pricing and calculation unloading method based on Stackelberg game in mobile edge calculation
CN115334076A (en) * 2022-07-08 2022-11-11 电子科技大学 Service migration method and system of edge server and edge server equipment
CN115623540A (en) * 2022-11-11 2023-01-17 南京邮电大学 Edge optimization unloading method of mobile equipment
CN115858048A (en) * 2023-03-03 2023-03-28 成都信息工程大学 Hybrid key level task oriented dynamic edge arrival unloading method
CN116489711A (en) * 2023-04-25 2023-07-25 北京交通大学 Task migration method of edge computing network based on deep reinforcement learning
CN116709428A (en) * 2023-08-04 2023-09-05 华东交通大学 Calculation unloading method and system based on mobile edge calculation
CN117499999A (en) * 2023-12-29 2024-02-02 四川华鲲振宇智能科技有限责任公司 Task unloading method based on edge calculation
CN117793805A (en) * 2024-02-27 2024-03-29 厦门宇树康信息技术有限公司 Dynamic user random access mobile edge computing resource allocation method and system
CN117793805B (en) * 2024-02-27 2024-04-26 厦门宇树康信息技术有限公司 Dynamic user random access mobile edge computing resource allocation method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109951897A (en) * 2019-03-08 2019-06-28 东华大学 A kind of MEC discharging method under energy consumption and deferred constraint
CN110062026A (en) * 2019-03-15 2019-07-26 重庆邮电大学 Mobile edge calculations resources in network distribution and calculating unloading combined optimization scheme
CN111918339A (en) * 2020-07-17 2020-11-10 西安交通大学 AR task unloading and resource allocation method based on reinforcement learning in mobile edge network
CN112689296A (en) * 2020-12-14 2021-04-20 山东师范大学 Edge calculation and cache method and system in heterogeneous IoT network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LINGLING AN等: "Joint Task Offloading and Resource Allocation via Proximal Policy Optimization for Mobile Edge Computing Network", 《2021 INTERNATIONAL CONFERENCE ON NETWORKING AND NETWORK APPLICATIONS》 *
张文献,杜永文: "基于深度强化学习多用户移动边缘计算轻量任务卸载优化", 《JOURNAL OF MEASUREMENT SCIENCE AND INSTRUMENTATION》 *

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114614878A (en) * 2022-02-14 2022-06-10 哈尔滨工业大学(深圳) Matrix-vector multiplication task-based coding calculation allocation method in satellite-ground network
CN114614878B (en) * 2022-02-14 2023-08-29 哈尔滨工业大学(深圳) Coding calculation distribution method based on matrix-vector multiplication task in star-to-ground network
CN114615705B (en) * 2022-03-11 2022-12-20 广东技术师范大学 Single-user resource allocation strategy method based on 5G network
CN114615705A (en) * 2022-03-11 2022-06-10 广东技术师范大学 Single user resource allocation strategy method based on 5G network
CN114340016A (en) * 2022-03-16 2022-04-12 北京邮电大学 Power grid edge calculation unloading distribution method and system
CN114340016B (en) * 2022-03-16 2022-07-26 北京邮电大学 Power grid edge calculation unloading distribution method and system
CN114640675A (en) * 2022-03-21 2022-06-17 中国联合网络通信集团有限公司 Unloading strategy determination method and device, electronic equipment and storage medium
CN114640675B (en) * 2022-03-21 2024-02-09 中国联合网络通信集团有限公司 Unloading strategy determining method and device, electronic equipment and storage medium
CN114786215B (en) * 2022-03-22 2023-10-20 国网浙江省电力有限公司信息通信分公司 Transmission and calculation joint optimization system and method for multi-base-station mobile edge calculation system
CN114786215A (en) * 2022-03-22 2022-07-22 国网浙江省电力有限公司信息通信分公司 Transmission calculation joint optimization system and method for multi-base-station mobile edge calculation system
CN115002799B (en) * 2022-04-25 2024-04-12 燕山大学 Task unloading and resource allocation method for industrial hybrid network
CN115002799A (en) * 2022-04-25 2022-09-02 燕山大学 Task unloading and resource allocation method for industrial hybrid network
CN114866548A (en) * 2022-04-26 2022-08-05 中南大学 Task unloading method based on mobile fog calculation
CN115002801A (en) * 2022-04-27 2022-09-02 燕山大学 Method and device for dynamically unloading edge computing resources based on passive relay cooperation
CN115002801B (en) * 2022-04-27 2024-04-16 燕山大学 Edge computing resource dynamic unloading method and device based on passive relay collaboration
CN114884949B (en) * 2022-05-07 2024-03-26 深圳泓越信息科技有限公司 Task unloading method for low-orbit satellite Internet of things based on MADDPG algorithm
CN114884949A (en) * 2022-05-07 2022-08-09 重庆邮电大学 Low-orbit satellite Internet of things task unloading method based on MADDPG algorithm
CN115002123A (en) * 2022-05-25 2022-09-02 西南交通大学 Fast adaptive task unloading system and method based on mobile edge calculation
CN115002123B (en) * 2022-05-25 2023-05-05 西南交通大学 System and method for rapidly adapting task offloading based on mobile edge computation
CN115002113B (en) * 2022-05-26 2023-08-01 南京邮电大学 Mobile base station edge computing power resource scheduling method, system and electronic equipment
CN115002113A (en) * 2022-05-26 2022-09-02 南京邮电大学 Mobile base station edge computing power resource scheduling method, system and electronic equipment
CN114860345B (en) * 2022-05-31 2023-09-08 南京邮电大学 Calculation unloading method based on cache assistance in smart home scene
CN114860345A (en) * 2022-05-31 2022-08-05 南京邮电大学 Cache-assisted calculation unloading method in smart home scene
CN115334076A (en) * 2022-07-08 2022-11-11 电子科技大学 Service migration method and system of edge server and edge server equipment
CN115208894A (en) * 2022-07-26 2022-10-18 福州大学 Pricing and calculation unloading method based on Stackelberg game in mobile edge calculation
CN115208894B (en) * 2022-07-26 2023-10-13 福州大学 Pricing and calculating unloading method based on Stackelberg game in mobile edge calculation
CN115623540B (en) * 2022-11-11 2023-10-03 南京邮电大学 Edge optimization unloading method for mobile equipment
CN115623540A (en) * 2022-11-11 2023-01-17 南京邮电大学 Edge optimization unloading method of mobile equipment
CN115858048A (en) * 2023-03-03 2023-03-28 成都信息工程大学 Hybrid key level task oriented dynamic edge arrival unloading method
CN115858048B (en) * 2023-03-03 2023-04-25 成都信息工程大学 Hybrid critical task oriented dynamic arrival edge unloading method
CN116489711A (en) * 2023-04-25 2023-07-25 北京交通大学 Task migration method for edge computing networks based on deep reinforcement learning
CN116709428B (en) * 2023-08-04 2023-11-24 华东交通大学 Computation offloading method and system based on mobile edge computing
CN116709428A (en) * 2023-08-04 2023-09-05 华东交通大学 Computation offloading method and system based on mobile edge computing
CN117499999B (en) * 2023-12-29 2024-04-12 四川华鲲振宇智能科技有限责任公司 Task offloading method based on edge computing
CN117499999A (en) * 2023-12-29 2024-02-02 四川华鲲振宇智能科技有限责任公司 Task offloading method based on edge computing
CN117793805A (en) * 2024-02-27 2024-03-29 厦门宇树康信息技术有限公司 Mobile edge computing resource allocation method and system for dynamic user random access
CN117793805B (en) * 2024-02-27 2024-04-26 厦门宇树康信息技术有限公司 Mobile edge computing resource allocation method and system for dynamic user random access
CN117834643B (en) * 2024-03-05 2024-05-03 南京邮电大学 Collaborative deep neural network inference method for the industrial Internet of Things

Also Published As

Publication number Publication date
CN113950066B (en) 2023-01-17

Similar Documents

Publication Publication Date Title
CN113950066B (en) Single-server partial computation offloading method, system and device in a mobile edge environment
CN113242568B (en) Task offloading and resource allocation method in an uncertain network environment
CN113950103B (en) Multi-server full computation offloading method and system in a mobile edge environment
CN108920280B (en) Mobile edge computing task offloading method in a single-user scenario
CN110096362B (en) Multi-task offloading method based on edge server cooperation
CN112988345B (en) Dependent task offloading method and device based on mobile edge computing
CN112799823B (en) Online dispatching and scheduling method and system for edge computing tasks
CN111813506A (en) Resource-aware computation migration method, device and medium based on the particle swarm algorithm
CN113867843B (en) Mobile edge computing task offloading method based on deep reinforcement learning
CN113760511B (en) Vehicle edge computing task offloading method based on a deep deterministic policy
CN113993218A (en) Multi-agent DRL-based cooperative offloading and resource allocation method under an MEC architecture
CN113626104A (en) Multi-objective optimization offloading strategy based on deep reinforcement learning under an edge-cloud architecture
CN114205353A (en) Computation offloading method based on a hybrid-action-space reinforcement learning algorithm
CN116366576A (en) Method, device, equipment and medium for scheduling computing power network resources
CN115714820A (en) Distributed micro-service scheduling optimization method
Hu et al. Dynamic task offloading in MEC-enabled IoT networks: A hybrid DDPG-D3QN approach
Lorido-Botran et al. ImpalaE: Towards an optimal policy for efficient resource management at the edge
CN112445617B (en) Load strategy selection method and system based on mobile edge computing
CN117579701A (en) Mobile edge network computation offloading method and system
CN117436485A (en) Multi-exit-point end-edge-cloud collaborative system and method based on a latency-accuracy trade-off
CN114693141B (en) Transformer substation inspection method based on end-edge cooperation
CN115858048A (en) Dynamic-arrival edge offloading method for mixed-criticality tasks
CN113157344B (en) DRL-based energy-aware task offloading method in a mobile edge computing environment
WO2023092466A1 (en) Resource-sharing-aware online task offloading method
Feng et al. An intelligent scheduling framework for DNN task acceleration in heterogeneous edge networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant