CN113687924A - Intelligent dynamic task computing unloading method based on edge computing system - Google Patents

Intelligent dynamic task computing unloading method based on edge computing system

Info

Publication number
CN113687924A
CN113687924A
Authority
CN
China
Prior art keywords
task
denotes
unloading
user
index
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110513404.0A
Other languages
Chinese (zh)
Other versions
CN113687924B (en)
Inventor
王克浩
熊振华
刘克中
陈默子
曾旭明
Current Assignee
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date
Filing date
Publication date
Application filed by Wuhan University of Technology WUT filed Critical Wuhan University of Technology WUT
Priority to CN202110513404.0A priority Critical patent/CN113687924B/en
Publication of CN113687924A publication Critical patent/CN113687924A/en
Application granted granted Critical
Publication of CN113687924B publication Critical patent/CN113687924B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/445 Program loading or initiating
    • G06F 9/44594 Unloading
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/509 Offload

Abstract

The invention provides an intelligent dynamic task computation offloading method based on an edge computing system. First, a novel mobile edge computing model with intelligent overclocking capability is provided; then, a user task model, a system offloading decision model, a system overclocking decision model, a task offloading profit model, a local computation overhead model and an edge computation overhead model are constructed under the intelligent overclocking MEC system model. Next, using the Lyapunov principle, the multi-slot dynamic task computation offloading problem is reduced to one that depends only on the current time slot, and a mathematical problem balancing system utility and system stability is formulated. Finally, for this optimization problem, the invention provides a corresponding solution to obtain a computation offloading scheme for the dynamic task offloading problem, and the effectiveness and superiority of the scheme are demonstrated through an embodiment.

Description

Intelligent dynamic task computing unloading method based on edge computing system
Technical Field
The invention belongs to the field of computation offloading in mobile edge computing, and relates to an intelligent dynamic task computation offloading method based on an edge computing system.
Background
In recent years, various network technologies have developed rapidly, and more and more Internet of Things devices are accessing wireless networks, which puts great pressure on current communication networks. The emergence of complex distributed applications and the popularization of emerging 5G networks also place more and more new requirements on Internet of Things devices. To relieve the pressure on wireless networks and Internet of Things devices, many new network architectures have been proposed, and Mobile Edge Computing (MEC), a promising paradigm for the Internet of Things, is one of the most studied among them.
The initial architecture of MEC integrated fog computing with the Internet of Things, deploying some network devices (e.g., servers) at the edge of the network. At the heart of this Internet of Things is how to build MEC networks that support a large number of users offloading data or computing tasks to an edge network for storage and execution. Considering a 5G network with fast data transmission capability and novel MEC servers with more computing resources, an MEC-enabled Internet of Things can well meet the computing requirements of emerging applications while reducing the infrastructure pressure on the backbone network and the central cloud.
One of the key technologies of MEC networks is computation offloading, i.e. deciding which users should offload their tasks to the MEC server for execution. However, this task processing approach creates additional overhead in terms of latency and power consumption. Therefore, minimizing the computational overhead of the system while meeting the quality of service of the tasks is a significant problem worth investigating.
In existing MEC computation offloading models and algorithms, most studies consider a static network scenario, i.e. the user offloads a task once and the resources of the MEC system are fixed. In real life, network scenarios are dynamic: the offloading tasks of the user equipment are continuously generated, and the MEC server also continuously processes offloading tasks.
In static networks, some studies consider that MEC resources fall short when the offloading tasks become too large and too numerous, so they combine Mobile Cloud Computing (MCC) with MEC to extend the edge computing resources, i.e. they set up a multi-layer cloud offloading mode for different scenarios. Although this can indeed relieve the burden on the MEC system when there are too many offloading tasks, the drawback of the multi-layer cloud is the high delay between the user equipment and the multi-layer cloud, which compromises the model, and the energy required for transmitting the data also puts great strain on the battery of the user equipment.
In dynamic networks, the MEC system must account for the computation offloading problem over successive time slots. Due to the uncertainty in the size of the offloading tasks, the processing time and the transmission time are difficult to determine, which makes the offloading decisions and resource allocation decisions of the system harder to solve. Many studies set corresponding weight parameters to simplify the problem, but such solutions are obviously neither practical nor able to achieve the optimal system resource allocation. A dynamic task computation offloading method must therefore consider not only maximizing the utility of the system but also the stability of the system.
Aiming at these problems, the invention provides an intelligent overclocking MEC system model, which effectively solves the problem of insufficient MEC computing resources when the offloading tasks are too numerous and too large. On the basis of this model, an advanced computation offloading method is provided to solve the problem of balancing the benefit and stability of the MEC system in a dynamic network scenario.
Disclosure of Invention
The invention considers and optimizes the performance of the mobile edge computing system at the mobile edge computing server level; in particular, it builds an intelligent overclocking mobile edge computing system model and, on the basis of this system, solves the problem of balancing system utility and system stability in dynamic task computation offloading.
To solve the above problems, the present invention provides an intelligent overclocking mobile edge computing system model, together with its sub-models: a server overclocking model, a dynamic task model, a computation offloading model, and a resource allocation model.
Based on this model, the problem of jointly considering system utility and stability is formulated, and the offloading decision, overclocking decision, resource allocation and task offloading rate of the system are jointly optimized. The resource allocation and task offloading rate problems are strongly coupled, and a better solution can be obtained by optimizing them jointly.
Step 1: for a multi-user scenario under a mobile edge computing system with intelligent overclocking capability, a loss model L(t) of the intelligent overclocking mobile edge computing server and the users' dynamic task queue set {I_n} are constructed, where t denotes the time the server runs and the subscript n denotes the n-th user, n ∈ {1, 2, …, N};
step 2: the intelligent overclocking server in the step 1 needs to perform overclocking decision and set an overclocking decision variable pi, and the user in the step 1 needs to perform unloading decision and set an unloading decision variable x;
and step 3: when the offloading decision in step 2 is local execution, considering that the offloading rate must meet the limit when the user task is computed locally, an offloading rate variable D_n^l(s) during task execution is proposed, satisfying 0 ≤ D_n^l(s) ≤ Q_n^max(s), where the subscript n denotes the n-th user (n ∈ {1, 2, …, N}), the superscript l denotes the task local execution index, and Q_n^max(s) denotes the maximum queue backlog of the n-th user in time slot s (s ∈ {1, 2, …, S}); and the task processing time t_n^l(s) must meet the quality-of-service requirement t_n^l(s) ≤ t_n^max(s), where t_n^l(s) is the local processing latency of the offloading task and t_n^max(s) is the maximum delay allowed for the offloading task;
and step 4: when the offloading decision in step 2 is edge execution, considering the quality-of-service limit on the offloading task uploaded by the user to the edge server for execution and the computing resource limit of the edge server, the execution time t_r of the offloading task must satisfy t_r ≤ t_n^max(s), where t_r is the total latency required when the task is decided to be executed at the edge (the subscript r denotes the task edge execution index) and t_n^max(s) is the quality-of-service requirement of the offloading task; the computing resources f_n^r(s) allocated to the users by the edge server satisfy f_n^r(s) > 0 and Σ_{n∈N_r} f_n^r(s) ≤ F_r, where f_n^r(s) denotes the computing resources allocated to the offloading task of the n-th user (n ∈ {1, 2, …, N}; the superscript r denotes the task edge execution index), N_r represents the set of users who offload tasks to the mobile edge computing server, and F_r is the maximum computing resource the server can allocate; and the user task offloading rate variable D_n^r(s) satisfies 0 ≤ D_n^r(s) ≤ Q_n^max(s), where Q_n^max(s) represents the maximum queue backlog of the n-th user in time slot s;

and step 5: when the mobile edge computing server processes the offloading tasks in step 4, considering the overclocking time limit of the server, the overclocking working time t of the mobile edge computing server must satisfy t ≤ T_0, where T_0 is the maximum allowed overclocking duration of the server;

step 6: when the system processes the users' offloading tasks in steps 3 and 4, considering the stability limit of the system, the average queue backlog Q̄ of the system should satisfy Q̄ < ∞;

and step 7: based on the above steps, taking the benefit brought by the system completing all offloading tasks together with the total time cost and energy cost as the main evaluation indexes of the constructed system, a user offloading benefit model X_n is constructed (the subscript n denotes the n-th user, n ∈ {1, 2, …, N}), along with a computation overhead model Γ_n^l for offloading tasks executed locally (the superscript l denotes the task local execution index), a computation overhead model Γ_n^r for offloading tasks executed at the edge (the superscript r denotes the task edge execution index), and the average offloading utility model H̄ of the system;

and step 8: based on the models provided in steps 6 and 7, the problem of balancing the utility and stability of the system is formulated; it mainly involves jointly optimizing the offloading decision, the computing resource allocation, the offloading rate decision and the overclocking decision, and a corresponding computation offloading method is provided to solve it.
Further, the intelligent overclocking mobile edge computing system scenario in step 1 consists of one mobile edge computing server with the intelligent overclocking function and N user devices (n ∈ {1, 2, …, N}). Specifically, the offloading system operates over discrete time slots s ∈ {1, 2, …, S}, each of duration T_cyc (T_cyc is a scalar representing a fixed duration). The loss function L(t) generated in the overclocking state is given by:

L(t) = α · (t mod T_cyc)

where α > 0 is a fixed value representing the growth rate of the loss function L(t) over time t, and T_cyc is the period of the loss function.
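As an illustration, the loss accumulated in the overclocking state can be sketched in a few lines. This is a minimal sketch assuming a linear, periodic form L(t) = α · (t mod T_cyc), consistent with α being the growth rate and T_cyc the period described above; the patent gives the exact form only as a figure, so the formula here is an assumption.

```python
# Hypothetical sketch of the overclocking loss L(t).
# Assumption: L(t) grows linearly at rate alpha and resets every period T_cyc.
def overclock_loss(t: float, alpha: float = 0.3, t_cyc: float = 2.0) -> float:
    """Loss after the server has run t seconds in the overclocking state."""
    return alpha * (t % t_cyc)
```

For example, with the embodiment's values α = 0.3 and T_cyc = 2 s, the loss would rise from 0 to 0.6 within each 2-second period and then reset.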
In time slot s, the dynamic task queue model I_n (the subscript n denotes the n-th user, n ∈ {1, 2, …, N}) is expressed as:

I_n(s) = {Q_n(s), D_n(s), C_n(s), t_n^max(s)}

where Q_n(s) is the length of the queue backlog; D_n(s) is the size of the task data to be processed in the current time slot; C_n(s) is the number of CPU cycles required to process the task data D_n(s), expressed specifically as C_n(s) = μ_n(s)·D_n(s), where μ_n(s) is the complexity coefficient of the offloading task in the current time slot; and t_n^max(s) represents the maximum execution time of the current offloading task. For all of these symbols, the subscript n denotes the n-th user (n ∈ {1, 2, …, N}) and the reference sign s denotes the s-th time slot (s ∈ {1, 2, …, S}). The sets A(s) = {a_1(s), a_2(s), …, a_n(s), …, a_N(s)}, where a_n(s) denotes the size of the new offloading task received by the n-th user in time slot s, and Q(s) = {Q_1(s), Q_2(s), …, Q_n(s), …, Q_N(s)}, where Q_n(s) denotes the queue backlog of the n-th user in time slot s, respectively represent the set of new offloading tasks received by the users and the set of user queue backlogs in the current time slot.
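The queue dynamics implied by the model above can be sketched as follows. This assumes the standard Lyapunov-style backlog update, in which the backlog shrinks by the data processed or offloaded in the slot and grows by the newly arrived task a_n(s); the update rule itself is an assumption, while C_n(s) = μ_n(s)·D_n(s) is given in the text.

```python
# Sketch of one user's per-slot task queue evolution (assumed update rule).
def queue_update(q: float, d: float, a: float) -> float:
    """Next backlog: max(Q_n(s) - D_n(s), 0) + a_n(s)."""
    return max(q - d, 0.0) + a

def cpu_cycles(mu: float, d: float) -> float:
    """C_n(s) = mu_n(s) * D_n(s): CPU cycles needed for D_n(s) bits."""
    return mu * d
```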
Further, the overclocking decision model of the server described in step 2 can be defined in time slot s (s ∈ {1, 2, …, S}) as π(s) ∈ {0, 1}, where π(s) = 0 indicates that the mobile edge computing server has not started the overclocking state and π(s) = 1 indicates that it has. The offloading decision model described in step 2 can be defined as x_n(s) ∈ {0, 1}, where the subscript n denotes the n-th user (n ∈ {1, 2, …, N}) and the reference sign s denotes the s-th time slot. When x_n(s) = 0, the offloading task is processed locally; when x_n(s) = 1, the offloading task is offloaded to the mobile edge computing server for processing.
Further, in time slot s, when the offloading decision of the user task in step 3 is local execution, i.e. the offloading variable x_n(s) = 0, the maximum limit of the user's offloading rate can be specifically described as:

0 ≤ D_n^l(s) ≤ Q_n^max(s)

where Q_n^max(s) denotes the maximum queue backlog of the n-th user in time slot s. The limitation on the offloading rate mainly requires the task execution time t_n^l(s) to satisfy the quality of service:

t_n^l(s) = μ_n(s)·D_n^l(s) / f_n^l ≤ t_n^max(s)

where t_n^l(s) denotes the latency of the task executing locally and f_n^l denotes the size of the local computing resource of the n-th user (the superscript l denotes the task local execution index; the subscript n denotes the n-th user, n ∈ {1, 2, …, N}; the reference sign s denotes the s-th time slot, s ∈ {1, 2, …, S}).
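The local-execution constraints above can be checked numerically. This is a sketch under the assumption that the local processing time equals the required CPU cycles divided by the local CPU frequency f_n^l; the helper names are illustrative, not from the patent.

```python
# Sketch of the local-execution QoS check (assumed delay model t = mu*D/f).
def local_delay(mu: float, d: float, f_local: float) -> float:
    """Local processing latency for d bits of complexity mu at frequency f_local."""
    return mu * d / f_local

def max_local_offload(mu: float, f_local: float, t_max: float, q_max: float) -> float:
    """Largest D_n^l satisfying both the queue bound and the QoS deadline."""
    return min(q_max, f_local * t_max / mu)
```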
further, in time slot s, when the offloading decision of the user task in step 4 is marginal execution, offloading variable xn(s) < 1, the execution time of the uninstalling task satisfies
Figure BDA0003061184040000054
Figure BDA0003061184040000055
And
Figure BDA0003061184040000056
respectively, the transmission delay and the processing delay for the offloading task to be performed on the mobile edge computing server, where the index n indicates the nth user,
Figure BDA0003061184040000057
the superscripts p and r denote the transmission index and the task edge execution index, respectively, the index s denotes the s-th slot,
Figure BDA0003061184040000058
computing resources allocated to a user by an edge server
Figure BDA0003061184040000059
(the subscript n denotes the nth user,
Figure BDA00030611840400000510
the superscript r denotes the task edge execution index, the index s denotes the s-th slot,
Figure BDA00030611840400000511
) Satisfies the following conditions:
Figure BDA00030611840400000512
wherein the set
Figure BDA00030611840400000513
Representing the set of users who offload tasks to the computational execution of the moving edge,
Figure BDA00030611840400000514
is a constant number of times that the number of the first,
Figure BDA00030611840400000515
f is the maximum computing resource of the mobile edge compute server. When the mobile edge computing server does not start the overclocking state, the maximum computing resource of the mobile edge computing server is F. When the mobile edge computing server starts the overclocking state, the maximum computing resource of the mobile edge computing server is
Figure BDA00030611840400000516
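A feasibility check for the edge-server allocation constraints can be sketched as follows. The positive-share and capacity-sum constraints come from the text; the `boost` factor used to enlarge the capacity under overclocking is a hypothetical parameter, since the patent only states that the capacity is enlarged when the overclocking state is on.

```python
# Sketch of the edge resource-allocation feasibility test.
def allocation_feasible(shares, F, overclocked=False, boost=1.5):
    """shares: f_n^r for each offloading user; F: server capacity.
    boost is a hypothetical overclocking enlargement factor (not from the text)."""
    cap = F * boost if overclocked else F
    return all(f > 0 for f in shares) and sum(shares) <= cap
```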
Further, in time slot s, the overclocking working time t of the mobile edge computing server in step 5 satisfies:

t = Σ_{n∈N_r} t_n^r(s) ≤ T_0

where the subscript n denotes the n-th user (n ∈ {1, 2, …, N}), the superscript r denotes the task edge execution index, the reference sign s denotes the s-th time slot (s ∈ {1, 2, …, S}), and T_0 represents the maximum overclocking time allowed by the mobile edge computing server.
Further, in time slot s, the stability constraint of the task queue in step 6 satisfies:

Q̄ = lim_{S→∞} (1/S) Σ_{s=1}^{S} Σ_{n=1}^{N} E[Q_n(s)] < ∞

where Q̄ represents the average queue backlog of the system, S → ∞ represents letting the number of time slots S approach positive infinity, n denotes the n-th user, and E[Q_n(s)] indicates that the queue backlog of the n-th user is taken in expectation.
Further, in time slot s, the user offloading benefit model X_n considered in step 7 can be expressed as:

X_n(s) = ρ_n(s)·log2[1 + D_n(s)]

where ρ_n(s) denotes the offloading revenue weight of the n-th user (the subscript n denotes the n-th user, n ∈ {1, 2, …, N}; the reference sign s denotes the s-th time slot, s ∈ {1, 2, …, S}). The computation overhead model Γ_n^l(s) for offloading tasks executed locally can be expressed as:

Γ_n^l(s) = γ_n^t(s)·t_n^l(s) + γ_n^e(s)·e_n^l(s)

where γ_n^t(s) and γ_n^e(s) are respectively the delay weight and the energy consumption weight of the task data D_n(s) (the superscript t denotes the delay loss index and the superscript e denotes the energy loss index), and e_n^l(s) is the energy consumption of the task executed locally (the superscript l denotes the task local execution index). The computation overhead model Γ_n^r(s) for offloading tasks executed at the edge can be expressed as:

Γ_n^r(s) = γ_n^t(s)·[t_n^p(s) + t_n^r(s)] + γ_n^e(s)·e_n^p(s)

where e_n^p(s) denotes the transmission energy consumption of the offloading task (the superscript p denotes the transmission index, and the superscript r denotes the task edge execution index). The average offloading utility model H̄ of the system is expressed as:

H̄ = lim_{S→∞} (1/S) Σ_{s=1}^{S} Σ_{n=1}^{N} E[H_n(s)]

where H_n(s) is the offloading benefit of the n-th user (the subscript n denotes the n-th user, n ∈ {1, 2, …, N}; the reference sign s denotes the s-th time slot).
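The per-user utility terms can be sketched as below. X_n(s) = ρ_n(s)·log2[1 + D_n(s)] is given in the text; the weighted-overhead form and the combination into a net benefit H_n are assumptions consistent with the delay/energy weights described above.

```python
import math

# Sketch of the per-user utility terms (overhead/net forms are assumptions).
def offload_benefit(rho: float, d: float) -> float:
    """X_n(s) = rho * log2(1 + D)."""
    return rho * math.log2(1.0 + d)

def overhead(gamma_t: float, delay: float, gamma_e: float, energy: float) -> float:
    """Weighted delay-plus-energy cost, for either local or edge execution."""
    return gamma_t * delay + gamma_e * energy

def net_utility(x: int, benefit: float, cost_local: float, cost_edge: float) -> float:
    """Benefit minus the cost of the chosen mode (x = 0 local, x = 1 edge)."""
    return benefit - (cost_edge if x == 1 else cost_local)
```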
Further, in time slot s, the problem of trading off system utility and stability set forth in step 8 can be expressed as a drift-plus-penalty optimization over the offloading decision x(s), the overclocking decision π(s), the resource allocation f(s) and the offloading rate D(s), maximizing the penalty-weighted system utility V·H̄ while keeping the queues stable, subject to:

s.t. C1: x_n(s) ∈ {0, 1}, ∀n ∈ {1, 2, …, N}
C2: π(s) ∈ {0, 1}
C3: f_n^r(s) > 0, ∀n ∈ N_r
C4: Σ_{n∈N_r} f_n^r(s) ≤ F
C5: t ≤ T_0
C6: t_n(s) ≤ t_n^max(s), ∀n
C7: Q̄ < ∞
C8: D_n(s) ≤ Q_n^max(s), ∀n

where V is the drift-penalty factor. C1 represents the offloading decision of the system in time slot s, and C2 represents the overclocking decision of the system in time slot s. Condition C3 ensures that the computing resources assigned by the mobile edge computing server to each offloading task in time slot s are positive. C4 indicates that, in time slot s, the total computing resources used to process the offloading tasks are limited by the maximum resources of the mobile edge computing server. C5 indicates that, when the mobile edge computing server starts the overclocking state in time slot s, the working time of the server cannot exceed T_0. C6 indicates that, in time slot s, the execution time of the task data D_n(s) should meet its quality of service. C7 ensures the stability of the system. In C8, Q_n^max(s) represents the maximum queue backlog of the user, and the constraint ensures that the amount of task data offloaded by user device n in time slot s does not exceed the local queue backlog. For the above symbols, the subscript n denotes the n-th user (n ∈ {1, 2, …, N}), the reference sign s denotes the s-th time slot (s ∈ {1, 2, …, S}), the superscript p denotes the transmission index, the superscript r denotes the task edge execution index, the superscript l denotes the task local execution index, and the set N_r represents the set of users who offload tasks to the mobile edge for execution.
Further, it can be seen that the offloading decision x(s) and the overclocking decision π(s) are binary integers, while the resources f(s) allocated by the mobile edge computing server and the offloading data D(s) of the user equipment are continuous, where f(s) = {f_1^r(s), f_2^r(s), …, f_N^r(s)} and D(s) = {D_1(s), D_2(s), …, D_N(s)}. The optimization problem is therefore a nonlinear mixed-integer programming problem, which is NP-hard.
Further, the steps for solving the mathematical problem proposed in step 8 are:
Initialization: all users in the user task set offload their tasks to the mobile edge computing server at the optimal offloading rate of the locally executed task.
Step 8.1: under the two conditions of the server working with and without overclocking, the optimal offloading decision x(s) is solved using the Lagrangian algorithm and an iterative algorithm whose core idea is the greedy algorithm.
Step 8.2: according to the offloading decision obtained in step 8.1, the resource allocation decision f(s) of the mobile edge computing server and the offloading rate decision D(s) of the user offloading tasks are obtained using the Lagrangian algorithm together with a heuristic algorithm and a comparison-sorting algorithm.
Step 8.3: steps 8.1 and 8.2 are repeated until the difference between two successive objective function values is less than a small threshold ε, after which the final offloading decision x(s), resource allocation decision f(s) and task offloading rate decision D(s) are obtained.
Step 8.4: according to the computation offloading method obtained in step 8.3, the overclocking decision π(s) of the intelligent overclocking mobile edge computing system is solved using a comparison algorithm, thereby obtaining the solution {x(s), f(s), D(s), π(s)} of the original problem.
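The alternating procedure of steps 8.1 to 8.3 can be sketched as a simple loop. The inner solvers below are deliberately simplified stand-ins (a greedy, knapsack-style offloading choice and a direct objective evaluation), not the Lagrangian, heuristic and comparison-sorting algorithms of the patent; the user tuples and the threshold are illustrative.

```python
# Skeleton of the iterate-until-converged structure of steps 8.1-8.3.
# users: list of (utility_if_edge, utility_if_local, resource_demand) tuples.
def solve_slot(users, F, eps=1e-3, max_iter=50):
    x = [0] * len(users)                 # offloading decision, start all-local
    prev_obj = float("-inf")
    for _ in range(max_iter):
        # step 8.1 stand-in: greedily offload users with the largest gain
        order = sorted(range(len(users)),
                       key=lambda i: users[i][0] - users[i][1], reverse=True)
        x, used = [0] * len(users), 0.0
        for i in order:
            gain, demand = users[i][0] - users[i][1], users[i][2]
            if gain > 0 and used + demand <= F:
                x[i], used = 1, used + demand
        # step 8.2 stand-in: evaluate the objective under this decision
        obj = sum(u[0] if xi else u[1] for u, xi in zip(users, x))
        # step 8.3: stop when the objective improves by less than eps
        if obj - prev_obj < eps:
            break
        prev_obj = obj
    return x, prev_obj
```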
The invention has the following technical effects:
the invention provides a novel mobile edge calculation model with an intelligent overclocking function, and the basic problem of optimizing the system utility is considered under the model. For complex and changeable application scenes, the intelligent overclocking mobile edge computing server realizes the flexible application of an overclocking function to minimize the energy consumption of the system.
The invention expands the working time of the system into a multi-time slot, establishes a dynamic task model, and researches the balance problem of the utility and the stability of the system in a dynamic network by introducing a drift-penalty item. The invention provides an advanced calculation unloading method, and the scheme jointly considers the unloading strategy, the data unloading strategy, the calculation resource allocation strategy and the overclocking strategy of a system and proves the superiority of the novel model and the calculation unloading method.
Drawings
FIG. 1: an intelligent over-frequency mobile edge computing system model;
FIG. 2: a user task queue model;
FIG. 3: offloading the sub-problem technical route;
FIG. 4: jointly optimizing a technical route map of the sub-problem of calculating resource allocation and unloading rate;
FIG. 5: comparing the calculation cost under different servers;
FIG. 6: the server is in the calculation income graph under the two states of overclocking and non-overclocking;
FIG. 7: the system calculates a relationship graph of the overhead and the number of users;
FIG. 8: the number of user devices in different time periods;
FIG. 9: the computational overhead of the system in different time periods;
FIG. 10: a trade-off relationship between system utility and queue backlog;
FIG. 11: under four unloading algorithms, averaging the relation between queue backlog and time slot;
FIG. 12: under four unloading algorithms, the relation between the average unloading utility of the system and the time slot;
FIG. 13: under the iterative offloading algorithm and the all-offloading algorithm, the relationship between the average offloading utility of the system and the time slot.
FIG. 14: method flow chart
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiments of the present invention are described below with reference to fig. 1-14, in which:
step 1: for a multi-user scenario in a mobile edge computing system with intelligent overclocking capability, construct a loss model L(t) of the intelligent overclocking mobile edge computing server and a set {I_n} of dynamic user task queues, where t denotes the server running time and the subscript n denotes the n-th user, n ∈ {1, 2, …, N}, N = 50;
the intelligent overclocking mobile edge computing system scenario in step 1 consists of one mobile edge computing server with an intelligent overclocking function and N user devices, n ∈ {1, 2, …, N}. Specifically, the offloading system operates over discrete time slots s ∈ {1, 2, …, S}, S = 100, each time slot lasting T_cyc (T_cyc is a scalar representing a fixed duration, T_cyc = 2 s). The loss function L(t) generated in the overclocking state grows with the running time t at a rate α > 0, where α is a fixed value, set to α = 0.3, and T_cyc is the period of the loss function.
In time slot s, the dynamic task queue model I_n (the subscript n denotes the n-th user, n ∈ {1, 2, …, N}; s denotes the s-th time slot, s ∈ {1, 2, …, S}) is expressed as:

I_n(s) = {Q_n(s), D_n(s), C_n(s), T_n^max(s)},

where Q_n(s) is the length of the queue backlog; D_n(s) is the size of the task data to be processed in the current slot; C_n(s) is the number of CPU cycles required to process the task data D_n(s), expressed as C_n(s) = μ_n(s)·D_n(s), where μ_n(s) is the complexity coefficient of the offloading task in the current slot, μ_n(s) = 1; and T_n^max(s) denotes the maximum execution time of the current offloading task. The sets A(s) = {a_1(s), a_2(s), …, a_n(s), …, a_N(s)} (a_n(s) denotes the size of the new offloading task received by the n-th user in slot s, a_n(s) = 5 Mbit/s) and Q(s) = {Q_1(s), Q_2(s), …, Q_n(s), …, Q_N(s)} (Q_n(s) denotes the queue backlog of the n-th user in slot s) represent, respectively, the set of the users' new offloading tasks and the set of their queue backlogs in the current slot.
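The per-slot interaction between the arrival set A(s) and the backlog set Q(s) can be sketched as follows. This is a hedged illustration: the max(·, 0)-plus-arrival update rule and the helper name are assumptions for demonstration, not the filing's literal queue equation.

```python
# Hedged sketch of a per-slot task-queue backlog update, assuming the
# common recursion Q(s+1) = max(Q(s) - D(s), 0) + a(s).
# Symbols follow the text: Q backlog, D data processed this slot, a arrivals.

def update_backlog(Q, D, a):
    """Advance one user's queue backlog by one time slot."""
    return max(Q - D, 0.0) + a

# Example: three users, arrivals fixed at a_n(s) = 5 (Mbit) as in the text.
Q = [0.0, 2.0, 10.0]   # current backlogs Q_n(s)
D = [3.0, 3.0, 3.0]    # data each user processes this slot
Q_next = [update_backlog(q, d, 5.0) for q, d in zip(Q, D)]
print(Q_next)          # [5.0, 5.0, 12.0]
```

The max(·, 0) term keeps a backlog from going negative when a user processes more data than is queued.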
Step 2: the intelligent overclocking server in step 1 must make an overclocking decision, for which an overclocking decision variable π is set, and each user in step 1 must make an offloading decision, for which an offloading decision variable x is set;
the overclocking decision model of the server in step 2 can be defined in time slot s as π(s) ∈ {0, 1}, s ∈ {1, 2, …, S}, where π(s) = 0 indicates that the mobile edge computing server has not started the overclocking state and π(s) = 1 indicates that it has. The offloading decision model in step 2 can be defined as x_n(s) ∈ {0, 1}, where the subscript n denotes the n-th user, n ∈ {1, 2, …, N}, and s denotes the s-th time slot, s ∈ {1, 2, …, S}. When x_n(s) = 0, the offloading task is processed locally; when x_n(s) = 1, the task is offloaded to the mobile edge computing server for processing.
Step 3: when the offloading decision in step 2 is local execution, considering the offload-rate limit when the user task is computed locally, an offload-rate variable D_n^l(s) for task execution is introduced, satisfying D_n^l(s) ≤ D_n^max(s). For the offload-rate variable D_n^l(s), the subscript n denotes the n-th user, n ∈ {1, 2, …, N}, and the superscript l denotes the local-execution index; D_n^max(s) denotes the maximum queue backlog of the n-th user in time slot s, s ∈ {1, 2, …, S}. The task processing time t_n^l(s) must meet the quality-of-service requirement t_n^l(s) ≤ T_n^max(s), where t_n^l(s) is the local processing delay of the offloading task and T_n^max(s) denotes the maximum delay allowed for the offloading task.
in time slot s, when the offloading decision of the user task in step 3 is local execution, i.e. the offloading variable x_n(s) = 0, the user's maximum limit on the offload rate can be described specifically as:

0 ≤ D_n^l(s) ≤ D_n^max(s),

where D_n^max(s) denotes the maximum queue backlog of the n-th user in time slot s. The limit on the offload rate is mainly the requirement that the task execution time t_n^l(s) satisfy the quality of service:

t_n^l(s) = μ_n(s)·D_n^l(s) / f_n^l ≤ T_n^max(s),

where t_n^l(s) denotes the delay of the task executing locally (the superscript l denotes the local-execution index) and f_n^l denotes the size of the n-th user's local computing resource, drawn from [0.5, 0.7] Gigacycles/s.
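As a small numeric illustration of the two local-execution limits above, the following sketch checks the deadline and backlog caps. The helper names are hypothetical, and the parameter values are taken from the ranges stated in the text (μ_n = 1, f_n^l in [0.5, 0.7] Gigacycles/s); the deadline value is an assumed example.

```python
# Hedged sketch of the step-3 local-execution feasibility checks.
# Local processing delay is required cycles over local speed:
# t = mu * D / f_l, with C_n = mu_n * D_n as defined in the text.

def local_delay(mu, D_bits, f_l_hz):
    """Delay (s) to process D_bits of data locally at f_l_hz cycles/s."""
    return mu * D_bits / f_l_hz

def max_local_offload(Q_max, mu, f_l_hz, T_max):
    """Largest D_n^l meeting both the backlog cap and the QoS deadline."""
    return min(Q_max, f_l_hz * T_max / mu)

# Example: mu = 1 cycle/bit, f_l = 0.6 Gigacycle/s, assumed deadline 1 s.
f_l = 0.6e9
print(local_delay(1.0, 5e6, f_l))             # ~0.0083 s for a 5 Mbit task
print(max_local_offload(1e9, 1.0, f_l, 1.0))  # 600000000.0 (deadline-bound)
```

The min(·) in the second helper reflects that the offload rate is capped by whichever of the two constraints binds first.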
Step 4: when the offloading decision in step 2 is edge execution, considering the quality-of-service limit on the offloading task uploaded to the edge server for execution and the computing-resource limit of the edge server, the execution time t_r of the offloading task must satisfy t_r ≤ T_n^max(s), where t_r is the total delay required when the task is offloaded to edge execution (the subscript r denotes the edge-execution index) and T_n^max(s) is the quality-of-service requirement of the offloading task. The computing resources f_n^r(s) allocated to the user by the edge server must satisfy f_n^r(s) > 0 and Σ_{n∈N_r} f_n^r(s) ≤ F_r, where f_n^r(s) denotes the computing resource allocated to the n-th user's offloading task (the superscript r denotes the edge-execution index), N_r denotes the set of users who offload tasks to the mobile edge computing server, and F_r is the maximum computing resource the server can allocate, F_r = 10 GHz. In addition, the user task offload-rate variable D_n^r(s) must satisfy D_n^r(s) ≤ D_n^max(s), where D_n^max(s) denotes the maximum queue backlog of the n-th user in time slot s.
In time slot s, when the offloading decision of the user task in step 4 is edge execution, i.e. the offloading variable x_n(s) = 1, the execution time of the offloading task satisfies

t_n^p(s) + t_n^r(s) ≤ T_n^max(s),

where t_n^p(s) and t_n^r(s) are, respectively, the transmission delay and the processing delay of the offloading task executed on the mobile edge computing server (the superscripts p and r denote the transmission index and the edge-execution index). The computing resources f_n^r(s) allocated to the users by the edge server satisfy:

Σ_{n∈N_r(s)} f_n^r(s) ≤ F(s),

where the set N_r(s) denotes the users whose tasks are offloaded to edge execution and F is the maximum computing resource of the mobile edge computing server. When the server does not start the overclocking state, its maximum computing resource is F(s) = F; when it starts the overclocking state, its maximum computing resource is enlarged by a constant overclocking factor.
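The edge-side feasibility conditions of step 4 can be sketched as a single check. This is an illustrative assumption: the helper name and the overclocking gain factor `eta` are hypothetical (the filing only states that the overclocked capacity exceeds F by a constant factor).

```python
# Hedged sketch of the step-4 edge allocation checks: every per-user
# allocation must be positive, and the sum must stay within the server's
# capacity, which grows by an assumed factor eta > 1 when overclocked.

def edge_allocation_ok(f_alloc, F, pi, eta=1.5):
    """True if allocations f_alloc fit the (possibly overclocked) server."""
    cap = F * (eta if pi else 1.0)
    return all(f > 0 for f in f_alloc) and sum(f_alloc) <= cap

F = 10.0  # GHz, as in the text (F_r = 10 GHz)
print(edge_allocation_ok([3.0, 4.0, 5.0], F, pi=0))  # False: 12 > 10
print(edge_allocation_ok([3.0, 4.0, 5.0], F, pi=1))  # True: 12 <= 15
```

The example shows an allocation that is infeasible on the normal server but becomes feasible once the overclocked capacity applies.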
Step 5: when the mobile edge computing server in step 4 processes offloading tasks, considering the server's overclocking time limit, the overclocking working time t of the mobile edge computing server must satisfy

t ≤ T_0,

where T_0 is the maximum allowed overclocking working time of the server, T_0 = 1 s;
In time slot s, the overclocking working time t of the mobile edge computing server in step 5 satisfies t ≤ T_0, where t is determined by the processing delays t_n^r(s) of the tasks offloaded by the users (the subscript n denotes the n-th user, the superscript r denotes the edge-execution index) and T_0 denotes the maximum overclocking time allowed by the mobile edge computing server.
Step 6: when the system processes the users' offloading tasks of step 3 and step 4, considering the system stability limit, the average queue backlog Q̄ of the system should satisfy Q̄ < ∞.

In time slot s, the stability constraint of the task queue in step 6 is:

Q̄ = lim_{S→∞} (1/S) Σ_{s=1}^{S} (1/N) Σ_{n=1}^{N} E[Q_n(s)] < ∞,

where Q̄ denotes the average queue backlog of the system, S → ∞ denotes letting the number of time slots S approach positive infinity, n denotes the n-th user, and the inner sum averages the queue backlog over the N users.
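The step-6 stability metric, estimated over a finite horizon, can be sketched directly. The function name is hypothetical; the computation is the time- and user-average of the backlogs described above.

```python
# Hedged sketch of the step-6 stability metric: the time- and user-averaged
# queue backlog, estimated over a finite horizon of S slots.

def mean_backlog(history):
    """history[s][n] = Q_n(s); returns (1/S) * sum_s (1/N) * sum_n Q_n(s)."""
    S = len(history)
    return sum(sum(slot) / len(slot) for slot in history) / S

# Example: two slots, two users.
print(mean_backlog([[2.0, 4.0], [6.0, 8.0]]))   # 5.0
```

In practice the constraint requires this quantity to stay bounded as the number of slots grows, which is what the drift-plus-penalty machinery of step 8 enforces.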
Step 7: based on the preceding steps, taking the benefit the system gains from completing all offloading tasks together with the total time cost and energy cost as the main evaluation indices of the constructed system, construct a user offloading benefit model X_n (the subscript n denotes the n-th user, n ∈ {1, 2, …, N}), a computation overhead model Φ_n^l for offloading tasks executed locally (the superscript l denotes the local-execution index), a computation overhead model Φ_n^r for offloading tasks executed at the edge (the superscript r denotes the edge-execution index), and an average offloading utility model Ū of the system.
In time slot s, the user offloading benefit model X_n considered in step 7 can be expressed as:

X_n(s) = ρ_n(s)·log2[1 + D_n(s)],

where ρ_n(s) denotes the offloading gain weight of the n-th user, ρ_n(s) = 2.5. The computation overhead model Φ_n^l(s) for an offloading task executed locally can be expressed as:

Φ_n^l(s) = γ_n^t·t_n^l(s) + γ_n^e·E_n^l(s),

where γ_n^t and γ_n^e are the delay weight and the energy-consumption weight of the task data D_n(s), with γ_n^t + γ_n^e = 1 (the superscript t denotes the delay-loss index and the superscript e the energy-loss index), and E_n^l(s) is the energy consumed when the task is executed locally. The computation overhead model Φ_n^r(s) for an offloading task executed at the edge can be expressed as:

Φ_n^r(s) = γ_n^t·[t_n^p(s) + t_n^r(s)] + γ_n^e·E_n^p(s),

where E_n^p(s) denotes the transmission energy consumption of the offloading task (the superscript p denotes the transmission index). The average utility model Ū of the system is expressed as:

Ū = lim_{S→∞} (1/S) Σ_{s=1}^{S} (1/N) Σ_{n=1}^{N} E[H_n(s)],

where H_n(s) is the offloading benefit of user n, i.e. the offloading gain X_n(s) minus the corresponding computation overhead.
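The step-7 models translate into a few lines of arithmetic. The benefit X_n(s) = ρ·log2(1 + D_n(s)) follows the text verbatim; the overheads below assume the convex-combination form the text describes (delay weight plus energy weight summing to one), with hypothetical helper names.

```python
import math

# Hedged sketch of the step-7 cost models. offload_benefit implements
# X_n(s) = rho * log2(1 + D); the overheads are assumed convex
# combinations of delay and energy with gamma_t + gamma_e = 1.

def offload_benefit(rho, D):
    return rho * math.log2(1.0 + D)

def local_overhead(gamma_t, t_local, E_local):
    return gamma_t * t_local + (1.0 - gamma_t) * E_local

def edge_overhead(gamma_t, t_tx, t_proc, E_tx):
    return gamma_t * (t_tx + t_proc) + (1.0 - gamma_t) * E_tx

# Example: rho = 2.5 as in the text, D = 5 (Mbit), equal weights.
print(round(offload_benefit(2.5, 5.0), 3))
print(local_overhead(0.5, 2.0, 4.0))   # 3.0
print(edge_overhead(0.5, 0.5, 1.0, 2.0))
```

The logarithmic form of the benefit captures diminishing returns: doubling the offloaded data less than doubles the gain.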
Step 8: based on the cost models given in step 6 and step 7, pose the problem of balancing system utility and stability, which mainly involves jointly optimizing the offloading decision, the computing-resource allocation and offload-rate decisions, and the overclocking decision, and give a corresponding computation offloading method to solve it.
In time slot s, the problem of trading off system utility and stability posed in step 8 can be expressed as:

P: max I(s)
s.t. C1: x_n(s) ∈ {0, 1}, n ∈ {1, 2, …, N}
C2: π(s) ∈ {0, 1}
C3: f_n^r(s) > 0, n ∈ N_r(s)
C4: Σ_{n∈N_r(s)} f_n^r(s) ≤ F(s)
C5: t ≤ T_0
C6: t_n(s) ≤ T_n^max(s)
C7: Q̄ < ∞
C8: 0 ≤ D_n(s) ≤ D_n^max(s)

where I(s) is the drift-plus-penalty term and V is the drift-penalty factor. C1 represents the offloading decision of the system in slot s, and C2 represents the overclocking decision of the system in slot s. Condition C3 ensures that the computing resources assigned by the mobile edge computing server to each offloading task in slot s are positive. C4 indicates that in slot s the total computing resources used to process offloading tasks are limited by the maximum resources of the mobile edge computing server. C5 states that in slot s, when the mobile edge computing server starts the overclocking state, the working time of the server cannot exceed T_0. C6 states that in slot s the execution time of the task data D_n(s) should meet its quality of service. C7 ensures the stability of the system. In C8, D_n^max(s) denotes the user's maximum queue backlog, guaranteeing that the amount of task data offloaded by user device n in slot s never exceeds the local queue backlog. For the above symbols, the subscript n denotes the n-th user; s denotes the s-th time slot; the superscript p denotes the transmission index; the superscript r denotes the edge-execution index; the superscript l denotes the local-execution index; and the set N_r(s) denotes the users whose tasks are offloaded to edge execution.
It can be seen that the offloading decision x(s) and the overclocking decision π(s) are binary integers, while the resources f(s) allocated by the mobile edge computing server and the offload data D(s) of the user devices that have decided to offload are continuous. Problem P is therefore a nonlinear mixed-integer programming problem, which is NP-hard.
The steps for solving the mathematical problem posed in step 8 are as follows:

Initialization: all users in the task set offload their tasks to the mobile edge computing server at the optimal offload rate of local execution.

Step 8.1: under the two cases of server overclocking and non-overclocking, solve the optimal offloading decision x(s) using the Lagrangian method and an iterative algorithm whose core idea is the greedy algorithm.

Step 8.2: given the offloading decision obtained in step 8.1, obtain the resource-allocation decision f(s) of the mobile edge computing server and the offload-rate decision D(s) of the user offloading tasks using the Lagrangian method together with a heuristic algorithm and a comparison-sorting algorithm.

Step 8.3: repeat step 8.1 and step 8.2 until the difference between two successive objective values is smaller than a small threshold ε; the final offloading decision x(s), resource-allocation decision f(s), and task offload-rate decision D(s) are then obtained.

Step 8.4: based on the computation offloading method obtained in step 8.3, solve the overclocking decision π(s) of the intelligent overclocking mobile edge computing system by a comparison algorithm, thereby obtaining the solution {x(s), f(s), D(s), π(s)} of the original problem.
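The control flow of steps 8.1 through 8.4 can be sketched as an alternating loop. The three sub-solvers below are hypothetical stand-ins (the filing uses Lagrangian, greedy-iterative, and comparison-sorting routines); only the iteration structure mirrors the text.

```python
# Hedged skeleton of the step-8 alternating optimization.
# All three sub-solvers are illustrative stubs, not the filing's methods.

def solve_offloading(f, D, pi):            # step 8.1 stand-in
    return [1] * len(D)                    # e.g. offload everyone

def solve_resources_and_rates(x, pi):      # step 8.2 stand-in
    f = [1.0 if xi else 0.0 for xi in x]
    D = [5.0] * len(x)
    return f, D

def objective(x, f, D, pi):                # drift-plus-penalty stand-in
    return sum(D) - 0.1 * sum(f)

def alternate(n_users, pi, eps=1e-4, max_iter=50):
    x = [1] * n_users                      # initialization: offload all tasks
    f, D = solve_resources_and_rates(x, pi)
    prev = objective(x, f, D, pi)
    for _ in range(max_iter):              # step 8.3: iterate 8.1 and 8.2
        x = solve_offloading(f, D, pi)
        f, D = solve_resources_and_rates(x, pi)
        cur = objective(x, f, D, pi)
        if abs(cur - prev) < eps:          # converged within threshold eps
            break
        prev = cur
    return x, f, D, prev

# Step 8.4: compare both overclocking modes and keep the better one.
sols = {pi: alternate(4, pi) for pi in (0, 1)}
pi_star = max(sols, key=lambda pi: sols[pi][3])
x_star, f_star, D_star, _ = sols[pi_star]
print(pi_star, x_star)
```

The outer comparison over π ∈ {0, 1} is cheap because the inner alternation is run once per overclocking mode.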
FIG. 1 illustrates the service model of dynamic task offloading in the intelligent overclocking mobile edge computing system: one mobile edge computing server with an intelligent overclocking function and N users. Assume that each user has a certain amount of local data (e.g. images, video) to be processed using edge intelligence services (e.g. image recognition programs, large interactive games), and that this process takes place in a dynamic, multi-slot scenario. FIG. 2 shows the task queue model of each user: in every time slot the user continuously receives new data while sending requests to the mobile edge computing system to process part of the data in its queue.
To prevent the queue backlog of a user from growing so large that it undermines the stability of the whole mobile edge computing network, the queue backlog must be kept bounded while deciding how to offload user tasks and allocate the various computing resources so as to maximize the utility of the network; this embodiment addresses problem P of step 8. When considering problem P, the discreteness of the offloading decision, the continuity of the task offload rate and of the mobile edge computing server resources, and the strong coupling between them make the problem difficult to solve directly. The invention therefore solves the problem of this embodiment by decomposing the original problem into several sub-problems, as follows.
(1) Offloading decision sub-problem: once the computing-resource vector f and the overclocking decision π have been determined, the offloading decision sub-problem optimizes x(s) alone. To solve it, the invention gives an iterative solution scheme based on the greedy-algorithm idea; the technical route is shown in FIG. 3.
(2) Joint resource-allocation and data-offload-rate sub-problem: assuming the offloading decision x(s) and the overclocking decision π(s) are known, problem P reduces to a sub-problem in the resource allocation f(s) and the offload rate D(s). The technical route for solving this sub-problem is shown in FIG. 4, and the results are as follows.
1. Data offload-rate solution: the optimal offload rate of each user device follows in closed form from the Lagrangian analysis, where t_n(s) is the execution time of user device n's offloading task, f_n^r(s) denotes the resource allocated to user device n when the server processes tasks in the overclocking state or, respectively, without overclocking, and the time weights are re-sorted into a new sequence ordered by the execution time of each user device's offloading task.
2. Resource-allocation scheme: the optimal allocation of the mobile edge computing server's resources follows from the re-sorted sequence above, where t is the working time of the mobile edge computing server.
(3) Overclocking decision sub-problem: in time slot s, judge whether the following inequality holds:

max_{x, π(s)=0, f, D} I(s) < max_{x, π(s)=1, f, D} I(s)   (6)

If inequality (6) holds, the mobile edge computing server starts the overclocking state; otherwise it does not.
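Inequality (6) reduces to a single comparison once the two fixed-π optima are available. A minimal sketch, with a hypothetical `best_objective` standing in for solving the sub-problems at a fixed π(s):

```python
# Hedged sketch of the overclocking decision: compare the best
# drift-plus-penalty value with and without overclocking (inequality (6))
# and keep the larger. best_objective is a hypothetical stand-in that
# returns the optimum of the remaining sub-problems for a fixed pi.

def decide_overclock(best_objective):
    """Return pi(s): 1 if overclocking improves the optimum, else 0."""
    return 1 if best_objective(0) < best_objective(1) else 0

# Example: suppose overclocking raises the achievable value from 3.2 to 4.1.
print(decide_overclock(lambda pi: 4.1 if pi else 3.2))   # 1
```

A strict inequality is used, so in case of a tie the server stays in the non-overclocked state, matching the "otherwise, do not start" rule of the text.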
Further, to obtain the final computation offloading method of this embodiment, the results of the three sub-problems must be processed iteratively. The specific steps are as follows:

1: offload every user's task to the mobile edge computing server for execution and allocate computing resources according to the solution technique of the second sub-problem.

2: compute the value of the drift-plus-penalty term and record it as I_0(s).

3: determine the offloading decision in the current time slot according to the solution technique of the first sub-problem.

4: update the computing-resource allocation strategy and the user task offload rates according to the new offloading decision and equations (4) and (5). Compute the value of the drift-plus-penalty term, recorded as I_1(s).

5: if I_0(s) < I_1(s), set I_0(s) = I_1(s) and repeat steps 3 and 4 until the drift-plus-penalty value no longer improves.

6: solve the overclocking decision according to the technical scheme of the third sub-problem and give the final computation offloading method, i.e. the computation offloading method of this embodiment.
In this embodiment, all data processing and algorithms are implemented in Matlab 2018b, and the experimental environment is a host with 64-bit Windows 10, a 3.00 GHz Intel Core i5 processor, and 8 GB of 2400 MHz DDR4 memory.
To demonstrate the superiority of the intelligent overclocking model and the dynamic computation offloading method of this embodiment, comparison with the following three reference algorithms is considered:

(1) Random offloading decision: after the task offload rate obtained in step 8.1, the offloading decision x of each user is generated as a {0, 1} random number.

(2) All-offloading decision: after the task offload rate obtained in step 8.1, the task of each user is offloaded to the mobile edge computing server for execution.

(3) Local execution: after the task offload rate obtained in step 8.1, the task of each user is executed locally.
In this embodiment, the distances between the users and the base station are all equal to 150 meters; the specific scenario simulates the variation of the number of users at a bus stop in a central urban area over 24 hours; the bandwidth uses an equal-allocation scheme; and the remaining parameters are set as in Table 1.
Table 1 simulation parameter settings
1. Analysis of the computational overhead performance of the intelligent overclocking system
This experiment compares the computation offloading performance of an ordinary server, an intelligent overclocking server, and an unlimited-overclocking server in the mobile edge computing system. When the number of user devices is relatively small, it can be seen from FIGS. 5 and 6 that it is not worthwhile for the mobile edge computing server to start the overclocking state; in this experiment, overclocking does not pay off when the number of user devices is between 3 and 10. Specifically, the negative gain caused by overclocking first increases and then decreases, because when the number of users is extremely small the server's running time is extremely short, and the server's overclocking loss function and overclocking gain function are, respectively, a linear function and a higher-order concave-like function of the time t; the difference between the two must therefore first grow and then shrink, and is finally reversed.
As the number of user devices increases, the advantage brought by overclocking gradually appears and keeps growing. However, owing to the limit on the overclocking time, the growth of the overclocking gain slowly levels off. Comparing the red and green curves in FIG. 5, they overlap when the number of users is between 10 and 27. This is because, when the number of tasks is small, the mobile edge computing server can finish processing all tasks within the overclocking time limit. When the number of user devices rises to 27, the red curve moves slightly above the green curve: once the server's overclocking time in the intelligent mobile edge computing system reaches T_0, the server leaves the overclocking state, so the computational overhead of the system increases slightly, yet it remains smaller than that of an ordinary server. In FIG. 6, when the number of user devices rises above 30, the overclocking gain sometimes stops increasing, because the offloading tasks form a discrete data set: when newly added offloading data does not meet the offloading condition, it does not affect the overclocking gain.
2. Performance comparison of the four computation offloading methods
In the second experiment, the number of user devices was varied from 3 to 50, and the computational overhead of the system under different offloading decisions and different mobile edge computing servers was compared.
As can be seen from FIG. 7, when all tasks are executed locally the computational overhead is relatively large and grows linearly as the number of users increases. This is because under the local offloading decision the computing-resource allocation decision and the system overclocking decision are not involved, and the computational overhead depends only on the computing capability f_n^l of the user equipment and the computational complexity C_n of the offloading task. In this experiment the user equipment's computing resources and the number of CPU cycles required by the offloading tasks are random values within a small range, so the overhead necessarily increases in an approximately proportional trend as the number of user equipments grows.
If all tasks are executed on the mobile edge computing server, the computational overhead relates to the number of user devices as shown by the black line in FIG. 7. With few user devices, the resources of the mobile edge computing server and the bandwidth resources of the network are ample, so the delay and energy consumption of processing the offloading tasks are small and the system's computational overhead is low. As the number of user devices grows, however, the network bandwidth and the computing resources the system allocates to each offloading task shrink, and the computational overhead of the offloading tasks rises sharply; this is why the system's computational overhead grows exponentially along the black line.
When random offloading decisions are adopted, the computational overhead of the system lies between the local-execution overhead and the all-offloading overhead, for obvious reasons.
For the JOOC algorithm, the computational overhead of the system clearly remains acceptable even when the number of user devices is large. When the number of users is relatively small, the system may decide to offload all tasks to the mobile edge computing server for execution; in this case, however, the server with overclocking capability will not choose to start the overclocking state, because the loss of starting it exceeds the gain. As the number of user devices grows, the advantage of the overclocking-capable mobile edge computing server gradually appears; in this experiment, once the number of users rises to 14, starting the overclocking state yields a computational benefit.
3. Analysis of real-scenario performance of the intelligent overclocking system
This experiment simulates the fluctuation of the number of users at different times of day in certain real-life scenarios (e.g. canteens, stations) and compares the computational overhead of the four computation offloading algorithms and the performance of two different mobile edge computing systems.
As shown in FIG. 8, the commuting peaks are 8:00 to 9:00, 12:00 to 13:00, and 18:00 to 19:00 each day, when the number of users is large, while late at night and in the early morning the number of users is small. As can be seen from FIG. 9, the intelligent overclocking server performs best in every time period. From 23:00 at night to 5:00 in the morning the number of user devices is very small, and the server chooses not to start the overclocking state; during the day the number of users gradually increases, and the server starts the overclocking state. It is also clear that the more users there are, the greater the profit obtained by the intelligent overclocking mobile edge computing system.
Analysis of the trade-off between system utility and queue backlog
Based on the Lyapunov optimization framework, the experiment studies the trade-off between system utility and queue backlog, analyzing the influence of the penalty coefficient V on each by adjusting its value (from V = 100 to V = 1500 in the experiment).
As can be seen from fig. 10, as the penalty coefficient V increases, the negative system utility decreases following an inverse-proportional trend. This shows that the desired system utility can be achieved by adjusting the value of the penalty coefficient V. Fig. 10 further shows that the queue backlog grows linearly with V. Combining the two observations, the system utility and the queue backlog exhibit an [O(1/V), O(V)] trade-off with respect to the penalty coefficient V. Therefore, when the penalty coefficient V is used to regulate the system utility, the resulting queue backlog must also be taken into account, so as to avoid a high backlog that severely degrades the users' quality of service.
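The [O(1/V), O(V)] behavior described above can be illustrated with a toy drift-plus-penalty simulation. This is a minimal sketch, not the patent's algorithm: a single queue with constant arrivals, a fictitious admission-control objective V·log2(1+r) − Q·r, and made-up parameter values; it only shows the qualitative trend that a larger V buys utility at the cost of queue backlog.

```python
import math

def drift_plus_penalty(V, slots=5000, arrivals=2.0, service=1.0):
    """Toy single-queue admission control via Lyapunov drift-plus-penalty.

    Each slot admits r maximizing V*log2(1+r) - Q*r (interior optimum
    r = V/(Q ln 2) - 1, clipped to [0, arrivals]), then serves `service`
    units.  Returns (average utility, average queue backlog).
    """
    Q = 0.0
    util_sum = backlog_sum = 0.0
    for _ in range(slots):
        if Q == 0:
            r = arrivals                       # empty queue: admit everything
        else:
            r = min(arrivals, max(0.0, V / (Q * math.log(2)) - 1.0))
        util_sum += math.log2(1.0 + r)         # utility earned this slot
        Q = max(Q + r - service, 0.0)          # queue update
        backlog_sum += Q
    return util_sum / slots, backlog_sum / slots
```

With these toy parameters, raising V from 100 to 1500 raises both the time-average utility and the time-average backlog, mirroring the trend reported for fig. 10.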
Analysis of the influence of different computation offloading methods on the system
The experiment compares the influence on system performance of three baseline offloading methods and the proposed offloading method, in the intelligent overclocking mobile edge computing system.
As can be seen from fig. 11, the average queue backlog of the user equipment is always zero when the full offloading algorithm is employed. With the local offloading algorithm, the limited computing resources of the user equipment mean that, to meet the quality of service of the offloading tasks, the amount of task data processed per slot cannot be large; as a result the queue backlog of the user equipment keeps growing, with a roughly linear trend. With the random offloading method, the queue pressure of the user equipment follows no particular pattern, sometimes very large and sometimes very small; such behavior severely degrades the user experience and can even cause task execution to fail. With the computation offloading mode of the iterative algorithm, the queue backlog of the user equipment rises logarithmically and eventually stabilizes, and the final backlog is acceptable to the system.
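The qualitative backlog trends above can be reproduced with a toy simulation of the queue update Q ← max(Q + a − D, 0). The policies, rates, and parameter values below are invented for illustration and are not the patent's algorithms:

```python
import random

def simulate_queue(policy, slots=300, seed=7):
    """Per-slot queue update Q <- max(Q + a - D, 0) under a given policy.

    a: new task data arriving in the slot; D: data cleared in the slot.
    Returns the per-slot backlog history.
    """
    rng = random.Random(seed)
    Q, history = 0.0, []
    for _ in range(slots):
        a = rng.uniform(0.5, 1.5)          # new task data this slot
        if policy == "full":               # offload everything to the edge
            D = Q + a
        elif policy == "local":            # limited local compute: small fixed rate
            D = 0.3
        elif policy == "random":           # random fraction of the backlog
            D = rng.random() * (Q + a)
        else:                              # bounded service slightly above mean arrivals
            D = min(Q + a, 1.2)
        Q = max(Q + a - D, 0.0)
        history.append(Q)
    return history
```

Under these made-up rates, "full" keeps the backlog at zero, "local" grows roughly linearly (mean arrival 1.0 vs. service 0.3), "random" fluctuates erratically, and the bounded policy keeps the backlog small and stable — matching the four trends described for fig. 11.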
Fig. 12 and fig. 13 show the offloading utility over time slots produced by the four offloading algorithms. With random offloading, the offloading utility initially fluctuates sharply downward, and the fluctuation amplitude weakens over time. This is because the system consumption grows exponentially with the task data: when the task data offloaded in one slot is too large, the consumption of the system rises rapidly, which is why the system utility fluctuates dramatically at first under the random offloading method. Even so, since the task data executed each time fluctuates within a bounded range, the average offloading utility of the system fluctuates less and less as the running time increases, though it remains very low compared with the other three cases.
As can be seen in fig. 13, the gain of the iterative algorithm is highest at the beginning of the time slots. This is because initially the queue backlog of the user equipment is small and the iterative algorithm is biased towards optimizing system utility. As the time slots increase, the system utility decreases slowly, eventually falling well below the utility of local execution, because the algorithm sacrifices part of the offloading utility to stabilize the system queue backlog. Although local execution yields high utility, fig. 11 shows that its average queue backlog of the user equipment is very large, so that system is unstable. While the average utility of the system under the iterative algorithm keeps decreasing, it never falls below the average utility of the full offloading algorithm. This is because, under the idea of the algorithm, once the system reaches final stability, the task data offloaded in each time slot equals the task data newly added by the user equipment, which amounts to "full offloading" in the current time slot, so the offloading utility in the current slot equals the full offloading utility. Since the iterative algorithm gained more utility in the earlier time slots, its average system utility is always greater than that of the full offloading algorithm.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not by way of limitation, and that alterations and modifications may be effected without departing from the scope of the invention as defined in the claims appended hereto, by a person of ordinary skill in the art in light of the teachings of the present invention.

Claims (9)

1. An intelligent dynamic task computing offloading method based on an edge computing system, characterized in that,
Step 1: for a multi-user scenario under a mobile edge computing system with intelligent overclocking capability, constructing a loss model L(t) of the intelligent overclocking mobile edge computing server and a set {I_n} of dynamic task queues of the users, where t denotes the running time of the server and the index n denotes the nth user, n ∈ {1, 2, ..., N};
Step 2: the intelligent overclocking server in step 1 makes an overclocking decision, for which an overclocking decision variable π is set, and each user in step 1 makes an offloading decision, for which an offloading decision variable x is set;
Step 3: when the offloading decision in step 2 is local execution, considering that the offloading rate must satisfy a limit while the user task is computed locally, introducing an offloading rate variable D_n^l(s) for task execution that satisfies D_n^l(s) ≤ Q_n^max(s); for the offloading rate variable D_n^l(s), the index n denotes the nth user (n ∈ {1, 2, ..., N}) and the superscript l denotes local execution; Q_n^max(s) denotes the maximum queue backlog of the nth user in time slot s, where s denotes the sth time slot (s ∈ {1, 2, ..., S}); and requiring the task processing time t_n^l to meet the quality-of-service requirement t_n^l ≤ t_n^max, where t_n^l is the local processing delay of the offloading task and t_n^max denotes the maximum delay allowed for the offloading task;
Step 4: when the offloading decision in step 2 is edge execution, considering the quality-of-service limit of the offloading task uploaded by the user to the edge server and the computing resource limit of the edge server, requiring the execution time t_r of the offloading task to satisfy t_r ≤ t_n^max, where t_r is the total delay from the offloading decision to edge execution, the subscript r denotes edge execution, and t_n^max is the quality-of-service requirement of the offloading task; requiring the computing resources f_n^r allocated to the users by the edge server to satisfy f_n^r > 0 and Σ_{n ∈ N_r} f_n^r ≤ F_r, where f_n^r denotes the computing resources allocated to the offloading task of the nth user, the superscript r denotes edge execution, N_r denotes the set of users who offload tasks to the mobile edge computing server, and F_r is the maximum computing resource the server can allocate; and further requiring the user task offloading rate variable D_n^r(s) to satisfy D_n^r(s) ≤ Q_n^max(s), where Q_n^max(s) denotes the maximum queue backlog of the nth user in time slot s;
Step 5: when the mobile edge computing server in step 4 processes the offloading tasks, considering the overclocking time limit of the server, requiring the overclocking working time t of the mobile edge computing server to satisfy
t ≤ T_0,
where T_0 is the maximum allowed overclocking duration of the server;
Step 6: when the system processes the user offloading tasks of step 3 and step 4, considering the stability limit of the system, requiring the average queue backlog of the system to remain bounded over time;
And 7: based on the steps, the benefits brought by the fact that the system completes all unloading tasks, the total time cost and the energy cost are considered as main evaluation indexes of the constructed system, and a user unloading benefit model X is constructednThe index n indicates the number of users n,
Figure FDA0003061184030000027
computational overhead model for offloading tasks to be performed locally
Figure FDA0003061184030000028
The index n of which indicates the nth user,
Figure FDA0003061184030000029
the superscript l represents the task local execution label; computational overhead model for offloading task execution at edge
Figure FDA00030611840300000210
The index n of which indicates the nth user,
Figure FDA00030611840300000211
the superscript r represents the task edge execution label; and the plane of the systemEqual unloading utility model
Figure FDA00030611840300000212
And 8: based on the cost models provided in the step 6 and the step 7, the problem of balancing the utility and the stability of the system is provided, the problems of joint optimization unloading decision, calculation resource allocation and unloading rate decision and overclocking decision are mainly solved, and a corresponding calculation unloading method is provided to solve the problem.
2. The intelligent dynamic task computing offloading method based on an edge computing system of claim 1, characterized in that the intelligent overclocking mobile edge computing system scenario in step 1 consists of a mobile edge computing server with an intelligent overclocking function and N user devices; in particular, the offloading system operates in discrete time slots s ∈ {1, 2, ..., S}, each of duration T_cyc (T_cyc is a scalar representing a fixed duration); the loss function L(t) generated in the overclocking state is a periodic function of the running time t with period T_cyc, where α > 0 is a fixed value representing the growth rate of the loss function L(t) over time t;
in time slot s, the dynamic task queue model I_n (the subscript n denotes the nth user) is characterized by the quantities Q_n(s), D_n(s), C_n(s) and t_n^max(s), where Q_n(s) is the length of the queue backlog; D_n(s) is the size of the task data to be processed in the current time slot; C_n(s) is the number of CPU cycles required to process the task data D_n(s), expressed as C_n(s) = μ_n(s)D_n(s), where μ_n(s) is the complexity coefficient of the offloading task in the current time slot; and t_n^max(s) represents the maximum execution time of the current offloading task; the sets A(s) = {a_1(s), a_2(s), ..., a_n(s), ..., a_N(s)}, where a_n(s) denotes the size of the new offloading task received by the nth user in time slot s, and Q(s) = {Q_1(s), Q_2(s), ..., Q_n(s), ..., Q_N(s)}, where Q_n(s) denotes the queue backlog of the nth user in time slot s, respectively represent the set of new offloading tasks of the users and the set of user queue backlogs in the current time slot.
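The per-slot queue quantities of claim 2 can be sketched as a small data structure. The field names (`Q`, `D`, `mu`, `t_max`) are hypothetical stand-ins for Q_n(s), D_n(s), μ_n(s) and t_n^max(s); the update rule Q ← max(Q − D, 0) + a follows the standard task-queue dynamics implied by the claim:

```python
from dataclasses import dataclass

@dataclass
class TaskQueue:
    """Per-slot state of user n's dynamic task queue I_n (names illustrative)."""
    Q: float      # queue backlog Q_n(s)
    D: float      # task data processed this slot, D_n(s)
    mu: float     # complexity coefficient mu_n(s), CPU cycles per unit data
    t_max: float  # maximum allowed execution time t_n^max(s)

    def cycles(self) -> float:
        # C_n(s) = mu_n(s) * D_n(s): CPU cycles needed for this slot's task data
        return self.mu * self.D

    def advance(self, new_task: float) -> "TaskQueue":
        # backlog after serving D_n(s) and receiving the new task a_n(s)
        return TaskQueue(max(self.Q - self.D, 0.0) + new_task,
                         0.0, self.mu, self.t_max)
```

For example, a queue with backlog 10 that processes 4 units (at 500 cycles per unit) and receives 3 new units ends the slot with backlog 9.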
3. The intelligent dynamic task computing offloading method based on an edge computing system of claim 1, characterized in that the overclocking decision model of the server in step 2 is defined, in time slot s (s ∈ {1, 2, ..., S}), as π(s) ∈ {0, 1}, where π(s) = 0 indicates that the mobile edge computing server does not start the overclocking state and π(s) = 1 indicates that it does; the offloading decision model in step 2 is defined as x_n(s) ∈ {0, 1}, where the index n denotes the nth user (n ∈ {1, 2, ..., N}); when x_n(s) = 0, the offloading task is processed locally; when x_n(s) = 1, the offloading task is offloaded to the mobile edge computing server for processing.
4. The intelligent dynamic task computing offloading method based on an edge computing system of claim 1, characterized in that, in time slot s, when the offloading decision of the user task in step 3 is local execution (offloading variable x_n(s) = 0), the maximum limit of the user's offloading rate is described as D_n^l(s) ≤ Q_n^max(s), where Q_n^max(s) denotes the maximum queue backlog of the nth user in time slot s; the limit on the offloading rate mainly requires the task execution time t_n^l(s) to meet the quality of service: t_n^l(s) = μ_n(s)D_n^l(s)/f_n^l ≤ t_n^max(s), where t_n^l(s) denotes the delay of the task executing locally (the superscript l denotes local execution) and f_n^l denotes the size of the local computing resource of the nth user.
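The two claim-4 constraints, the queue limit D ≤ Q_max and the QoS limit μD/f ≤ t_max, jointly bound how much data can be processed locally in a slot. A small sketch under that assumed delay model (function and argument names are illustrative, not from the patent):

```python
def max_local_offload(Q_max, f_local, mu, t_max):
    """Largest task data D_n^l(s) a device can process locally while meeting
    both the queue limit D <= Q_max and the QoS limit mu*D/f_local <= t_max."""
    qos_limit = f_local * t_max / mu   # from t = mu*D/f <= t_max
    return max(0.0, min(Q_max, qos_limit))
```

Whichever constraint is tighter wins: with ample local compute the queue limit binds; with a tight deadline the QoS limit binds.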
5. The intelligent dynamic task computing offloading method based on an edge computing system of claim 1, characterized in that, in time slot s, when the offloading decision of the user task in step 4 is edge execution (offloading variable x_n(s) = 1), the execution time of the offloading task satisfies t_r = t_n^p(s) + t_n^r(s) ≤ t_n^max(s), where t_n^p(s) and t_n^r(s) are respectively the transmission delay and the processing delay of the offloading task executed on the mobile edge computing server (the superscripts p and r denote transmission and edge execution, respectively); the computing resources f_n^r(s) allocated to the users by the edge server satisfy Σ_{n ∈ N_r} f_n^r(s) ≤ F, where the set N_r represents the set of users who offload tasks for edge execution and F is the maximum computing resource of the mobile edge computing server; when the mobile edge computing server does not start the overclocking state, its maximum computing resource is F; when the mobile edge computing server starts the overclocking state, its maximum computing resource is enlarged beyond F.
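The claim-5 resource constraint can be sketched as a feasibility check: each offloading user needs at least enough cycles per second to finish within its deadline after the transmission delay, and the total must fit the server capacity (F, or the enlarged overclocked capacity). Function and argument names are illustrative, not from the patent:

```python
def edge_allocation_feasible(demands, F_max):
    """For each offloading user (mu, D, t_max, t_p), compute the minimum
    allocation f = mu*D / (t_max - t_p) cycles/s meeting its deadline, and
    check f > 0 for all users with sum(f) <= F_max.  Returns the allocation
    list, or None if infeasible."""
    allocs = []
    for mu, D, t_max, t_p in demands:
        slack = t_max - t_p              # time left after transmission delay
        if slack <= 0:
            return None                  # QoS cannot be met at all
        allocs.append(mu * D / slack)    # minimum cycles/s meeting the deadline
    return allocs if sum(allocs) <= F_max else None
```

Starting the overclocking state corresponds to calling the check with a larger `F_max`, which can turn an infeasible slot into a feasible one.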
6. The intelligent dynamic task computing offloading method based on an edge computing system of claim 1, characterized in that, in time slot s, the overclocking working time t of the mobile edge computing server in step 5 satisfies Σ_{n ∈ N_r} t_n^r(s) ≤ T_0, where t_n^r(s) is the edge processing delay of the nth user's offloading task in time slot s (the superscript r denotes edge execution) and T_0 represents the maximum overclocking time allowed by the mobile edge computing server.
7. The intelligent dynamic task computing offloading method based on an edge computing system of claim 1, characterized in that, in time slot s, the stability constraint of the task queue in step 6 satisfies:
Q̄ = lim_{S→∞} (1/S) Σ_{s=1}^{S} Σ_{n=1}^{N} E[Q_n(s)] < ∞,
where Q̄ represents the average queue backlog of the system, S → ∞ denotes letting the number of time slots S approach positive infinity, n denotes the nth user, and E[Q_n(s)] denotes averaging the queue backlog of the nth user.
8. The intelligent dynamic task computing offloading method based on an edge computing system of claim 1, characterized in that, in time slot s, the user offloading benefit model X_n considered in step 7 can be expressed as:
X_n(s) = ρ_n(s)log_2[1 + D_n(s)],
where ρ_n(s) denotes the offloading revenue weight of the nth user; the computational overhead model for offloading tasks executed locally can be expressed as the weighted sum of the local execution delay and the local execution energy consumption of the task data D_n(s), where the delay weight and the energy consumption weight of the task data D_n(s) carry the superscripts t (delay loss) and e (energy loss) respectively, and the energy term (superscript l denotes local execution) is the energy consumed by executing the task locally; the computational overhead model for offloading tasks executed at the edge can be expressed analogously as the weighted sum of the edge execution delay and the transmission energy consumption of the offloading task (the superscript p denotes transmission); the average utility model of the system is expressed as the time average, over all users and time slots, of the users' offloading benefit H_n(s) minus the corresponding computational overhead, where H_n(s) is the offloading benefit of the nth user;
in time slot s, the problem of trading off system utility and stability proposed in step 8 can be expressed as maximizing the system average offloading utility subject to the constraints:
s.t. C1: x_n(s) ∈ {0, 1}
C2: π(s) ∈ {0, 1}
C3: f_n^r(s) > 0
C4: Σ_{n ∈ N_r} f_n^r(s) ≤ F
C5: t ≤ T_0 when π(s) = 1
C6: the execution time of the task data D_n(s) meets its quality of service
C7: the average queue backlog of the system remains bounded
C8: D_n(s) ≤ Q_n^max(s)
where V is the drift-penalty coefficient; C1 represents the offloading decision of the system in slot s and C2 the overclocking decision; C3 ensures that the computing resources allocated by the mobile edge computing server to each offloading task in slot s are positive; C4 states that the total computing resources used for processing offloading tasks in slot s are limited by the maximum resources of the mobile edge computing server; C5 states that when the mobile edge computing server starts the overclocking state in slot s, its working time cannot exceed T_0; C6 states that the execution time of the task data D_n(s) should meet its quality of service; C7 ensures the stability of the system; in C8, Q_n^max(s) represents the maximum queue backlog of the user and ensures that the amount of task data offloaded by user equipment n in time slot s does not exceed its local queue backlog; for the above symbols, the subscript n denotes the nth user, s denotes the sth time slot, the superscript p denotes transmission, the superscript r denotes edge execution, the superscript l denotes local execution, and the set N_r represents the set of users who offload tasks for edge execution;
it can be seen that the offloading decision x(s) and the overclocking decision π(s) are binary integers, while the resources F(s) = {f_1^r(s), f_2^r(s), ..., f_N^r(s)} allocated by the mobile edge computing server and the offloading data D(s) = {D_1(s), D_2(s), ..., D_N(s)} of the decided user equipment are continuous; the formulated optimization problem is therefore a nonlinear mixed-integer programming problem, which is NP-hard.
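Because x(s) and π(s) are binary, the combinatorial part of the problem can be solved exactly only for tiny N, by exhaustive search over all 2^(N+1) binary decisions. The sketch below uses a toy objective in which the continuous f(s), D(s) sub-problems are folded into fixed per-user costs and the capacity limit is a simple task count; all names and numbers are illustrative, not the patent's model:

```python
from itertools import product

def brute_force(N, gain, local_cost, edge_cost, overclock_loss, cap, cap_oc):
    """Exhaustively search x in {0,1}^N and pi in {0,1} for a toy objective:
    each user n earns gain[n] minus edge_cost[n] (if offloaded) or
    local_cost[n] (if local); overclocking (pi=1) raises the number of
    tasks the server accepts from cap to cap_oc but costs overclock_loss."""
    best = (float("-inf"), None, None)
    for pi in (0, 1):
        limit = cap_oc if pi else cap       # overclocking enlarges capacity
        for x in product((0, 1), repeat=N):
            if sum(x) > limit:              # too many tasks for the server
                continue
            util = sum(gain[n] - (edge_cost[n] if x[n] else local_cost[n])
                       for n in range(N)) - (overclock_loss if pi else 0.0)
            if util > best[0]:
                best = (util, x, pi)
    return best
```

Even in this toy, overclocking is chosen only when the extra admitted tasks outweigh the overclocking loss, mirroring the threshold behavior reported in the experiments.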
9. The intelligent dynamic task computing offloading method based on an edge computing system of claim 1, characterized in that the mathematical problem proposed in step 8 is solved by the following steps:
Initialization: in the user task set {1, 2, ..., N}, all users offload their tasks to the mobile edge computing server, with the offloading rate set to the optimal offloading rate for local execution of the task;
Step 8.1: under the two conditions of server overclocking operation and non-overclocking operation, solving the optimal offloading decision x(s) using a Lagrangian algorithm and an iterative algorithm whose core idea is a greedy algorithm;
Step 8.2: according to the offloading decision obtained in step 8.1, solving the resource allocation decision f(s) of the mobile edge computing server and the offloading rate decision D(s) of the user offloading tasks using a Lagrangian algorithm, a heuristic algorithm, and a comparison-sorting algorithm;
Step 8.3: repeating step 8.1 and step 8.2 until the difference between two successive objective function values is less than a preset minimum threshold, thereby obtaining the final offloading decision x(s), resource allocation decision f(s), and task offloading rate decision D(s);
Step 8.4: according to the computation offloading method obtained in step 8.3, solving the overclocking decision π(s) of the intelligent overclocking mobile edge computing system using a comparison algorithm, thereby obtaining the solution {x(s), f(s), D(s), π(s)} of the original problem.
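The alternating loop of steps 8.1-8.3 can be sketched as a generic skeleton, with `solve_x` and `solve_fd` standing in for the Lagrangian/greedy subroutines, which the claim names but does not specify:

```python
def alternating_offload(objective, solve_x, solve_fd, x0, fd0,
                        eps=1e-6, max_iter=100):
    """Skeleton of the claim-9 loop: alternately fix (f, D) and solve the
    offloading decision x (step 8.1), then fix x and solve the resource/rate
    decisions (f, D) (step 8.2), until the objective changes by less than
    eps (step 8.3).  Returns the final decisions and objective value."""
    x, fd = x0, fd0
    cur = prev = objective(x, fd)
    for _ in range(max_iter):
        x = solve_x(fd)                  # step 8.1: best x for fixed (f, D)
        fd = solve_fd(x)                 # step 8.2: best (f, D) for fixed x
        cur = objective(x, fd)
        if abs(cur - prev) < eps:        # step 8.3: stop when stabilized
            break
        prev = cur
    return x, fd, cur
```

With toy one-shot subroutines (each returning the unconstrained optimum of its coordinate), the loop converges in two iterations; the overclocking decision of step 8.4 would then compare the converged objective with and without overclocking.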
CN202110513404.0A 2021-05-11 2021-05-11 Intelligent dynamic task computing and unloading method based on edge computing system Active CN113687924B (en)
