CN113687924A - Intelligent dynamic task computing unloading method based on edge computing system - Google Patents
- Publication number
- CN113687924A (application CN202110513404.0A)
- Authority
- CN
- China
- Prior art keywords
- task
- denotes
- unloading
- user
- index
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/44594—Unloading
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F2209/509—Offload
Abstract
The invention provides an intelligent dynamic task computation offloading method based on an edge computing system. First, a novel mobile edge computing model with intelligent overclocking capability is proposed; under this intelligent overclocking MEC system model, a user task model, a system offloading decision model, a system overclocking decision model, a task offloading profit model, a local computation overhead model, and an edge computation overhead model are constructed. Next, the Lyapunov principle is used to reduce the multi-slot dynamic task computation offloading problem to one that depends only on the current time slot, and a mathematical problem balancing system utility against system stability is formulated. Finally, the invention provides a corresponding solution to this optimization problem, yielding a computation offloading scheme for the dynamic task offloading problem; the effectiveness and superiority of the scheme are demonstrated through an embodiment.
Description
Technical Field
The invention belongs to the field of computation offloading in mobile edge computing, and relates to an intelligent dynamic task computation offloading method based on an edge computing system.
Background
In recent years, network technologies of all kinds have developed rapidly, and ever more Internet of Things devices are joining wireless networks, which places great pressure on current communication networks. The emergence of complex distributed applications and the spread of emerging 5G networks also impose more and more new requirements on Internet of Things devices. To relieve the pressure on wireless networks and Internet of Things devices, many new network architectures have been proposed; Mobile Edge Computing (MEC), a promising paradigm for the Internet of Things, is among the most widely studied.
The initial MEC architecture integrated fog computing with the Internet of Things, deploying network devices (e.g., servers) at the edge of the network. The core question for this Internet of Things is how to build MEC networks that let a large number of users offload data or computing tasks to the edge network for storage and execution. With a 5G network offering fast data transmission and novel MEC servers offering ample computing resources, an MEC-enabled Internet of Things can satisfy the computing demands of emerging applications while reducing the infrastructure pressure on the backbone network and the central cloud.
One of the key technologies of MEC networks is computation offloading, i.e., deciding which users should offload tasks to the MEC server for execution. However, this task-processing approach creates additional latency and power-consumption overhead. Minimizing the system's computational overhead while meeting each task's quality of service is therefore a problem well worth investigating.
Most existing MEC computation-offloading models and algorithms consider a static network scenario: the users' offloading tasks and the MEC system's resources are fixed and unchanging. In real life, network scenarios are dynamic: user devices continuously generate offloading tasks, and the MEC server continuously processes them.
For static networks, some studies observe that MEC resources fall short when offloading tasks become too numerous and too large, and therefore combine Mobile Cloud Computing (MCC) with MEC to extend the edge computing resources, i.e., they set up a multi-layer cloud offloading mode adapted to different scenarios. Although this can indeed relieve the MEC system's burden when offloading tasks pile up, the drawback of the multi-layer cloud is the high delay between the user equipment and the multi-layer cloud, which undermines the model; the energy consumed in transmitting the data also strains the user equipment's battery.
In dynamic networks, the MEC system must handle computation offloading across successive time slots. Because the size of each offloading task is uncertain, the processing time and transmission time are hard to determine, which makes the system's offloading decisions and resource-allocation decisions harder to solve. Many studies simplify the problem by fixing corresponding weight parameters, but such a solution is neither practical nor able to achieve the optimal system resource allocation. A dynamic task computation offloading method must consider not only maximizing the system's utility but also the system's stability.
To address these problems, the invention proposes an intelligent overclocking MEC system model, which effectively resolves the shortage of MEC computing resources when offloading tasks become too numerous and too large. On the basis of this model, an advanced computation offloading method is proposed to balance the benefit and stability of the MEC system in a dynamic network scenario.
Disclosure of Invention
The invention considers and optimizes the performance of the mobile edge computing system at the mobile edge computing server level: it builds an intelligent overclocking mobile edge computing system model and, on this basis, solves the trade-off between system utility and system stability in dynamic task computation offloading.
To solve the above problems, the invention provides an intelligent overclocking mobile edge computing system model, together with its sub-models: a server overclocking model, a dynamic task model, a computation offloading model, and a resource-allocation model.

Based on this model, a problem that jointly weighs system utility and stability is formulated, jointly optimizing the system's offloading decision, overclocking decision, resource allocation, and task offloading rate. The resource allocation and the task offloading rate are strongly coupled, so jointly optimizing the two yields a better solution.
Step 1: for a multi-user scenario under a mobile edge computing system with intelligent overclocking capability, a loss model L(t) of the intelligent overclocking mobile edge computing server and a set {I_n} of dynamic user task queues are constructed, where t denotes the server running time and the subscript n denotes the nth user;
step 2: the intelligent overclocking server in the step 1 needs to perform overclocking decision and set an overclocking decision variable pi, and the user in the step 1 needs to perform unloading decision and set an unloading decision variable x;
and step 3: when the unloading decision in the step 2 is executed locally, considering that the unloading rate meets the limit when the user task is calculated locally, and providing an unloading rate variable during task executionSatisfy the requirement of
For unloading rate variableThe index n of which indicates the nth user,the superscript l denotes the task local execution index.Indicating the maximum queue backlog under time slot s for the nth user, the index n indicating the nth user,the reference sign s denotes the s-th time slot,and task processing timeMeeting quality of service requirements
WhereinIs the local processing latency of the offload task, with the index n denoting the nth user,the superscript l denotes the task local execution index.Indicating the maximum delay allowed for offloading the task, with the index n indicating the nth user,
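The two local-execution constraints of step 3 amount to a simple feasibility test. The patent itself contains no code; the following Python sketch is illustrative, and all names (`d_l`, `q_max`, `t_max`, etc.) are chosen here rather than taken from the source:

```python
def local_execution_feasible(d_l, q_max, cycles_per_bit, f_local, t_max):
    """Check the step-3 constraints for local execution.

    d_l            : offloaded data volume D_n^l(s) (bits)
    q_max          : maximum queue backlog Q_n^max(s) (bits)
    cycles_per_bit : task complexity mu_n(s) (CPU cycles per bit)
    f_local        : local CPU speed f_n^l (cycles per second)
    t_max          : QoS deadline t_n^max (seconds)
    """
    rate_ok = 0.0 <= d_l <= q_max             # 0 <= D_n^l(s) <= Q_n^max(s)
    t_local = cycles_per_bit * d_l / f_local  # t_n^l = C_n(s) / f_n^l
    qos_ok = t_local <= t_max                 # t_n^l <= t_n^max
    return rate_ok and qos_ok
```

A task of 1 Mbit with a 0.6 Gigacycle/s local CPU, for instance, passes a 10 ms deadline, while a task larger than the queue backlog is rejected regardless of the deadline.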
and 4, step 4: when the unloading decision in the step 2 is edge execution, considering the service quality limit of the unloading task uploaded to the edge server by the user for execution and the computing resource limit of the edge server, and providing the execution time t of the unloading taskrSatisfy the requirement of
Wherein t isrIs the total latency required to unload a task decision to an edge execution, and the subscript r denotes the task edge execution index.Is the quality of service requirement of the offload task. And computing resources allocated to the user by the edge serverSatisfy the requirement of
WhereinIndicating the computing resources allocated to the offloading task for the nth user, with the index n indicating the nth user,the superscript r denotes the task edge execution index.Representing a collection of users who offload tasks to a mobile edge computing server, FrIs the maximum computing resource that the server can allocate, and the subscript r denotes the task edge execution index. And also user task offload rate variablesSatisfy the requirement of
For unloading rate variableThe index n of which indicates the nth user,the superscript r denotes the task edge execution index.Representing the maximum queue backlog for the nth user at time slot s.
Step 5: when the mobile edge computing server processes the offloading tasks of step 4, considering the server's overclocking time limit, the overclocking working time t of the mobile edge computing server must satisfy

t ≤ T_0

where T_0 is the maximum allowed overclocking duration of the server;
step 6: when the system processes the unloading tasks of the user in the step 3 and the step 4, the average queue backlog of the system is provided by considering the stability limit of the systemShould satisfy
Step 7: based on the above steps, taking the benefit of completing all offloading tasks together with the total time cost and energy cost as the main evaluation indices of the constructed system, a user offloading benefit model X_n is constructed, along with a computational overhead model for offloading tasks executed locally (superscript l), a computational overhead model for offloading tasks executed at the edge (superscript r), and an average offloading utility model of the system;
Step 8: based on the cost models of step 6 and step 7, the problem of balancing the system's utility and stability is formulated — chiefly the joint optimization of the offloading decision, computing-resource allocation, offloading-rate decision, and overclocking decision — and a corresponding computation offloading method is proposed to solve it.
Further, the intelligent overclocking mobile edge computing system scenario of step 1 consists of one mobile edge computing server with the intelligent overclocking function and N user devices. Specifically, the offloading system operates in discrete time slots s, each of duration T_cyc (T_cyc is a scalar representing a fixed duration). The loss function L(t) generated in the overclocking state depends on a fixed value α > 0, which represents the growth rate of L(t) over the running time t, and on T_cyc, the period of the loss function.
Here Q_n(s) is the length of the nth user's queue backlog in time slot s; D_n(s) is the size of the task data to be processed in the current time slot; C_n(s) is the number of CPU cycles required to process the task data D_n(s), expressed as C_n(s) = μ_n(s)·D_n(s), where μ_n(s) is the complexity coefficient of the offloading task in the current time slot; and t_n^max(s) represents the maximum execution time of the current offloading task. The sets A(s) = {a_1(s), a_2(s), …, a_n(s), …, a_N(s)} and Q(s) = {Q_1(s), Q_2(s), …, Q_n(s), …, Q_N(s)} respectively represent the users' newly received offloading tasks and the users' queue backlogs in the current time slot, where a_n(s) denotes the size of the new offloading task received by the nth user in time slot s and Q_n(s) denotes the nth user's queue backlog in time slot s.
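These per-slot quantities can be simulated directly. A minimal Python sketch follows; note that the queue update rule itself is an assumption here (the patent's dynamic-queue equation was lost in extraction), using the standard form Q_n(s+1) = max(Q_n(s) − D_n(s), 0) + a_n(s):

```python
def step_queue(Q, D, A):
    """One-slot update of every user's task queue backlog.

    Q : list of current backlogs Q_n(s)
    D : list of data volumes D_n(s) processed this slot
    A : list of newly arrived task sizes a_n(s)
    Assumed rule: Q_n(s+1) = max(Q_n(s) - D_n(s), 0) + a_n(s).
    """
    return [max(q - d, 0.0) + a for q, d, a in zip(Q, D, A)]

def required_cycles(mu, D):
    """C_n(s) = mu_n(s) * D_n(s): CPU cycles needed for this slot's data."""
    return [m * d for m, d in zip(mu, D)]
```

Running `step_queue` once per slot over A(s) and the chosen D(s) reproduces the backlog trajectories Q(s) used throughout the later constraints.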
Further, the overclocking decision model of the server described in step 2 can be defined, in time slot s, as π(s) ∈ {0,1}, where π(s) = 0 indicates that the mobile edge computing server has not started the overclocking state and π(s) = 1 indicates that it has. The offloading decision model described in step 2 can be defined per user as x_n(s) ∈ {0,1}, where the subscript n denotes the nth user and s the s-th time slot. When x_n(s) = 0, the offloading task is processed locally; when x_n(s) = 1, the offloading task is offloaded to the mobile edge computing server for processing.
Further, in time slot s, when the offloading decision of the user task in step 3 is local execution, i.e., the offloading variable x_n(s) = 0, the maximum limit on the user's offloading rate can be described as

0 ≤ D_n^l(s) ≤ Q_n^max(s)

where Q_n^max(s) denotes the maximum queue backlog of the nth user in time slot s. The limit on the offloading rate mainly requires the execution time of the task to satisfy the quality of service:

t_n^l(s) = C_n(s) / f_n^l ≤ t_n^max(s)

where t_n^l(s) denotes the latency of the task executing locally (superscript l is the local-execution index) and f_n^l denotes the size of the nth user's local computing resource.
further, in time slot s, when the offloading decision of the user task in step 4 is marginal execution, offloading variable xn(s) < 1, the execution time of the uninstalling task satisfies
Andrespectively, the transmission delay and the processing delay for the offloading task to be performed on the mobile edge computing server, where the index n indicates the nth user,the superscripts p and r denote the transmission index and the task edge execution index, respectively, the index s denotes the s-th slot,computing resources allocated to a user by an edge server(the subscript n denotes the nth user,the superscript r denotes the task edge execution index, the index s denotes the s-th slot,) Satisfies the following conditions:
wherein the setRepresenting the set of users who offload tasks to the computational execution of the moving edge,is a constant number of times that the number of the first,f is the maximum computing resource of the mobile edge compute server. When the mobile edge computing server does not start the overclocking state, the maximum computing resource of the mobile edge computing server is F. When the mobile edge computing server starts the overclocking state, the maximum computing resource of the mobile edge computing server is
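The capacity rule above can be sketched in a few lines of Python. The overclocking gain factor `delta` is a stand-in for the constant whose symbol did not survive in the text, so treat it as an assumed parameter:

```python
def edge_capacity(F, pi, delta):
    """Maximum allocatable computing resource in slot s.

    F     : base capacity of the MEC server (cycles/s)
    pi    : overclocking decision pi(s) in {0, 1}
    delta : assumed overclocking gain factor (illustrative)
    """
    return F * (1.0 + delta) if pi == 1 else F

def allocation_feasible(f_alloc, F, pi, delta):
    """Check f_n^r(s) > 0 for each user and sum_n f_n^r(s) <= capacity."""
    return all(f > 0 for f in f_alloc) and sum(f_alloc) <= edge_capacity(F, pi, delta)
```

With F = 10 and delta = 0.2, for example, an allocation summing to 11 is infeasible without overclocking but feasible once π(s) = 1.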
Further, in time slot s, the overclocking working time t of the mobile edge computing server in step 5 satisfies t ≤ T_0, where t accounts for the processing delays t_n^r(s) of the tasks offloaded for edge execution and T_0 represents the maximum overclocking time allowed by the mobile edge computing server.
Further, the stability constraint of the task queue in step 6 is satisfied when

lim_{S→∞} (1/S) Σ_{s=1}^{S} Σ_{n=1}^{N} E[Q_n(s)] < ∞

where the left-hand side is the average queue backlog of the system, S → ∞ means letting the number of time slots S approach positive infinity, and E[Q_n(s)] denotes averaging the queue backlog of the nth user.
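In a simulation, this mean-rate-stability condition is checked over a finite horizon; the finite-horizon average below is only a proxy for the S → ∞ limit, and the function name is illustrative:

```python
def average_backlog(backlog_history):
    """Time-averaged total queue backlog over S simulated slots.

    backlog_history : list of per-slot lists [Q_1(s), ..., Q_N(s)]
    Returns (1/S) * sum_s sum_n Q_n(s), the finite-horizon analogue
    of the system's average queue backlog.
    """
    S = len(backlog_history)
    return sum(sum(slot) for slot in backlog_history) / S
```

In practice one plots this quantity against S; a bounded curve is the empirical signature of the stability constraint holding.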
Further, in time slot s, the user offloading benefit model X_n considered in step 7 can be expressed as:

X_n(s) = ρ_n(s) · log2[1 + D_n(s)]

where ρ_n(s) denotes the offloading revenue weight of the nth user. The computational overhead model for an offloading task executed locally can be expressed as the weighted sum of the local delay and the local energy consumption, where γ_n^t(s) and γ_n^e(s) are the delay weight and the energy-consumption weight of the task data D_n(s) (superscript t denotes the delay-loss index and superscript e the energy-loss index), and E_n^l(s) is the energy consumption of the task executed locally. The computational overhead model for an offloading task executed at the edge is expressed analogously, using the edge execution delay and the transmission energy consumption E_n^p(s) (superscript p denotes the transmission index). The average offloading utility model of the system is the long-run time average of the users' offloading benefits H_n(s), where H_n(s) is the offloading benefit of the nth user in time slot s.
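The benefit model is stated explicitly above; the weighted-sum form of the two overhead models is a reconstruction from the surrounding prose (the original equations were garbled), so the following Python sketch should be read under that assumption:

```python
import math

def offload_benefit(rho, D):
    """X_n(s) = rho_n(s) * log2(1 + D_n(s))."""
    return rho * math.log2(1.0 + D)

def local_overhead(gamma_t, gamma_e, t_local, e_local):
    """Assumed weighted sum of local delay and local energy consumption."""
    return gamma_t * t_local + gamma_e * e_local

def edge_overhead(gamma_t, gamma_e, t_tx, t_proc, e_tx):
    """Assumed weighted sum of (transmission + processing) delay and
    transmission energy for a task executed at the edge."""
    return gamma_t * (t_tx + t_proc) + gamma_e * e_tx
```

A user's per-slot benefit H_n(s) is then the offloading benefit minus whichever overhead applies under the decision x_n(s).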
Further, in time slot s, the problem of trading off system utility against stability set forth in step 8 can be expressed as optimizing a drift-plus-penalty objective, where V is the drift-penalty factor, subject to the constraints C1–C8:

- C1: the offloading decision of the system in slot s, x_n(s) ∈ {0,1}.
- C2: the overclocking decision of the system in slot s, π(s) ∈ {0,1}.
- C3: ensures that the computing resources assigned by the mobile edge computing server to each offloading task in slot s are positive.
- C4: in slot s, the total computing resources used to process the offloading tasks are limited by the maximum resources of the mobile edge computing server.
- C5: in slot s, when the mobile edge computing server starts the overclocking state, the server's working time cannot exceed T_0.
- C6: in slot s, the execution time of the task data D_n(s) must meet its quality of service.
- C7: ensures the stability of the system.
- C8: D_n(s) ≤ Q_n^max(s), where Q_n^max(s) is the user's maximum queue backlog; the task data volume offloaded by user device n in time slot s never exceeds the local queue backlog.

For the above symbols, the subscript n denotes the nth user, the index s denotes the s-th time slot, the superscript p denotes the transmission index, the superscript r denotes the edge-execution index, and the superscript l denotes the local-execution index; the set N_r represents the users who offload tasks for edge execution.
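The drift-plus-penalty objective that these constraints accompany can be sketched as below. Its exact form is not recoverable from the text, so the sign convention and the Q·D drift term are assumptions in the usual Lyapunov-optimization style; V trades utility against queue backlog as the description states:

```python
def drift_plus_penalty(V, utilities, Q, D):
    """Per-slot drift-plus-penalty value to be maximized (assumed form).

    V         : drift-penalty factor (larger V favors utility over stability)
    utilities : per-user offloading benefits H_n(s) in this slot
    Q, D      : per-user queue backlogs and offloaded data volumes
    Assumed form: V * sum_n H_n(s) + sum_n Q_n(s) * D_n(s), where the
    second term rewards serving heavily backlogged queues (drift reduction).
    """
    return V * sum(utilities) + sum(q * d for q, d in zip(Q, D))
```

Maximizing this quantity slot by slot is what reduces the multi-slot problem to one depending only on the current time slot, as the abstract claims.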
Further, the offloading decision x(s) and the overclocking decision π(s) are binary integers, whereas the resources f(s) allocated by the mobile edge computing server and the offloading data D(s) of the user equipment are continuous, where D(s) = {D_1(s), D_2(s), …, D_N(s)}. The optimization problem is therefore a nonlinear mixed-integer programming problem, which is NP-hard.
Further, the steps for solving the mathematical problem posed in step 8 are:

Initialization: all users in the user task set offload their tasks to the mobile edge computing server, each at the offloading rate that is optimal for local execution.

Step 8.1: under the two cases of the server working with and without overclocking, solve for the optimal offloading decision x(s) using the Lagrangian method and an iterative algorithm whose core idea is greedy selection.

Step 8.2: given the offloading decision obtained in step 8.1, obtain the mobile edge computing server's resource-allocation decision f(s) and the users' task offloading rate decision D(s) using the Lagrangian method together with a heuristic algorithm and a comparison-sorting algorithm.

Step 8.3: repeat step 8.1 and step 8.2 until the difference between two successive objective values is smaller than a preset minimum ε; this yields the final offloading decision x(s), resource-allocation decision f(s), and task offloading rate decision D(s).

Step 8.4: based on the computation offloading scheme obtained in step 8.3, solve for the overclocking decision π(s) of the intelligent overclocking mobile edge computing system by direct comparison, thereby obtaining the solution {x(s), f(s), D(s), π(s)} of the original problem.
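Steps 8.1–8.4 describe an alternating optimization. The skeleton below is schematic: the inner solvers are placeholders passed in as callables, since the patent's Lagrangian, greedy, and sorting details are not reproduced here:

```python
def solve_slot(solve_offload, solve_alloc, objective, eps=1e-3, max_iter=100):
    """Alternate between the offloading subproblem (step 8.1) and the
    resource/rate subproblem (step 8.2) until the objective changes by
    less than eps (step 8.3); the caller then compares the overclocked
    and non-overclocked solutions to fix pi(s) (step 8.4).

    solve_offload(f, D) -> x     : offloading decisions given allocation/rates
    solve_alloc(x)      -> (f, D): allocation and offload rates given decisions
    objective(x, f, D)  -> float : system utility to maximize
    """
    x = solve_offload(None, None)      # initialization: all users offload
    prev = float("-inf")
    for _ in range(max_iter):
        f, D = solve_alloc(x)          # step 8.2
        x = solve_offload(f, D)        # step 8.1
        cur = objective(x, f, D)
        if abs(cur - prev) < eps:      # step 8.3 convergence test
            break
        prev = cur
    return x, f, D
```

Plugging in trivial constant solvers shows the control flow converging in two passes; real subproblem solvers would replace the lambdas.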
The invention has the following technical effects:
the invention provides a novel mobile edge calculation model with an intelligent overclocking function, and the basic problem of optimizing the system utility is considered under the model. For complex and changeable application scenes, the intelligent overclocking mobile edge computing server realizes the flexible application of an overclocking function to minimize the energy consumption of the system.
The invention expands the working time of the system into a multi-time slot, establishes a dynamic task model, and researches the balance problem of the utility and the stability of the system in a dynamic network by introducing a drift-penalty item. The invention provides an advanced calculation unloading method, and the scheme jointly considers the unloading strategy, the data unloading strategy, the calculation resource allocation strategy and the overclocking strategy of a system and proves the superiority of the novel model and the calculation unloading method.
Drawings
FIG. 1: the intelligent overclocking mobile edge computing system model;
FIG. 2: a user task queue model;
FIG. 3: technical route of the offloading sub-problem;
FIG. 4: jointly optimizing a technical route map of the sub-problem of calculating resource allocation and unloading rate;
FIG. 5: comparing the calculation cost under different servers;
FIG. 6: the server is in the calculation income graph under the two states of overclocking and non-overclocking;
FIG. 7: the system calculates a relationship graph of the overhead and the number of users;
FIG. 8: the number of user devices in different time periods;
FIG. 9: the computational overhead of the system in different time periods;
FIG. 10: a trade-off relationship between system utility and queue backlog;
FIG. 11: under four unloading algorithms, averaging the relation between queue backlog and time slot;
FIG. 12: under four unloading algorithms, the relation between the average unloading utility of the system and the time slot;
FIG. 13: the relation between the system's average offloading utility and the time slot under the iterative offloading algorithm and the all-offloading algorithm.
FIG. 14: method flow chart
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiments of the present invention are described below with reference to fig. 1-14, in which:
step 1: aiming at a multi-user scene under a mobile edge computing system with intelligent super-frequency capability, a loss model L (t) of an intelligent super-frequency mobile edge computing server and a dynamic task queue set { I) of a user are constructednWhere t denotes the time the server runs, the index n denotes the nth user,N=50;
the intelligent overclocking mobile edge computing system scene in the step 1 is composed of a mobile edge computing server with an intelligent overclocking function and a mobile edge computing server with an intelligent overclocking functionConsisting of individual user equipment, in particular, the offloading system being in discrete time slotsS100, each time slot S lasting Tcyc(TcycIs a scalar quantity, representing a fixed duration, Tcyc2 s). The loss function l (t) generated in the over-frequency state is given by:
where α > 0 is a fixed value, set to α ═ 0.3, and represents the rate of increase of the loss function l (T) over time T, TcycIs the period of the loss function.
Here Q_n(s) is the length of the nth user's queue backlog in time slot s; D_n(s) is the size of the task data to be processed in the current time slot; C_n(s) is the number of CPU cycles required to process the task data D_n(s), expressed as C_n(s) = μ_n(s)·D_n(s), where μ_n(s) is the complexity coefficient of the offloading task in the current time slot, set to μ_n(s) = 1; and t_n^max(s) represents the maximum execution time of the current offloading task. The sets A(s) = {a_1(s), a_2(s), …, a_n(s), …, a_N(s)} and Q(s) = {Q_1(s), Q_2(s), …, Q_n(s), …, Q_N(s)} respectively represent the users' newly received offloading tasks and the users' queue backlogs in the current time slot, where a_n(s) denotes the size of the new offloading task received by the nth user in time slot s, set to a_n(s) = 5 Mbit/s, and Q_n(s) denotes the nth user's queue backlog in time slot s.
Step 2: the intelligent overclocking server in step 1 must make an overclocking decision, for which an overclocking decision variable π is set; the users in step 1 must make offloading decisions, for which an offloading decision variable x is set;
the over-clocking decision model for the server described in step 2 can be defined as pi(s) ∈ {0,1} at time slot s, with the index s denoting the s-th time slot,where pi(s) ═ 0 indicates that the mobile edge compute server has not initiated the turbo state, and pi(s) ═ 1 indicates that the mobile edge compute server has initiated the turbo state. The unloading decision model described in step 2 can be defined asThe index n indicates the nth user and,the reference sign s denotes the s-th time slot,wherein xn(s) is e {0,1 }. When x isnWhen(s) ═ 0, the offload tasks will be processed locally. When x isnWhen(s) — 1, the offload task will be offloaded to the mobile edge computing server for processing.
Step 3: when the offloading decision of step 2 is local execution, considering that the offloading rate must respect the limits of local computation, the task offloading rate variable D_n^l(s) is required to satisfy

0 ≤ D_n^l(s) ≤ Q_n^max(s)

where the subscript n denotes the nth user, the superscript l denotes the local-execution index, and Q_n^max(s) denotes the maximum queue backlog of the nth user in time slot s. The task processing time must also meet the quality-of-service requirement

t_n^l ≤ t_n^max

where t_n^l is the local processing latency of the offloading task and t_n^max is the maximum delay allowed for the offloading task;
in time slot s, when the unloading decision of the user task in step 3 is local execution, unloading variable xnWhere(s) is 0, the user maximum limit on the offload rate may be described specifically as:
indicating the maximum queue backlog under time slot s for the nth user, the index n indicating the nth user,the reference sign s denotes the s-th time slot,the limitation on the unloading rate is mainly the requirement of the execution time of the taskThe service quality is satisfied:
whereinIndicating the latency of the task executing locally, the index n indicates the nth user,the superscript l denotes the task local execution index, the index s denotes the s-th slot, representing the size of the local computing resource of the nth user,is [0.5,0.7 ]](Gigacycle), the superscript l denotes the task-local execution index, the subscript n denotes the nth user,
and 4, step 4: when the unloading decision in the step 2 is edge execution, considering the service quality limit of the unloading task uploaded to the edge server by the user for execution and the computing resource limit of the edge server, and providing the execution time t of the unloading taskrSatisfy the requirement of
Wherein t isrIs the total latency required to unload a task decision to an edge execution, and the subscript r denotes the task edge execution index.Is the quality of service requirement of the offload task. And computing resources allocated to the user by the edge serverSatisfy the requirement of
WhereinIndicating the computing resources allocated to the offloading task for the nth user, with the index n indicating the nth user,the superscript r denotes the task edge execution index.Representing a collection of users who offload tasks to a mobile edge computing server, FrIs the maximum computing resource that the server can allocate, FrThe subscript r denotes the task edge execution index, 10 (GHz). And also user task offload rate variablesSatisfy the requirement of
D_n^r(s) ≤ Q_n^max(s)

For the offloading rate variable D_n^r(s), the subscript n denotes the nth user and the superscript r denotes the task edge execution label; Q_n^max(s) represents the maximum queue backlog of the nth user in time slot s.
In time slot s, when the offloading decision of the user task in step 4 is edge execution, the offloading variable x_n(s) = 1, and the execution time of the offloaded task satisfies

t_r = t_n^p(s) + t_n^r(s) ≤ t_r^max

where t_n^p(s) and t_n^r(s) are, respectively, the transmission delay and the processing delay of the offloaded task executed on the mobile edge computing server (subscript n: the nth user; superscripts p and r: transmission and edge execution; s: the sth time slot). The computing resources f_n^r(s) allocated to the users by the edge server satisfy:

Σ_{n∈N_r} f_n^r(s) ≤ F when π(s) = 0, and Σ_{n∈N_r} f_n^r(s) ≤ κF when π(s) = 1

where the set N_r represents the users who offload tasks to mobile edge execution, κ > 1 is a constant, and F is the maximum computing resource of the mobile edge computing server: when the server does not start the overclocking state its maximum computing resource is F, and when it starts the overclocking state its maximum computing resource is κF.
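The step-4 resource constraint can likewise be sketched as a feasibility check. The function name, the value of the overclocking capacity multiple kappa, and the sample numbers below are illustrative assumptions, not values from the original.

```python
# Hypothetical sketch of the step-4 edge-side resource constraints.
def edge_allocation_feasible(f_alloc, F, pi, kappa=1.2):
    """f_alloc: list of computing resources f_n^r(s) allocated to offloading users;
    F: nominal maximum server resource; pi: overclocking decision pi(s) in {0, 1};
    kappa: assumed capacity multiple when overclocking is enabled."""
    capacity = kappa * F if pi == 1 else F     # overclocking raises the cap to kappa*F
    positive = all(f > 0 for f in f_alloc)     # each allocation must be positive (C3)
    return positive and sum(f_alloc) <= capacity

# A 10 GHz server asked for 4 + 4 + 3 = 11 GHz: infeasible without overclocking
print(edge_allocation_feasible([4e9, 4e9, 3e9], 10e9, pi=0))  # False
print(edge_allocation_feasible([4e9, 4e9, 3e9], 10e9, pi=1))  # True (cap 12 GHz)
```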
Step 5: when the mobile edge computing server of step 4 processes the offloaded tasks, the overclocking time limit of the server is considered, and the overclocking working time t of the mobile edge computing server must satisfy

t ≤ T_0

where T_0 is the maximum allowed overclocking duration of the server, T_0 = 1 s;
In time slot s, the overclocking working time t of the mobile edge computing server in step 5 satisfies:

Σ_{n∈N_r} t_n^r(s) ≤ T_0

where t_n^r(s) (subscript n: the nth user; superscript r: edge execution; s: the sth time slot) is the edge processing delay of the nth user's task, and T_0 represents the maximum overclocking time allowed by the mobile edge computing server.
Step 6: when the system processes the user offloading tasks of step 3 and step 4, the stability limit of the system is considered, and the average queue backlog Q̄ of the system should remain finite.

In time slot s, the stability constraint of the task queue in step 6 is:

Q̄ = lim_{S→∞} (1/S) Σ_{s=1}^{S} (1/N) Σ_{n=1}^{N} E[Q_n(s)] < ∞

where Q̄ represents the average queue backlog of the system, S → ∞ means letting the number of time slots S approach positive infinity, n denotes the nth user, and the inner sum averages the queue backlog over the users.
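The stability constraint above can be illustrated with the standard queue update Q_n(s+1) = max(Q_n(s) − D_n(s), 0) + a_n(s): when the mean service rate exceeds the mean arrival rate, the time-averaged backlog stays bounded. The uniform arrival model and all parameter values below are assumptions for illustration.

```python
import random

# Illustrative simulation of a single user's task queue and its average backlog.
def average_backlog(S, arrival, service, seed=0):
    """Simulate Q(s+1) = max(Q(s) - D(s), 0) + a(s) for S slots and return
    the time-averaged backlog (the quantity bounded by the step-6 constraint).
    arrival: upper bound of the uniform per-slot arrival a(s) (mean arrival/2);
    service: constant amount of task data processed per slot."""
    random.seed(seed)
    Q, total = 0.0, 0.0
    for _ in range(S):
        a = random.uniform(0, arrival)   # new task data a_n(s) arriving this slot
        Q = max(Q - service, 0.0) + a    # queue backlog update
        total += Q
    return total / S

# Mean arrival 1.0 < service 1.5: the queue is stable and the average stays small.
print(average_backlog(10000, arrival=2.0, service=1.5))
```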
Step 7: based on the above steps, the benefit the system obtains from completing all offloaded tasks, together with the total time cost and energy cost, is taken as the main evaluation index of the system, and a user offloading benefit model X_n (subscript n: the nth user), a computational overhead model φ_n^l for tasks executed locally (superscript l: local execution), a computational overhead model φ_n^r for tasks executed at the edge (superscript r: edge execution), and an average offloading utility model of the system are constructed.

In time slot s, the user offloading benefit model X_n considered in step 7 can be expressed as:
X_n(s) = ρ_n(s) log2[1 + D_n(s)]

where ρ_n(s) represents the offloading gain weight of the nth user, ρ_n(s) = 2.5 (subscript n: the nth user; s: the sth time slot). The computational overhead model for a task executed locally can be expressed as:

φ_n^l(s) = γ_n^t t_n^l(s) + γ_n^e e_n^l(s)

where γ_n^t and γ_n^e are the delay weight and the energy-consumption weight of the task data D_n(s) (superscript t: delay loss; superscript e: energy loss), and e_n^l(s) is the energy consumption of executing the task locally (superscript l: local execution). The computational overhead model for a task executed at the edge can be expressed as:
φ_n^r(s) = γ_n^t [t_n^p(s) + t_n^r(s)] + γ_n^e e_n^p(s)

For φ_n^r(s), the subscript n denotes the nth user, the superscript r denotes the task edge execution label, and s denotes the sth time slot; e_n^p(s) denotes the transmission energy consumption of the offloaded task (superscript p: the transmission label). The average utility model of the system H̄ is expressed as:

H̄ = lim_{S→∞} (1/S) Σ_{s=1}^{S} Σ_{n=1}^{N} H_n(s)

where H_n(s) is the offloading benefit of the user (subscript n: the nth user; s: the sth time slot).
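The benefit and overhead models of step 7 can be evaluated numerically as a concrete illustration. Only X_n(s) = ρ_n(s) log2[1 + D_n(s)] with ρ_n(s) = 2.5 is taken directly from the text; the weight and sample values below are illustrative assumptions.

```python
import math

# Illustrative evaluation of the step-7 benefit/overhead/utility models.
def offload_benefit(D, rho=2.5):
    """User offloading benefit X_n(s) = rho_n(s) * log2(1 + D_n(s))."""
    return rho * math.log2(1 + D)

def overhead(t, e, w_t, w_e):
    """Weighted computational overhead: delay t and energy e combined with
    an assumed delay weight w_t and energy weight w_e."""
    return w_t * t + w_e * e

def offload_utility(D, t, e, w_t=0.5, w_e=0.5):
    """H_n(s): offloading benefit minus weighted overhead for one task."""
    return offload_benefit(D) - overhead(t, e, w_t, w_e)

print(offload_benefit(7))  # 2.5 * log2(8) = 7.5
```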
Step 8: based on the cost models provided in step 6 and step 7, the problem of balancing system utility and stability is formulated; it mainly involves jointly optimizing the offloading decision, the computing-resource allocation, the offloading-rate decision, and the overclocking decision, and a corresponding computation offloading method is proposed to solve it.

At time slot s, the utility-stability trade-off problem formulated in step 8 can be expressed as:
C2:π(s)∈{0,1}
where V is the drift-penalty factor. C1 represents the offloading decision of the system in time slot s, and C2 represents the overclocking decision of the system in time slot s. Condition C3 ensures that the computing resources assigned by the mobile edge computing server to each offloaded task are positive. C4 states that the total computing resources used to process the offloaded tasks are limited by the maximum resources of the mobile edge computing server. C5 states that when the server starts the overclocking state, its working time cannot exceed T_0. C6 states that the execution time of the task data D_n(s) must meet its quality of service. C7 ensures the stability of the system. In C8, Q_n^max(s) represents the maximum queue backlog of the user and ensures that the amount of task data offloaded by user equipment n in time slot s does not exceed the local queue backlog. For the above symbols, the subscript n denotes the nth user, s denotes the sth time slot, the superscript p denotes the transmission label, r denotes the task edge execution label, l denotes the task local execution label, and the set N_r represents the users who offload tasks to mobile edge execution.
It can be seen that the offloading decision x(s) and the overclocking decision π(s) are binary integers, while the resources f(s) allocated by the mobile edge computing server and the offloading data D(s) of the user equipment are continuous. The resulting optimization problem is therefore a nonlinear mixed-integer programming problem, which is NP-hard.
The solution steps for the mathematical problem proposed in step 8 are as follows:

Initialization: all users in the task set offload their tasks to the mobile edge computing server at the optimal offloading rate of the task when executed locally.

Step 8.1: under the two cases of server overclocking and non-overclocking, solve the optimal offloading decision x(s) using the Lagrangian algorithm together with an iterative algorithm whose core idea is the greedy algorithm.

Step 8.2: with the offloading decision obtained in step 8.1, obtain the resource allocation decision f(s) of the mobile edge computing server and the offloading-rate decision D(s) of the user offloading tasks, using the Lagrangian algorithm, a heuristic algorithm, and a comparison-sorting algorithm.

Step 8.3: repeat step 8.1 and step 8.2 until the difference between two successive objective-function values is less than a small threshold; the final offloading decision x(s), resource allocation decision f(s), and task offloading-rate decision D(s) are then obtained.

Step 8.4: using the computation offloading method obtained in step 8.3, solve the overclocking decision π(s) of the intelligent overclocking mobile edge computing system with a comparison algorithm, thereby obtaining the solution {x(s), f(s), D(s), π(s)} of the original problem.
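The alternating structure of steps 8.1-8.3 can be sketched as follows. This is a minimal illustration of the iteration skeleton only: the inner update functions are stand-ins for the Lagrangian/greedy and heuristic solvers, whose details are not reproduced here.

```python
# Hypothetical sketch of the step 8.1-8.3 alternating optimization loop.
def solve_offloading(objective, update_x, update_fD, x0, fD0, eps=1e-3, max_iter=100):
    """Alternate between the offload-decision subproblem (update_x) and the joint
    resource-allocation / offload-rate subproblem (update_fD) until the objective
    changes by less than eps (the step-8.3 convergence test)."""
    x, fD = x0, fD0
    prev = objective(x, fD)
    for _ in range(max_iter):
        x = update_x(fD)           # step 8.1: offload decision given resources
        fD = update_fD(x)          # step 8.2: resources + offload rate given x
        cur = objective(x, fD)
        if abs(cur - prev) < eps:  # step 8.3: stop when objective stabilizes
            break
        prev = cur
    return x, fD, objective(x, fD)

# Toy stand-in solvers: always offload (x=1) and allocate a fixed share.
x, fD, val = solve_offloading(lambda x, f: x + f, lambda f: 1, lambda x: 0.8, 0, 0.2)
print(x, fD, val)  # 1 0.8 1.8
```

Step 8.4 would then run this loop once per overclocking case and keep whichever π(s) yields the better objective.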
FIG. 1 illustrates the service model of dynamic task offloading in the intelligent overclocking mobile edge computing system. As shown in FIG. 1, there is one mobile edge computing server with the intelligent overclocking function and N users. Each user is assumed to hold a certain amount of local data (e.g., images, video) to be processed by edge intelligence services (e.g., image recognition programs, large interactive games), and this process takes place in a dynamic, multi-slot scenario. FIG. 2 shows the task queue model of each user: in each time slot, the user continuously receives new data while sending requests to the mobile edge computing system to process part of the data in its queue.
To prevent an excessive user queue backlog from damaging the stability of the whole mobile edge computing network, the queue backlog of each user must be kept bounded while deciding how to offload the user tasks and allocate the various computing resources so as to maximize the utility of the mobile edge computing network. For this embodiment, the problem is addressed as follows.

Because of the discreteness of the offloading decision, the continuity of the task offloading rate and of the mobile edge computing server resources, and the strong coupling between them, the problem is difficult to solve directly. The invention therefore adopts the technical scheme of decomposing the original problem into several sub-problems, described below.
(1) Offloading decision sub-problem: when the computing-resource vector f and the overclocking decision π have been determined, the offloading decision sub-problem can be written as:

To solve this sub-problem, the invention provides an iterative solution scheme based on the greedy-algorithm idea. The technical route is shown in FIG. 3.
(2) Joint resource-allocation and data-offloading-rate sub-problem: assuming the offloading decision x(s) and the overclocking decision π(s) are known, the problem can be written as:

The technical route for solving this sub-problem is shown in FIG. 4, and the results are as follows.
1. Data offload rate solution:
where t_n^r(s) is the execution time of the offloaded task of user equipment n; one resource value is the computing resource allocated to user equipment n when the server processes the task in the overclocking state, and the other is the computing resource allocated when the server is not overclocking. The time-weight sequence is updated into a new ordering, from which the execution time of the offloaded task of user equipment n is obtained.
2. The resource allocation scheme comprises the following steps:
(3) Overclocking decision sub-problem: in time slot s, judge whether the following inequality holds:

max_{x, f, D; π(s)=0} I(s) < max_{x, f, D; π(s)=1} I(s)    (6)

If inequality (6) holds, the mobile edge computing server starts the overclocking state; otherwise it does not.
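The comparison behind inequality (6) amounts to a one-line decision rule over the two best drift-penalty values; the numeric values below are illustrative.

```python
# Minimal sketch of the third sub-problem: compare the best drift-penalty
# value without overclocking against the best value with overclocking.
def overclock_decision(best_I_no_oc, best_I_oc):
    """Return pi(s) = 1 if the maximum of I(s) with overclocking exceeds the
    maximum without it, per inequality (6); otherwise pi(s) = 0."""
    return 1 if best_I_no_oc < best_I_oc else 0

print(overclock_decision(3.2, 4.1))  # 1: overclocking is worth enabling
print(overclock_decision(4.0, 3.5))  # 0: overclocking does not pay off
```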
Further, to obtain the final computation offloading method of this embodiment, the results of the three sub-problems must be processed iteratively, with the following specific steps:

1: offload the tasks of all users to the mobile edge computing server for execution, and allocate computing resources according to the solution technique of the second sub-problem.

2: calculate the value of the drift-penalty term and record it as I_0(s).

3: determine the offloading decision in the current time slot according to the solution technique of the first sub-problem.

4: update the computing-resource allocation strategy and the user task offloading rate according to the new offloading decision and equations (4) and (5).

5: calculate the value of the drift-penalty term and record it as I_1(s).

6: solve the overclocking decision according to the technical scheme of the third sub-problem, giving the final computation offloading method, i.e., the computation offloading method of this embodiment.
In this embodiment, all data processing and algorithms are implemented in Matlab 2018b, and the experimental environment is a host with 64-bit Windows 10, a 3.00 GHz Intel Core i5 processor, and 8 GB 2400 MHz DDR4 memory.
To demonstrate the superiority of the intelligent overclocking model and the dynamic computation offloading method of this embodiment, comparison with the following three reference algorithms is considered:
(1) Random offloading decision: after the task offloading rate is obtained as in step 8.1, the offloading decision x of each user is generated as a {0, 1} random number.

(2) Full offloading decision: after the task offloading rate is obtained as in step 8.1, the task of each user is offloaded to the mobile edge computing server for execution.

(3) Local execution: after the task offloading rate is obtained as in step 8.1, the tasks of each user are executed locally.
In this embodiment, the distances between the users and the base station are all 150 meters; the specific scenario simulates the variation of the number of users at a bus stop in a central urban area over 24 hours; the bandwidth uses an equal-allocation scheme; and the remaining parameters are set as in Table 1.
Table 1 simulation parameter settings
1. Intelligent overclocking system calculation overhead performance result analysis
This experiment compares the computation offloading performance of an ordinary server, an intelligent overclocking server, and an unlimited-overclocking server in the mobile edge computing system. When the number of user equipments is relatively small, FIG. 5 and FIG. 6 show that it is not worthwhile for the mobile edge computing server to start the overclocking state; in this experiment, overclocking does not pay off when the number of user equipments is between 3 and 10. Specifically, the negative gain caused by overclocking first increases and then decreases: when the number of users is very small the running time of the server is very short, and since the server's overclocking loss function is linear in time t while its overclocking gain function is a higher-order concave-like function of t, the difference between the two must first grow and then shrink, and is eventually reversed.

As the number of user equipments increases, the advantage of overclocking gradually appears and keeps growing, although the growth of the overclocking gain slows down because of the overclocking time limit. Comparing the red and green curves in FIG. 5, they overlap when the number of users is between 10 and 27: when the number of tasks is small, the mobile edge computing server can finish processing all tasks within the overclocking time limit. When the number of user equipments grows to 27, the red curve rises slightly above the green curve: once the server overclocking time of the intelligent mobile edge computing system reaches T_0, the server exits the overclocking state, so the computation overhead of the system increases slightly but still remains smaller than that of the ordinary server. In FIG. 6, when the number of user equipments exceeds 30, the overclocking gain sometimes does not increase, because the offloaded tasks form a discrete set of data: when newly added offloading data does not meet the offloading condition, the overclocking gain is unaffected.
2. Performance comparison result analysis of four calculation unloading methods
In the second experiment, the number of user equipments was set from 3 to 50, and the computation overhead of the system under different offloading decisions and different mobile edge computing servers was compared.

As FIG. 7 shows, when all tasks are executed locally the computation overhead is relatively large and grows linearly with the number of users. This is because under the local offloading decision the computing-resource allocation decision and the system overclocking decision play no role, and the computation overhead depends only on the computing capability of the user equipment and the computational complexity C_n of the offloaded tasks. In this experiment, the user-equipment computing resources and the number of CPU cycles required by the offloaded tasks are random values within a small range, so the overhead necessarily increases in an approximately proportional trend as the number of user equipments increases.

If all tasks are executed on the mobile edge computing server, the computation overhead varies with the number of user devices as shown by the black line in FIG. 7. With few user equipments, the resources of the mobile edge computing server and the bandwidth resources of the network are ample, the delay and energy consumption of processing the offloaded tasks are small, and the computation overhead of the system is low. As the number of user equipments increases, however, the network bandwidth and the computing resources allocated to each offloaded task shrink, and the computation-overhead indicators of the offloaded tasks rise sharply, which is why the black line shows the system's computation overhead growing exponentially.

When random offloading decisions are employed, the computation overhead of the system lies between the local-offloading overhead and the full-offloading overhead, for obvious reasons.

For the JOOC algorithm, the computation overhead of the system remains acceptable even when the number of user devices is large. When the number of users is relatively small, the system may decide to offload all tasks to the mobile edge computing server for execution; even so, the server with overclocking capability will not start the overclocking state, because the loss of starting it would exceed the gain. As the number of user equipments increases, the advantage of the overclocking-capable mobile edge computing server gradually appears; in this experiment, the overclocking state is started once the number of users reaches 14, bringing computational benefit.
3. Real-scenario performance analysis of the intelligent overclocking system
This experiment simulates the fluctuation of the number of users in certain real-life scenarios (e.g., canteens, stations) across different time periods, and compares the computation overhead of the four computation offloading algorithms and the performance of the two different mobile edge computing systems.

As shown in FIG. 8, the daily commuting peaks are 8:00 to 9:00, 12:00 to 13:00, and 18:00 to 19:00, when the number of users is large; late at night and in the early morning the number of users is small. FIG. 9 shows that the intelligent overclocking server has the best performance in every time period. From 23:00 at night to 5:00 in the morning, the number of user devices is very small and the server chooses not to start the overclocking state; in the daytime, as the number of users gradually increases, the server starts the overclocking state. It is also clear that the more users there are, the greater the profit obtained by the intelligent overclocking mobile edge computing system.
4. System utility and queue backlog trade-off analysis
This experiment studies the trade-off between system utility and queue backlog under the Lyapunov optimization framework, analyzing the respective influence of the penalty coefficient V on system utility and queue backlog by adjusting its value (from V = 100 to V = 1500 in the experiment).

As FIG. 10 shows, as the penalty coefficient V increases, the negative system utility decreases following an inverse-proportional trend, which shows that the desired system utility can be reached by adjusting the value of V. FIG. 10 further shows a linear growth trend between the queue backlog and V. Combining the two, the system utility and the queue backlog form a trade-off with respect to the penalty coefficient V. Therefore, when V is used to regulate the system utility, its current value must be chosen so as to avoid a high queue backlog that would seriously degrade the users' quality of service.
5. Influence of different computation offloading methods on the system
This experiment compares the influence on system performance of the three baseline offloading methods and the offloading method proposed here, within the intelligent overclocking mobile edge computing system.

FIG. 11 shows that the average queue backlog of the user equipment is always zero under the full offloading algorithm. Under the local offloading algorithm, because of the limited computing resources of the user equipment, the amount of task data it can process each time cannot be too large while still meeting the quality of service of the offloaded tasks, so the queue backlog of the user equipment keeps increasing, with a linear trend. Under the random offloading method, the queue pressure of the user equipment follows no particular rule, sometimes very large and sometimes very small; this processing method seriously affects the user experience and can even cause task execution to fail. Under the computation offloading mode of the iterative algorithm, the queue backlog of the user equipment shows a logarithmic rising trend and finally stabilizes, and the final queue backlog is acceptable to the system.
Fig. 12 and 13 show the offloading utility versus time slot produced by the four offloading algorithms. Under random offloading, the offloading utility initially shows a large downward trend, and the fluctuation amplitude later weakens. This is because system consumption grows exponentially with the task data: when the offloaded task data in a time slot is too large, the consumption of the system rises rapidly, which is why the system utility fluctuates dramatically at first under the random offloading method. Even so, because randomly offloaded task data fluctuates within a range, the average offloading utility of the system fluctuates less and less (without fully stabilizing) as the running time of the system grows, and it is very low compared with the other three cases.

As FIG. 13 shows, the gain of the iterative algorithm is highest at the start of the time slots, because initially the queue backlog of the user equipment is small and the iterative algorithm is biased toward optimizing system utility. As the time slots increase, the system utility decreases slowly, even falling well below the utility of local execution, because the algorithm sacrifices part of the offloading utility to stabilize the system queue backlog. Although the locally executed system has high utility, FIG. 9 shows that its average user-equipment queue backlog is very large, so that system is unstable. Although the average utility of the system under the iterative algorithm may keep decreasing, it never falls below that of the full offloading algorithm. This is because, under the idea of this algorithm, once the system reaches final stability, the amount of task data offloaded in each time slot equals the amount of task data newly added by the user equipment, which is exactly "full offloading" in the current time slot, so the offloading utility in the current slot equals the full-offloading utility. Since the iterative algorithm gained more utility in the earlier time slots, its average system utility is always greater than that of the full offloading algorithm.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not by way of limitation, and that a person of ordinary skill in the art, in light of the teachings of the present invention, may make alterations and modifications without departing from the scope of the invention as defined in the appended claims.
Claims (9)
1. An intelligent dynamic task computing unloading method based on an edge computing system is characterized in that,
step 1: for a multi-user scenario under a mobile edge computing system with intelligent overclocking capability, construct a loss model L(t) of the intelligent overclocking mobile edge computing server and a dynamic task queue set {I_n} of the users, where t denotes the running time of the server and the subscript n denotes the nth user;
step 2: the intelligent overclocking server of step 1 must make an overclocking decision, for which an overclocking decision variable π is set, and the users of step 1 must make offloading decisions, for which an offloading decision variable x is set;
step 3: when the offloading decision from step 2 is local execution, the offloading rate must meet the local-computation limit, and the task offloading rate variable D_n^l(s) must satisfy

D_n^l(s) ≤ Q_n^max(s)

where, for the offloading rate variable D_n^l(s), the subscript n denotes the nth user and the superscript l denotes the task local execution label; Q_n^max(s) denotes the maximum queue backlog of the nth user in time slot s; and the task processing time must meet the quality-of-service requirement:

t_n^l(s) ≤ t_n^max(s)

where t_n^l(s) is the local processing latency of the offloaded task and t_n^max(s) denotes the maximum delay allowed for offloading the task;
step 4: when the offloading decision from step 2 is edge execution, considering the quality-of-service limit of the offloaded task uploaded by the user to the edge server and the computing-resource limit of the edge server, the execution time t_r of the offloaded task must satisfy

t_r ≤ t_r^max

where t_r is the total time delay required when the task is offloaded to edge execution, the subscript r denotes the task edge execution label, and t_r^max is the quality-of-service requirement of the offloaded task; the computing resource f_n^r(s) allocated to the user by the edge server must satisfy

Σ_{n∈N_r} f_n^r(s) ≤ F_r

where f_n^r(s) denotes the computing resource allocated to the offloaded task of the nth user (superscript r: the task edge execution label), N_r represents the set of users who offload tasks to the mobile edge computing server, and F_r is the maximum computing resource the server can allocate; and the user task offloading rate variable D_n^r(s) must satisfy

D_n^r(s) ≤ Q_n^max(s)

where Q_n^max(s) represents the maximum queue backlog of the nth user in time slot s;
step 5: when the mobile edge computing server of step 4 processes the offloaded tasks, considering the overclocking time limit of the server, the overclocking working time t of the mobile edge computing server must satisfy

t ≤ T_0

where T_0 is the maximum allowed overclocking duration of the server;
step 6: when the system processes the user offloading tasks of step 3 and step 4, considering the stability limit of the system, the average queue backlog Q̄ of the system should remain finite;
step 7: based on the above steps, taking the benefit the system obtains from completing all offloaded tasks, together with the total time cost and energy cost, as the main evaluation index of the system, construct a user offloading benefit model X_n (subscript n: the nth user), a computational overhead model φ_n^l for tasks executed locally (superscript l: the task local execution label), a computational overhead model φ_n^r for tasks executed at the edge (superscript r: the task edge execution label), and an average offloading utility model of the system;
step 8: based on the cost models provided in step 6 and step 7, formulate the problem of balancing system utility and stability, which mainly involves jointly optimizing the offloading decision, the computing-resource allocation, the offloading-rate decision, and the overclocking decision, and propose a corresponding computation offloading method to solve it.
2. The intelligent dynamic task computing offload method based on edge computing system of claim 1,
the intelligent overclocking mobile edge computing system scenario in step 1 consists of a mobile edge computing server with an intelligent overclocking function and N user devices; specifically, the offloading system operates in discrete time slots, each time slot s having duration Tcyc (Tcyc is a scalar representing a fixed duration); the loss function L(t) generated in the overclocking state is given by:
where α > 0 is a fixed value representing the growth rate of the loss function L(t) over time t, and Tcyc is the period of the loss function;
wherein Qn(s) (the subscript n denotes the nth user, the index s denotes the sth time slot) is the length of the queue backlog; Dn(s) is the size of the task data to be processed in the current time slot; Cn(s) is the number of CPU cycles required to process the task data Dn(s), expressed as Cn(s) = μn(s)Dn(s), where μn(s) is the complexity coefficient of the offloading task in the current time slot; the maximum execution time of the current offloading task is also specified; the sets A(s) = {a1(s), a2(s), ..., an(s), ..., aN(s)} (where an(s) denotes the size of the new offloading task received by the nth user in time slot s) and Q(s) = {Q1(s), Q2(s), ..., Qn(s), ..., QN(s)} (where Qn(s) denotes the queue backlog of the nth user in time slot s) represent, respectively, the set of new offloading tasks and the set of queue backlogs of the users in the current time slot.
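The per-slot quantities in this claim can be sketched in a few lines. Only the relation Cn(s) = μn(s)Dn(s) is stated in the claim; the queue-update rule below is a standard queueing assumption added for illustration, not part of the patent:

```python
# Illustrative sketch of the slot model in claim 2. The claim defines
# Q_n(s), D_n(s), C_n(s) = mu_n(s) * D_n(s), A(s) and Q(s); the backlog
# update rule below is an assumed standard queue dynamic, not the patent's.

def cpu_cycles(mu_ns: float, d_ns: float) -> float:
    """C_n(s) = mu_n(s) * D_n(s): CPU cycles needed for task data D_n(s)."""
    return mu_ns * d_ns

def queue_update(q_ns: float, d_ns: float, a_ns: float) -> float:
    """Assumed backlog dynamics: remove the processed data D_n(s),
    then admit the newly arrived task a_n(s)."""
    return max(q_ns - d_ns, 0.0) + a_ns
```

All numeric values passed to these functions are hypothetical.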
3. The intelligent dynamic task computing offload method based on edge computing system of claim 1,
the overclocking decision model of the server described in step 2 can be defined, at time slot s, as pi(s) ∈ {0, 1} (the index s denotes the sth time slot), where pi(s) = 0 indicates that the mobile edge computing server does not enter the overclocking state and pi(s) = 1 indicates that it does; the offloading decision model described in step 2 can be defined through the variables xn(s) ∈ {0, 1} (the subscript n denotes the nth user, the index s denotes the sth time slot); when xn(s) = 0, the offloading task is processed locally; when xn(s) = 1, the offloading task is offloaded to the mobile edge computing server for processing.
4. The intelligent dynamic task computing offload method based on edge computing system of claim 1,
in time slot s, when the offloading decision of the user task in step 3 is local execution, i.e. the offloading variable xn(s) = 0, the maximum limit on the user's offloading rate can be specifically described as:
where the bound denotes the maximum queue backlog of the nth user in time slot s (the subscript n denotes the nth user, the index s denotes the sth time slot); the restriction on the offloading rate mainly requires that the execution time of the task satisfy its quality of service:
where the former term denotes the delay of the task executing locally (the subscript n denotes the nth user, the superscript l denotes the local execution index, the index s denotes the sth time slot) and the latter denotes the size of the local computing resource of the nth user (the superscript l denotes the local execution index, the subscript n denotes the nth user).
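A minimal sketch of the local-execution constraint above, assuming the standard mobile-edge-computing delay form t = Cn(s)/f = μ·D/f (the claim's exact formulas were lost in extraction, so these function names and forms are illustrative only):

```python
# Hedged sketch of claim 4's local-execution bound. The delay form and
# the deadline/backlog cap are standard MEC assumptions, not quotes
# from the patent.

def local_delay(d_ns: float, mu_ns: float, f_local: float) -> float:
    """Assumed local execution delay: CPU cycles mu*D divided by local CPU speed f."""
    return mu_ns * d_ns / f_local

def max_local_data(mu_ns: float, f_local: float, t_max: float, q_max: float) -> float:
    """Largest D_n(s) that still meets the deadline t <= t_max,
    additionally capped by the maximum queue backlog q_max."""
    return min(f_local * t_max / mu_ns, q_max)
```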
5. The intelligent dynamic task computing offload method based on edge computing system of claim 1,
in time slot s, when the offloading decision of the user task in step 4 is edge execution, i.e. the offloading variable xn(s) = 1, the execution time of the offloading task satisfies
where the two terms denote, respectively, the transmission delay and the processing delay of the offloading task executed on the mobile edge computing server (the subscript n denotes the nth user, the superscripts p and r denote the transmission index and the edge execution index, the index s denotes the sth time slot); the computing resource allocated to the user by the edge server (the subscript n denotes the nth user, the superscript r denotes the edge execution index, the index s denotes the sth time slot) satisfies:
where the set represents the set of users who offload their tasks to the mobile edge for computational execution, a constant scaling factor is defined, and F is the maximum computing resource of the mobile edge computing server; when the mobile edge computing server does not enter the overclocking state, its maximum computing resource is F; when it enters the overclocking state, its maximum computing resource is increased by that factor.
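The edge-execution timing and the overclocked capacity described in this claim can be sketched as follows; the delay decomposition (transmission D/rate plus processing μ·D/f) and the boost factor are assumptions, since the patent only states that the overclocked capacity exceeds F:

```python
# Illustrative sketch of claim 5. The delay decomposition and the
# overclocking boost factor are hypothetical stand-ins for the claim's
# missing formulas.

def edge_delay(d_ns: float, mu_ns: float, rate: float, f_edge: float) -> float:
    """Transmission delay t^p = D/rate plus processing delay t^r = mu*D/f_edge."""
    return d_ns / rate + mu_ns * d_ns / f_edge

def server_capacity(F: float, overclocked: bool, boost: float = 1.2) -> float:
    """Maximum computing resource: F normally; boost*F (assumed factor > 1)
    when the overclocking state is enabled."""
    return boost * F if overclocked else F
```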
6. The intelligent dynamic task computing offload method based on edge computing system of claim 1,
in time slot s, the overclocking working time t of the mobile edge computing server in step 5 satisfies the following condition:
7. The intelligent dynamic task computing offload method based on edge computing system of claim 1,
in time slot s, the stability constraint of the task queue in step 6 is satisfied:
8. The intelligent dynamic task computing offload method based on edge computing system of claim 1,
user offload benefit model X considered in step 7 at time slot snCan be expressed as:
Xn(s)=ρn(s)log2[1+Dn(s)]
where ρn(s) denotes the offloading revenue weight of the nth user (the subscript n denotes the nth user, the index s denotes the sth time slot); the computation overhead model for an offloading task executed locally can be expressed as:
where the two weights are, respectively, the delay weight and the energy consumption weight of the task data Dn(s) (the superscript t denotes the delay loss index and the superscript e denotes the energy loss index), and the energy consumption of executing the task locally also appears (the superscript l denotes the local execution index); the computation overhead model for an offloading task executed at the edge can be expressed as:
where the superscript r denotes the edge execution index and the transmission energy consumption of the offloading task appears (the subscript n denotes the nth user, the superscript p denotes the transmission index); the average offloading utility model of the system is expressed as:
where Hn(s) is the offloading benefit of the user (the subscript n denotes the nth user, the index s denotes the sth time slot);
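The benefit model Xn(s) = ρn(s)·log2[1 + Dn(s)] is stated explicitly in this claim; the weighted-sum overhead below is only the standard delay/energy trade-off form suggested by the listed weights, not the patent's exact expression:

```python
import math

# offload_benefit implements the claim's stated formula; weighted_overhead
# is an assumed standard weighted-sum cost, since the claim's overhead
# formulas are not reproduced in this text.

def offload_benefit(rho_ns: float, d_ns: float) -> float:
    """X_n(s) = rho_n(s) * log2(1 + D_n(s))."""
    return rho_ns * math.log2(1.0 + d_ns)

def weighted_overhead(w_delay: float, delay: float, w_energy: float, energy: float) -> float:
    """Assumed overhead: delay weight * delay + energy weight * energy."""
    return w_delay * delay + w_energy * energy
```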
at time slot s, the problem of trading off system utility and stability formulated in step 8 can be expressed as:
C2:π(s)∈{0,1}
where V is the drift-plus-penalty coefficient; C1 represents the offloading decision of the system in slot s, and C2 represents the overclocking decision of the system in slot s; condition C3 ensures that the computing resources allocated by the mobile edge computing server to each offloading task in slot s are positive; C4 indicates that, in slot s, the total computing resources used to process offloading tasks are limited by the maximum resources of the mobile edge computing server; C5 indicates that, in slot s, when the mobile edge computing server enters the overclocking state, the working time of the server cannot exceed T0; C6 indicates that, in slot s, the execution time of the task data Dn(s) should meet its quality of service; C7 ensures the stability of the system; in C8, the bound represents the maximum queue backlog of the user and ensures that the amount of task data offloaded by user device n in slot s does not exceed the local queue backlog; for the above symbols, the subscript n denotes the nth user, the index s denotes the sth time slot, the superscript p denotes the transmission index, the superscript r denotes the edge execution index, and the superscript l denotes the local execution index; the set represents the set of users who offload their tasks to the mobile edge for computational execution;
it can be seen that the offloading decision x(s) and the overclocking decision pi(s) are binary integers, while the resources f(s) allocated by the mobile edge computing server and the offloading data D(s) of the user equipment are continuous, where D(s) = {D1(s), D2(s), ..., DN(s)}; therefore, the formulated optimization problem is a nonlinear mixed-integer programming problem, which is NP-hard.
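The drift-plus-penalty coefficient V above points to a Lyapunov-style per-slot objective. A hedged sketch of such a score is given below; the generic form (penalty weighted by V plus a backlog-weighted drift term) is an assumption consistent with stability constraint C7, not the patent's exact objective:

```python
# Assumed Lyapunov drift-plus-penalty per-slot score. The method trades
# the system utility (weighted by V) against queue growth; the exact
# per-slot objective in the patent is not reproduced here.

def drift_plus_penalty(V: float, utility: float, queues, arrivals, served) -> float:
    """Per-slot score: -V * utility (reward turned into a penalty) plus the
    backlog-weighted net queue growth sum of Q_n * (a_n - d_n)."""
    drift = sum(q * (a - d) for q, a, d in zip(queues, arrivals, served))
    return -V * utility + drift
```

Minimizing this score slot by slot is the usual way a drift-plus-penalty method balances utility against queue stability.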
9. The intelligent dynamic task computing offload method based on edge computing system of claim 1,
the mathematical problem solving steps proposed in the step 8 are as follows:
initialization: user task setAll users in the set area unload the tasks to the mobile edge computing server, wherein the unloading rate is the optimal unloading rate of the tasks during local execution;
step 8.1: respectively under two conditions of service over-frequency operation and non-over-frequency operation, solving an optimal unloading decision x(s) by using a Lagrange algorithm and an iterative algorithm taking a greedy algorithm as a core idea;
step 8.2: according to the unloading decision obtained in the step 8.1, a Lagrange's algorithm, a heuristic algorithm and a comparison sorting algorithm are used for solving a resource allocation decision f(s) of the mobile edge computing server and an unloading rate decision D(s) of the user unloading task;
step 8.3: repeating step 8.1 and step 8.2 until the difference between the two objective functions is less than a minimum(can be provided with) A final offload decision x(s), a resource allocation decision f(s), and a task offload rate decision d(s) may be obtained;
step 8.4: and (4) solving the overclocking decision pi(s) of the intelligent overclocking mobile edge computing system by using a comparison algorithm according to the calculation unloading method obtained in the step 8.3, so as to obtain solutions { x(s), f(s), D(s), pi(s) } of the original problem.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110513404.0A CN113687924B (en) | 2021-05-11 | 2021-05-11 | Intelligent dynamic task computing and unloading method based on edge computing system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113687924A true CN113687924A (en) | 2021-11-23 |
CN113687924B CN113687924B (en) | 2023-10-20 |
Family
ID=78576400
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110513404.0A Active CN113687924B (en) | 2021-05-11 | 2021-05-11 | Intelligent dynamic task computing and unloading method based on edge computing system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113687924B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109121151A (en) * | 2018-11-01 | 2019-01-01 | 南京邮电大学 | Distributed discharging method under the integrated mobile edge calculations of cellulor |
CN110099384A (en) * | 2019-04-25 | 2019-08-06 | 南京邮电大学 | Resource regulating method is unloaded based on side-end collaboration more MEC tasks of multi-user |
WO2021012584A1 (en) * | 2019-07-25 | 2021-01-28 | 北京工业大学 | Method for formulating single-task migration strategy in mobile edge computing scenario |
CN112422346A (en) * | 2020-11-19 | 2021-02-26 | 北京航空航天大学 | Variable-period mobile edge computing unloading decision method considering multi-resource limitation |
KR20210026171A (en) * | 2019-08-29 | 2021-03-10 | 인제대학교 산학협력단 | Multi-access edge computing based Heterogeneous Networks System |
CN112600921A (en) * | 2020-12-15 | 2021-04-02 | 重庆邮电大学 | Heterogeneous mobile edge network-oriented dynamic task unloading method |
Non-Patent Citations (1)
Title |
---|
谢人超; 廉晓飞; 贾庆民; 黄韬; 刘韵洁: "A survey of mobile edge computing offloading technology" (移动边缘计算卸载技术综述), Journal on Communications (通信学报), no. 11 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||