CN110096362B - Multitask unloading method based on edge server cooperation - Google Patents


Info

Publication number
CN110096362B
CN110096362B (Application CN201910334429.7A)
Authority
CN
China
Prior art keywords
task
edge server
representing
user
execution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910334429.7A
Other languages
Chinese (zh)
Other versions
CN110096362A (en)
Inventor
柴蓉
张丽萍
陈前斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Quwei Technology Co.,Ltd.
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201910334429.7A priority Critical patent/CN110096362B/en
Publication of CN110096362A publication Critical patent/CN110096362A/en
Application granted granted Critical
Publication of CN110096362B publication Critical patent/CN110096362B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5017Task decomposition
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to a multitask offloading method based on edge server cooperation, and belongs to the field of wireless communication. The method comprises the following steps: S1: modeling edge server variables; S2: modeling user task characteristics; S3: modeling user task partition variables, offloading variables, and time slot allocation variables; S4: modeling the local execution delay of user tasks; S5: modeling the task execution delay of edge servers; S6: modeling user task scheduling constraints; S7: determining the user task offloading policy based on minimization of the maximum task processing delay of users. Under the condition of effective task execution, the invention ensures an optimal user task scheduling policy and an optimal offloading ratio, thereby minimizing user delay.

Description

Multitask unloading method based on edge server cooperation
Technical Field
The invention belongs to the field of wireless communication, and relates to a multitask offloading method based on edge server cooperation.
Background
With the development of the mobile internet and the popularization of intelligent terminals, new applications such as augmented reality (AR), virtual reality, and natural language processing are emerging. However, the compute-intensive nature of these applications poses a serious challenge to the task-processing capability of intelligent terminals. Mobile edge computing (MEC) has been developed to address this problem: by deploying MEC servers with strong computing power in the radio access network (RAN), users are supported in offloading tasks to MEC servers for execution, which effectively reduces the task execution delay and energy consumption of the terminal and significantly improves the user's quality of service (QoS). In an MEC system, task characteristics and the available system state should be considered jointly to design an efficient task offloading mechanism.
In existing research, some works design offloading strategies for multi-user offloading scenarios, optimizing user delay under a maximum allowable execution delay by solving for the optimal power allocation and computing-resource allocation of each user. Other works study execution-delay minimization using dynamic frequency and voltage scaling (DFVS) and energy-harvesting techniques, and propose a dynamic computation offloading algorithm based on Lyapunov optimization, which first makes a binary offloading decision per time slot and then allocates computing resources to locally executing users or transmission power to offloading users.
Existing research on multi-task offloading scenarios, however, rarely considers optimizing the maximum task processing delay across users; for delay-sensitive users this makes transmission performance and user experience difficult to guarantee. An optimization scheme based on the maximum task processing delay of users is therefore urgently needed.
Disclosure of Invention
In view of this, an object of the present invention is to provide a multitask offloading method based on edge server cooperation. A user task request may be executed in three ways: entirely locally, cooperatively by the local device and edge servers, or entirely by edge servers, and the user task may be divided into subtasks of arbitrary data size. Taking the maximum user task processing delay as the optimization target, the method determines the optimal user task offloading policy, offloading ratio, and time slot allocation scheme, minimizing the total task execution delay.
In order to achieve the purpose, the invention provides the following technical scheme:
A multitask offloading method based on edge server cooperation, specifically comprising the following steps:
S1: modeling edge server variables;
S2: modeling user task characteristics;
S3: modeling user task partition variables, offloading variables, and time slot allocation variables;
S4: modeling the local execution delay of user tasks;
S5: modeling the task execution delay of edge servers;
S6: modeling user task scheduling constraints;
S7: determining the user task offloading policy based on minimization of the maximum task processing delay of users.
Further, step S1 specifically includes: let E = {E_j} denote the edge server set, where E_j denotes the j-th edge server, 1 ≤ j ≤ N, and N is the number of edge servers.
Further, step S2 specifically includes: let UE = {UE_i} denote the set of user equipments (UEs) in the system with tasks to execute, where UE_i denotes the i-th UE, 1 ≤ i ≤ M, and M is the total number of user equipments. The task execution request of UE_i is described by the triple <I_i, S_i, T_i^d>, where I_i, S_i, and T_i^d denote, respectively, the input data volume required by the task to be executed, the data volume to be processed, and the task completion deadline.
Assume user tasks are executed within a given time period, divided sequentially into P time slots; let T_t denote the t-th time slot, 1 ≤ t ≤ P.
Further, step S3 specifically includes: the task of UE_i is divided into L_i subtasks of arbitrary data size, each of which is offloaded to a different edge server for execution or executed locally by the user.
Let λ_{i,l} ∈ [0,1] denote the ratio of the data volume of the l-th subtask of UE_i executed locally, and λ_{i,l,j} ∈ [0,1] the ratio of the data volume of the l-th subtask of UE_i offloaded to edge server E_j for execution.
Let x_{i,l} ∈ {0,1} denote the local execution decision of the l-th subtask of UE_i: x_{i,l} = 1 if the l-th subtask of UE_i is executed locally; otherwise x_{i,l} = 0.
Let x_{i,l,j} ∈ {0,1} denote the scheduling decision of offloading the l-th subtask of UE_i to edge server E_j: x_{i,l,j} = 1 if the l-th subtask of UE_i is offloaded to E_j for execution; otherwise x_{i,l,j} = 0.
Let y_{i,l,j,t} ∈ {0,1} denote the time slot allocation indicator for offloading the l-th subtask of UE_i to edge server E_j: y_{i,l,j,t} = 1 if, in time slot t, the l-th subtask of UE_i is offloaded to E_j for execution; otherwise y_{i,l,j,t} = 0.
Further, step S4 specifically includes: the delay required for UE_i to execute the l-th subtask locally is modeled as

D_{i,l}^{loc} = λ_{i,l} S_i / f_i,

where f_i denotes the local computing capability of UE_i.
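As a minimal illustration, the step-S4 local-delay model can be computed directly. The function name and all numeric values below are my own assumptions, and S_i is interpreted as a CPU-cycle workload so that the delay is λ_{i,l}·S_i/f_i:

```python
# Sketch of the local execution delay model D_loc = lambda_il * S_i / f_i.
# (Hypothetical names and values; S_i is read as a CPU-cycle workload.)
def local_delay(lambda_il: float, s_i: float, f_i: float) -> float:
    """Delay for UE_i to process a fraction lambda_il of workload s_i locally."""
    return lambda_il * s_i / f_i

# Example: 40% of a 1e9-cycle task on a 1 GHz device -> 0.4 s
print(local_delay(0.4, 1e9, 1e9))
```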
Further, step S5 specifically includes: assume edge server E_j executes the tasks offloaded by all user equipments sequentially, and let D_j denote the sum of the delays of the UE subtasks executed by edge server E_j, i.e.

D_j = Σ_{i=1}^{M} D_{i,j},

where D_{i,j} = Σ_{l=1}^{L_i} x_{i,l,j} D_{i,l,j} denotes the total delay required to execute all tasks of UE_i on edge server E_j, and D_{i,l,j} denotes the delay required by edge server E_j to execute the offloaded l-th subtask of UE_i, modeled as

D_{i,l,j} = D_{i,l,j}^{tr} + D_{i,l,j}^{exe},

where D_{i,l,j}^{tr} = λ_{i,l,j} I_i / R_{i,j} denotes the delay required to transmit the task of UE_i to edge server E_j, and

R_{i,j} = B_{i,j} log2(1 + P_{i,j} g_{i,j} / σ^2)

denotes the transmission rate when the task of UE_i is offloaded to edge server E_j; B_{i,j} denotes the transmission bandwidth occupied by offloading the task of UE_i to E_j, P_{i,j} the transmission power used, g_{i,j} the channel gain of the link between UE_i and E_j, and σ^2 the channel noise power. D_{i,l,j}^{exe} = λ_{i,l,j} S_i / f_j denotes the processing delay of executing the l-th subtask of UE_i on edge server E_j, where f_j denotes the computing capability of edge computing server E_j.
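A sketch of the step-S5 edge-delay model under assumed parameter values (the bandwidth, power, gain, and noise figures below are illustrative, not from the patent):

```python
import math

# Transmission at the Shannon rate, plus processing on edge server E_j.
def transmission_rate(b: float, p: float, g: float, sigma2: float) -> float:
    # R_ij = B_ij * log2(1 + P_ij * g_ij / sigma^2)
    return b * math.log2(1.0 + p * g / sigma2)

def edge_delay(lam, i_i, s_i, b, p, g, sigma2, f_j):
    r = transmission_rate(b, p, g, sigma2)
    d_tr = lam * i_i / r      # upload delay of the offloaded fraction of I_i
    d_exe = lam * s_i / f_j   # processing delay of that fraction on E_j
    return d_tr + d_exe

# 10 MHz bandwidth with P*g/sigma^2 = 3 gives R = 10e6 * log2(4) = 20 Mbit/s
r = transmission_rate(10e6, 0.1, 3e-6, 1e-7)
print(r)
```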
Further, step S6 specifically includes:
(1) Task offloading constraints: edge server E_j receives at most one subtask of UE_i, i.e.

Σ_{l=1}^{L_i} x_{i,l,j} ≤ 1;

each subtask of UE_i is offloaded to at most one edge server, i.e.

Σ_{j=1}^{N} x_{i,l,j} ≤ 1;

and UE_i executes at most one subtask locally, i.e.

Σ_{l=1}^{L_i} x_{i,l} ≤ 1.

(2) The task offloading variables and the task partition variables must be consistent: a partition ratio is positive only if the corresponding scheduling decision equals 1, and x_{i,l,j} ⊙ y_{i,l,j,t} = 1, where ⊙ denotes the XNOR operation on two binary variables. Task partition variable constraint: the partition ratios of each user task sum to one, i.e.

Σ_{l=1}^{L_i} (λ_{i,l} + Σ_{j=1}^{N} λ_{i,l,j}) = 1.

(3) The user task execution deadline constraint must be satisfied: T_i ≤ T_i^d, where T_i denotes the task completion time of user UE_i, given by T_i = max_{1≤l≤L_i} T_{i,l}; T_{i,l} denotes the completion time of subtask l of user UE_i, modeled as

T_{i,l} = t_{i,l,j}^{s} + D_{i,l,j},

where t_{i,l,j}^{s} denotes the moment at which subtask l of UE_i starts executing on edge server E_j.
(4) The time slot allocation of user equipment UE_i must cover the execution delay of each offloaded subtask, and the slot continuity constraint requires that the time slots allocated to a subtask be consecutive.
(5) The task of UE_i is offloaded to at most C_i edge servers simultaneously, i.e.

Σ_{j=1}^{N} Σ_{l=1}^{L_i} x_{i,l,j} ≤ C_i,

where C_i denotes the number of edge servers within the communication range of UE_i, C_i ≤ N. The number of subtasks of user equipment UE_i must satisfy 1 ≤ L_i ≤ C_i + 1.
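The offloading constraints of step S6 can be checked mechanically. The sketch below encodes constraints (1) and (5) for a single UE, under a slightly simplified reading in which every subtask is assigned exactly one execution site; the variable names are my own:

```python
# x_local[l] = 1 if subtask l runs locally; x_off[l][j] = 1 if subtask l
# is offloaded to edge server j. (Simplified, illustrative encoding.)
def feasible(x_local, x_off, c_i):
    L = len(x_local)
    N = len(x_off[0]) if x_off else 0
    # each subtask is assigned exactly one execution site (local or one server)
    for l in range(L):
        if x_local[l] + sum(x_off[l]) != 1:
            return False
    # (1) each edge server receives at most one subtask of this UE
    for j in range(N):
        if sum(x_off[l][j] for l in range(L)) > 1:
            return False
    # (1) at most one subtask executes locally
    if sum(x_local) > 1:
        return False
    # (5) at most c_i servers used simultaneously; 1 <= L_i <= c_i + 1
    if sum(sum(row) for row in x_off) > c_i or not (1 <= L <= c_i + 1):
        return False
    return True

print(feasible([1, 0], [[0, 0], [1, 0]], c_i=2))  # -> True
```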
Further, step S7 specifically includes: subject to the constraints of step S6, determine the user task scheduling policy that minimizes the maximum task processing delay over all users, thereby minimizing the total task execution delay, i.e.

(x_{i,l}^{*}, x_{i,l,j}^{*}, λ_{i,l}^{*}, λ_{i,l,j}^{*}, y_{i,l,j,t}^{*}) = arg min max_{1≤i≤M} T_i,

where x_{i,l}^{*} is the optimal scheduling decision for local execution of the l-th subtask of UE_i; x_{i,l,j}^{*} denotes the optimal scheduling policy for offloading the l-th subtask of UE_i to edge server E_j; λ_{i,l}^{*} is the optimal ratio of the l-th subtask of UE_i executed locally; λ_{i,l,j}^{*} denotes the optimal ratio of the l-th subtask of UE_i offloaded to edge server E_j; and y_{i,l,j,t}^{*} denotes the optimal time slot allocation policy for offloading the l-th subtask of UE_i to edge server E_j.
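The min-max objective of step S7 can be illustrated on a toy instance by brute force. The patent formulates an optimization problem; the exhaustive enumeration below is only a didactic stand-in, with made-up workloads and speeds:

```python
from itertools import product

# assign[l] = -1 means subtask l runs locally; assign[l] = j >= 0 means it
# runs on edge server j. Local pieces run sequentially on the UE; each
# server executes its assigned pieces sequentially (as in step S5).
def makespan(assign, work, f_local, f_edge):
    t_local = sum(w for a, w in zip(assign, work) if a == -1) / f_local
    t_edge = [0.0] * len(f_edge)
    for a, w in zip(assign, work):
        if a >= 0:
            t_edge[a] += w / f_edge[a]
    return max([t_local] + t_edge)

work = [2.0, 3.0, 4.0]                     # toy subtask workloads
best = min(product([-1, 0, 1], repeat=3),  # local, server 0, or server 1
           key=lambda a: makespan(a, work, 1.0, [2.0, 2.0]))
print(best, makespan(best, work, 1.0, [2.0, 2.0]))  # -> (-1, 0, 1) 2.0
```

Here the maximum completion time is minimized by keeping the smallest piece local and spreading the rest across the two (faster) servers.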
The invention has the following beneficial effects: under the condition of effective task execution, the user task scheduling policy and the offloading ratio are optimal, thereby minimizing user delay.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof.
Drawings
For a better understanding of the objects, aspects and advantages of the present invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a network diagram of edge-server-cooperative multitask offloading;
FIG. 2 is a flowchart of the multitask offloading method according to the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and embodiments may be combined with each other without conflict.
The invention provides a multi-task offloading method based on edge server cooperation. It is assumed that each user has a computation task to execute and that both the edge servers and the user equipment have certain task computing and processing capabilities. A user may adopt fully local execution, cooperative execution by the local device and edge servers, or execution entirely on edge servers. Taking the maximum user task processing delay as the optimization target, the method determines the optimal user task offloading policy and offloading ratio, minimizing the total task execution delay.
As shown in fig. 1, multiple users in the network have tasks to execute; each user selects a suitable offloading manner, and the task execution delay is minimized by optimizing the user task scheduling policy, offloading ratio, and time slot allocation scheme.
As shown in fig. 2, the multitask unloading method of the present invention specifically includes the following steps:
1) Modeling edge server variables: let E = {E_j} denote the edge server set, where E_j denotes the j-th edge server, 1 ≤ j ≤ N, and N is the number of edge servers.
2) Modeling user task characteristics:
Let UE = {UE_i} denote the set of user equipments (UEs) in the system with tasks to execute, where UE_i denotes the i-th UE, 1 ≤ i ≤ M, and M is the total number of user equipments. The task execution request of UE_i is described by the triple <I_i, S_i, T_i^d>, where I_i, S_i, and T_i^d denote, respectively, the input data volume required by the task to be executed, the data volume to be processed, and the task completion deadline.
Assume user tasks are executed within a given time period, divided into P time slots; let T_t denote the t-th time slot, 1 ≤ t ≤ P.
3) Modeling user task partition variables, offloading variables, and time slot allocation variables:
The task of UE_i is divided into L_i subtasks, each of which is offloaded to a different edge server for execution or executed locally by the user.
Let λ_{i,l} ∈ [0,1] denote the ratio of the data volume of the l-th subtask of UE_i executed locally, and λ_{i,l,j} ∈ [0,1] the ratio of the data volume of the l-th subtask of UE_i offloaded to edge server E_j for execution.
Let x_{i,l} ∈ {0,1} denote the local execution decision of the l-th subtask of UE_i: x_{i,l} = 1 if the l-th subtask of UE_i is executed locally; otherwise x_{i,l} = 0.
Let x_{i,l,j} ∈ {0,1} denote the scheduling decision of offloading the l-th subtask of UE_i to edge server E_j: x_{i,l,j} = 1 if the l-th subtask of UE_i is offloaded to E_j for execution; otherwise x_{i,l,j} = 0.
Let y_{i,l,j,t} ∈ {0,1} denote the time slot allocation indicator for offloading the l-th subtask of UE_i to edge server E_j: y_{i,l,j,t} = 1 if, in time slot t, the l-th subtask of UE_i is offloaded to E_j for execution; otherwise y_{i,l,j,t} = 0.
4) Modeling the local execution delay of user tasks: the delay required for UE_i to execute the l-th subtask locally is modeled as

D_{i,l}^{loc} = λ_{i,l} S_i / f_i,

where f_i denotes the local computing capability of UE_i.
5) Modeling the task execution delay of the edge server: assume edge server E_j executes the tasks offloaded by all user equipments sequentially, and let D_j denote the sum of the delays of the UE subtasks executed by edge server E_j, i.e.

D_j = Σ_{i=1}^{M} D_{i,j},

where D_{i,j} = Σ_{l=1}^{L_i} x_{i,l,j} D_{i,l,j} denotes the total delay required to execute all tasks of UE_i on edge server E_j, and D_{i,l,j} denotes the delay required by edge server E_j to execute the offloaded l-th subtask of UE_i, modeled as

D_{i,l,j} = D_{i,l,j}^{tr} + D_{i,l,j}^{exe},

where D_{i,l,j}^{tr} = λ_{i,l,j} I_i / R_{i,j} denotes the delay required to transmit the task of UE_i to edge server E_j, and R_{i,j} denotes the transmission rate when the task of UE_i is offloaded to edge server E_j, modeled as

R_{i,j} = B_{i,j} log2(1 + P_{i,j} g_{i,j} / σ^2),

where B_{i,j} denotes the transmission bandwidth occupied by offloading the task of UE_i to E_j, P_{i,j} the transmission power used, g_{i,j} the channel gain of the link between UE_i and E_j, and σ^2 the channel noise power. D_{i,l,j}^{exe} = λ_{i,l,j} S_i / f_j denotes the processing delay of executing the l-th subtask of UE_i on edge server E_j, where f_j denotes the computing capability of edge computing server E_j.
6) Modeling user task scheduling constraints:
(1) Task offloading constraints: edge server E_j receives at most one subtask of UE_i, i.e.

Σ_{l=1}^{L_i} x_{i,l,j} ≤ 1;

each subtask of UE_i is offloaded to at most one edge server, i.e.

Σ_{j=1}^{N} x_{i,l,j} ≤ 1;

and UE_i executes at most one subtask locally, i.e.

Σ_{l=1}^{L_i} x_{i,l} ≤ 1.

(2) The task offloading variables and the task partition variables must be consistent: a partition ratio is positive only if the corresponding scheduling decision equals 1, and x_{i,l,j} ⊙ y_{i,l,j,t} = 1, where ⊙ denotes the XNOR operation on two binary variables. Task partition variable constraint: the partition ratios of each user task sum to one, i.e.

Σ_{l=1}^{L_i} (λ_{i,l} + Σ_{j=1}^{N} λ_{i,l,j}) = 1.

(3) The user task execution deadline constraint must be satisfied: T_i ≤ T_i^d, where T_i denotes the task completion time of user UE_i, given by T_i = max_{1≤l≤L_i} T_{i,l}; T_{i,l} denotes the completion time of subtask l of user UE_i, modeled as

T_{i,l} = t_{i,l,j}^{s} + D_{i,l,j},

where t_{i,l,j}^{s} denotes the moment at which subtask l of UE_i starts executing on edge server E_j.
(4) The time slot allocation of user equipment UE_i must cover the execution delay of each offloaded subtask, and the slot continuity constraint requires that the time slots allocated to a subtask be consecutive.
(5) The task of UE_i is offloaded to at most C_i edge servers simultaneously, i.e.

Σ_{j=1}^{N} Σ_{l=1}^{L_i} x_{i,l,j} ≤ C_i,

where C_i denotes the number of edge servers within the communicable range of UE_i, C_i ≤ N. The number of subtasks of user equipment UE_i must satisfy 1 ≤ L_i ≤ C_i + 1.
7) Determining the user task offloading policy based on minimization of the maximum task processing delay of the user:
Determine the user task scheduling policy that minimizes the maximum task processing delay over all users, thereby minimizing the total task execution delay, i.e.

(x_{i,l}^{*}, x_{i,l,j}^{*}, λ_{i,l}^{*}, λ_{i,l,j}^{*}, y_{i,l,j,t}^{*}) = arg min max_{1≤i≤M} T_i,

where x_{i,l}^{*} is the optimal scheduling decision for local execution of the l-th subtask of UE_i; x_{i,l,j}^{*} denotes the optimal scheduling policy for offloading the l-th subtask of UE_i to edge server E_j; λ_{i,l}^{*} is the optimal ratio of the l-th subtask of UE_i executed locally; λ_{i,l,j}^{*} denotes the optimal ratio of the l-th subtask of UE_i offloaded to edge server E_j; and y_{i,l,j,t}^{*} denotes the optimal time slot allocation policy for offloading the l-th subtask of UE_i to edge server E_j.
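To make the three execution modes concrete, the following sketch compares their delays on assumed numbers. All values, function names, and the assumption in `cooperative` that the local piece runs in parallel with the offloaded remainder are my own, not the patent's:

```python
# Delay of the three execution modes: all-local, all-edge, and cooperative.
def all_local(s, f_ue):
    return s / f_ue

def all_edge(i, s, r, f_edge):
    return i / r + s / f_edge          # upload, then process on the server

def cooperative(lam, i, s, f_ue, r, f_edge):
    # fraction lam runs locally, overlapping with the offloaded remainder
    local = lam * s / f_ue
    edge = (1 - lam) * i / r + (1 - lam) * s / f_edge
    return max(local, edge)

I, S, f_ue, R, f_edge = 8e6, 1e9, 1e9, 2e7, 5e9
print(all_local(S, f_ue))                       # 1.0 s
print(all_edge(I, S, R, f_edge))                # about 0.6 s
print(cooperative(0.3, I, S, f_ue, R, f_edge))  # about 0.42 s
```

With these numbers, splitting the task between device and edge beats either pure mode, which is the motivation for the cooperative execution mode of the method.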
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (2)

1. A multitask offloading method based on edge server cooperation, characterized by specifically comprising the following steps:
S1: modeling edge server variables, specifically including: let E = {E_j} denote the edge server set, where E_j denotes the j-th edge server, 1 ≤ j ≤ N, and N is the number of edge servers;
S2: modeling user task characteristics, specifically including: let UE = {UE_i} denote the set of user equipments (UEs) in the system with tasks to execute, where UE_i denotes the i-th UE, 1 ≤ i ≤ M, and M is the total number of user equipments; the task execution request of UE_i is described by the triple <I_i, S_i, T_i^d>, where I_i, S_i, and T_i^d denote, respectively, the input data volume required by the task to be executed, the data volume to be processed, and the task completion deadline; assume user tasks are executed within a given time period, divided sequentially into P time slots, and let T_t denote the t-th time slot, 1 ≤ t ≤ P;
S3: modeling user task partition variables, offloading variables, and time slot allocation variables, specifically including: the task of UE_i is divided into L_i subtasks, each of which is offloaded to a different edge server for execution or executed locally by the user;
let λ_{i,l} ∈ [0,1] denote the ratio of the data volume of the l-th subtask of UE_i executed locally, and λ_{i,l,j} ∈ [0,1] the ratio of the data volume of the l-th subtask of UE_i offloaded to edge server E_j for execution;
let x_{i,l} ∈ {0,1} denote the local execution decision of the l-th subtask of UE_i: x_{i,l} = 1 if the l-th subtask of UE_i is executed locally; otherwise x_{i,l} = 0;
let x_{i,l,j} ∈ {0,1} denote the scheduling decision of offloading the l-th subtask of UE_i to edge server E_j: x_{i,l,j} = 1 if the l-th subtask of UE_i is offloaded to E_j for execution; otherwise x_{i,l,j} = 0;
let y_{i,l,j,t} ∈ {0,1} denote the time slot allocation indicator for offloading the l-th subtask of UE_i to edge server E_j: y_{i,l,j,t} = 1 if, in time slot t, the l-th subtask of UE_i is offloaded to E_j for execution; otherwise y_{i,l,j,t} = 0;
S4: modeling the local execution delay of user tasks;
the delay required for UE_i to execute the l-th subtask locally is modeled as

D_{i,l}^{loc} = λ_{i,l} S_i / f_i,

where f_i denotes the local computing capability of UE_i, S_i denotes the data volume to be processed of the task of UE_i, and λ_{i,l} ∈ [0,1] denotes the ratio of the data volume of the l-th subtask of UE_i executed locally;
s5: modeling the task execution delay of the edge server:
suppose edge server E j Sequentially executing the tasks unloaded by all the user equipment and enabling D j As an edge server E j Performing the sum of the time delays of the sub-tasks offloaded by the UE, i.e.
Figure FDA0004105230130000012
Wherein it is present>
Figure FDA0004105230130000013
Representing edge servers E j Performing UE i The time delay required for the unloaded first subtask;
wherein,
Figure FDA0004105230130000021
representing a UE i All tasks of (2) are at the edge server E j The total time delay required for the upper execution;
wherein,
Figure FDA0004105230130000022
representing a UE i Task of (2) to edge server E j The required delay in the transmission of the signal,
Figure FDA0004105230130000023
representing a UE i Is offloaded to the edge server E j A corresponding transmission rate; b is i,j Representing a UE i Is offloaded to the edge server E j Occupied transmission bandwidth, P i,j Representing a UE i Is offloaded to the edge server E j Transmission power used, g i,j Representing a UE i And edge server E j Channel gain of the link between, σ 2 Representing the channel noise power;
Figure FDA0004105230130000024
representing a UE i At the edge server E j On performs a desired processing delay, wherein>
Figure FDA0004105230130000025
Representing edge compute servers E j The computing power of (a);
s6: modeling a user task scheduling limiting condition, which specifically comprises the following steps:
(1) Task uninstalling constraint conditions: suppose an edge server E j Receiving-maximum UE i A subtask of (i), i.e.
Figure FDA0004105230130000026
UE i Is offloaded to at most one edge server, i.e. < >>
Figure FDA0004105230130000027
And UE i Unload at most one to local, i.e. < >>
Figure FDA0004105230130000028
(2) The task unloading variables and the task dividing variables meet the following conditions:
Figure FDA0004105230130000029
x i,l,j ⊙y i,l,j,t =1, where £ indicates an exclusive or operation of a binary variable; task segmentation variable constraint conditions:
Figure FDA00041052301300000210
(3) The user task execution deadline constraint should be satisfied:
Figure FDA00041052301300000211
wherein, T i Representing a user UE i When the task is completed, make->
Figure FDA00041052301300000212
Wherein, T i,l Representing a user UE i When the subtask l finishes the execution time, modeling is carried out as
Figure FDA00041052301300000213
Wherein it is present>
Figure FDA00041052301300000214
Representing a UE i Subtask l at edge Server E j The moment when the task starts to be executed;
(4) User Equipment (UE) i The time slot allocation should satisfy:
Figure FDA00041052301300000215
slot continuity constraints:
Figure FDA00041052301300000216
(5)UE i Task of (2) is unloaded to C at most simultaneously i An edge serviceDevices, i.e.
Figure FDA0004105230130000031
Wherein, C i Representing a UE i Number of all edge servers in communication range, C i N is less than or equal to N; user Equipment (UE) i The number of subtasks should satisfy: l is more than or equal to 1 i ≤C i +1;
S7: determining the user task offloading policy based on minimization of the maximum task processing delay of the user.
2. The method for multitask offload based on edge server cooperation according to claim 1, wherein the step S7 specifically includes: under the condition of meeting the constraint condition of the step S6, determining a user task scheduling strategy based on the minimum of the maximum task processing time delay of the user, and realizing the minimum of the total time delay of task execution, namely
Figure FDA0004105230130000032
wherein
Figure FDA0004105230130000033
is the optimal scheduling decision for local execution of the l-th subtask of UE_i;
Figure FDA0004105230130000034
represents the optimal scheduling decision for offloading the l-th subtask of UE_i to edge server E_j for execution;
Figure FDA0004105230130000035
is the optimal ratio of the l-th subtask of UE_i executed locally;
Figure FDA0004105230130000036
represents the optimal ratio of the l-th subtask of UE_i offloaded to edge server E_j for execution;
Figure FDA0004105230130000037
represents the optimal time slot allocation strategy for offloading the l-th subtask of UE_i to edge server E_j for execution.
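As an illustration only (not the patented algorithm), the min-max objective of step S7 can be sketched as a brute-force search over offloading decisions for a single user. The execution-time model below (workload divided by processor speed, transmission delay omitted) is a hypothetical simplification:

```python
# Illustrative brute-force sketch of the step-S7 objective: minimize the
# latest subtask finish time over all assignments of subtasks to the local
# processor (index -1) or an edge server. Hypothetical cost model.
from itertools import product

def min_max_delay(subtask_sizes, local_speed, server_speeds):
    """Return (best_delay, best_plan) where best_plan maps each subtask to
    -1 (local) or a server index, minimizing the maximum completion time.
    Subtasks on the same processor are assumed to execute sequentially."""
    choices = [-1] + list(range(len(server_speeds)))
    best_delay, best_plan = float("inf"), None
    for plan in product(choices, repeat=len(subtask_sizes)):
        load = {}  # total workload accumulated on each processor
        for size, proc in zip(subtask_sizes, plan):
            load[proc] = load.get(proc, 0.0) + size
        delay = max(
            work / (local_speed if proc == -1 else server_speeds[proc])
            for proc, work in load.items()
        )
        if delay < best_delay:
            best_delay, best_plan = delay, plan
    return best_delay, best_plan
```

With two equal subtasks, a local speed of 1.0, and one server of speed 2.0, splitting the subtasks across the two processors halves the worst-case delay relative to running both locally. Exhaustive search is exponential in the number of subtasks; the patent's scheme instead determines the strategy analytically from the claimed constraints.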
CN201910334429.7A 2019-04-24 2019-04-24 Multitask unloading method based on edge server cooperation Active CN110096362B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910334429.7A CN110096362B (en) 2019-04-24 2019-04-24 Multitask unloading method based on edge server cooperation

Publications (2)

Publication Number Publication Date
CN110096362A CN110096362A (en) 2019-08-06
CN110096362B true CN110096362B (en) 2023-04-14

Family

ID=67445774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910334429.7A Active CN110096362B (en) 2019-04-24 2019-04-24 Multitask unloading method based on edge server cooperation

Country Status (1)

Country Link
CN (1) CN110096362B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110764885B (en) * 2019-08-28 2024-04-09 中科晶上(苏州)信息技术有限公司 Method for splitting and unloading DNN tasks of multiple mobile devices
KR20210067468A (en) 2019-11-29 2021-06-08 삼성전자주식회사 Method and apparatus for offloading data in a wireless communication system
CN111131835B (en) * 2019-12-31 2021-02-26 中南大学 Video processing method and system
CN113727348B (en) * 2020-05-12 2023-07-11 华为技术有限公司 Method, device, system and storage medium for detecting user data of User Equipment (UE)
CN111988805B (en) * 2020-08-28 2022-03-29 重庆邮电大学 End edge cooperation method for reliable time delay guarantee
CN112202886B (en) * 2020-09-30 2023-06-23 广州大学 Task unloading method, system, device and storage medium
CN112822264B (en) * 2021-01-05 2022-07-15 中国科学院计算技术研究所 DNN task unloading method
CN114285847A (en) * 2021-12-17 2022-04-05 中国电信股份有限公司 Data processing method and device, model training method and device, electronic equipment and storage medium
CN115133972B (en) * 2022-03-14 2023-06-27 重庆邮电大学 Satellite system task scheduling and unloading method
CN117635924B (en) * 2024-01-25 2024-05-07 南京慧然科技有限公司 Low-energy-consumption target detection method based on adaptive knowledge distillation

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107995660A (en) * 2017-12-18 2018-05-04 重庆邮电大学 Support Joint Task scheduling and the resource allocation methods of D2D- Edge Servers unloading
CN108809723A (en) * 2018-06-14 2018-11-13 重庆邮电大学 A kind of unloading of Edge Server Joint Task and convolutional neural networks layer scheduling method
CN108880893A (en) * 2018-06-27 2018-11-23 重庆邮电大学 A kind of mobile edge calculations server consolidation collection of energy and task discharging method
CN109246761A (en) * 2018-09-11 2019-01-18 北京工业大学 Consider the discharging method based on alternating direction multipliers method of delay and energy consumption
CN109240818A (en) * 2018-09-04 2019-01-18 中南大学 Task discharging method based on user experience in a kind of edge calculations network
CN109656703A (en) * 2018-12-19 2019-04-19 重庆邮电大学 A kind of mobile edge calculations auxiliary vehicle task discharging method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10439890B2 (en) * 2016-10-19 2019-10-08 Tata Consultancy Services Limited Optimal deployment of fog computations in IoT environments


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Task Execution Cost Minimization-based Joint Computation Offloading and Resource Allocation for Cellular D2D Systems";Junliang Lin et al.;《2018 IEEE 29th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC)》;20181220;第1-5页 *
"移动边缘计算中的计算卸载和资源管理方案";李邱苹等;《电信科学》;20190319;第35卷(第3期);第36-46页 *


Similar Documents

Publication Publication Date Title
CN110096362B (en) Multitask unloading method based on edge server cooperation
CN107995660B (en) Joint task scheduling and resource allocation method supporting D2D-edge server unloading
CN109857546B (en) Multi-server mobile edge computing unloading method and device based on Lyapunov optimization
CN109814951B (en) Joint optimization method for task unloading and resource allocation in mobile edge computing network
CN113950066B (en) Single server part calculation unloading method, system and equipment under mobile edge environment
CN111278132B (en) Resource allocation method for low-delay high-reliability service in mobile edge calculation
CN111240701B (en) Task unloading optimization method for end-side-cloud collaborative computing
CN112888002B (en) Game theory-based mobile edge computing task unloading and resource allocation method
CN109151864B (en) Migration decision and resource optimal allocation method for mobile edge computing ultra-dense network
CN109756912B (en) Multi-user multi-base station joint task unloading and resource allocation method
CN111711962B (en) Cooperative scheduling method for subtasks of mobile edge computing system
CN110489176B (en) Multi-access edge computing task unloading method based on boxing problem
CN110519370B (en) Edge computing resource allocation method based on facility site selection problem
CN113220356B (en) User computing task unloading method in mobile edge computing
CN113867843B (en) Mobile edge computing task unloading method based on deep reinforcement learning
CN111511028B (en) Multi-user resource allocation method, device, system and storage medium
CN113835878A (en) Resource allocation method and device, computer equipment and storage medium
CN112596910A (en) Cloud computing resource scheduling method in multi-user MEC system
El Haber et al. Computational cost and energy efficient task offloading in hierarchical edge-clouds
Hmimz et al. Joint radio and local resources optimization for tasks offloading with priority in a mobile edge computing network
CN110780986B (en) Internet of things task scheduling method and system based on mobile edge computing
CN115955479A (en) Task rapid scheduling and resource management method in cloud edge cooperation system
CN113741999A (en) Dependency-oriented task unloading method and device based on mobile edge calculation
Chen et al. Joint optimization of task caching, computation offloading and resource allocation for mobile edge computing
CN112130927B (en) Reliability-enhanced mobile edge computing task unloading method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240827

Address after: 518000 1104, Building A, Zhiyun Industrial Park, No. 13, Huaxing Road, Henglang Community, Longhua District, Shenzhen, Guangdong Province

Patentee after: Shenzhen Hongyue Enterprise Management Consulting Co.,Ltd.

Country or region after: China

Address before: 400065 Chongqing Nan'an District huangjuezhen pass Chongwen Road No. 2

Patentee before: CHONGQING University OF POSTS AND TELECOMMUNICATIONS

Country or region before: China

TR01 Transfer of patent right

Effective date of registration: 20240920

Address after: No. 4068 Yitian Road, Fu'an Community, Futian Street, Futian District, Shenzhen City, Guangdong Province, China 518033. Excellence Times Square Building 3501-3504, 3510

Patentee after: Shenzhen Quwei Technology Co.,Ltd.

Country or region after: China

Address before: 518000 1104, Building A, Zhiyun Industrial Park, No. 13, Huaxing Road, Henglang Community, Longhua District, Shenzhen, Guangdong Province

Patentee before: Shenzhen Hongyue Enterprise Management Consulting Co.,Ltd.

Country or region before: China