CN111913723B - Pipeline-based cloud-edge-end collaborative offloading method and system

Info

Publication number: CN111913723B (application CN202010544184.3A)
Authority: CN (China)
Prior art keywords: task, mobile device, computing, edge, cloud
Legal status: Active (granted)
Application number: CN202010544184.3A
Other languages: Chinese (zh)
Other versions: CN111913723A
Inventors: 开彩红 (Kai Caihong), 周浩 (Zhou Hao), 黄伟 (Huang Wei), 彭敏 (Peng Min)
Assignee (current and original): Hefei University of Technology

Application filed by Hefei University of Technology; priority to CN202010544184.3A; published as CN111913723A, granted and published as CN111913723B.

Classifications

    • G06F8/62 — Uninstallation (G06F: electric digital data processing; G06F8/00 arrangements for software engineering; G06F8/60 software deployment; G06F8/61 installation)
    • G06F9/44594 — Unloading (G06F9/00 arrangements for program control; G06F9/44 arrangements for executing specific programs; G06F9/445 program loading or initiating)
    • G06F9/5027 — Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, servers, terminals (G06F9/46 multiprogramming arrangements; G06F9/50 allocation of resources; G06F9/5005 allocation of resources to service a request)

Abstract

The invention provides a pipeline-based cloud-edge-end collaborative offloading method and system in the technical field of mobile edge computing. Tasks are distributed through a pipeline offloading strategy, which effectively reduces the communication resources consumed in transmitting delay-sensitive tasks between mobile devices and edge nodes and between edge nodes and the cloud center, thereby shortening the transmission time of delay-sensitive tasks and reducing delay. Meanwhile, the pipeline-based cloud-edge-end collaborative offloading architecture jointly considers the pipeline offloading strategy, computing resources, and communication resources, and formulates the problem of minimizing the total waiting delay of all mobile devices.

Description

Pipeline-based cloud-edge-end collaborative offloading method and system
Technical Field
The invention relates to the technical field of mobile edge computing, and in particular to a pipeline-based cloud-edge-end collaborative offloading method and system.
Background
As mobile devices grow more intelligent, demand for new computation-hungry applications such as virtual reality, natural language processing, ultra-high-definition video, and online gaming keeps increasing. Providing high computing capacity to mobile devices has therefore become another important target of future wireless communication systems. One of the main approaches to this goal is to offload computation-intensive tasks to nearby servers with richer resources, a method referred to as computation offloading.
Existing methods mainly perform complete task offloading through cooperation among the MEC (mobile edge computing) server on an edge node, the cloud center, and the mobile device. To process a task jointly at the terminal, the edge, and the cloud, the prior art mainly considers the computing capability and power consumption limits of the mobile terminal itself and of the edge node when offloading, with the aim of minimizing the total waiting delay of all mobile devices.
However, the applicant of the present invention finds that existing computation offloading methods cannot meet the timeliness requirements of delay-sensitive tasks.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a pipeline-based cloud-edge-end collaborative offloading method and system, solving the technical problem that existing computation offloading methods cannot meet the timeliness requirements of delay-sensitive tasks.
(II) technical scheme
To achieve this purpose, the invention is realized by the following technical scheme:
the invention provides a pipeline-based cloud-edge-end collaborative offloading method, which distributes tasks through a pipeline offloading strategy. The pipeline offloading strategy is as follows: for any task, the mobile device first judges whether enough resources are available locally; when the mobile device has sufficient capacity to process the task, it processes the entire task on its own; otherwise, the mobile device processes part of the task according to its own computing capacity and offloads the remaining task to the nearest edge node; the MEC on the edge node decides how much of the task to process according to the received task amount and its own computing resources; if the MEC cannot process all of the offloaded task, it processes the portion matching its computing capacity and the remainder is offloaded to a cloud center with sufficient computing resources. The method comprises the following steps:
S1, acquiring task data and computing resource data, and setting variable parameters based on the task data and the computing resource data, wherein the variable parameters include the set of all mobile device tasks I, the current task amount $I_n$ of mobile device n, and the computing power $f_n$ of mobile device n;
S2, according to the pipeline offloading strategy for task execution amounts, allocating the current task amount $I_n$ of mobile device n among mobile device n, the edge node, and the cloud center, and constructing, from the variable parameters and their initial values, delay models associated with the task execution amounts allocated to mobile device n, the edge node, and the cloud center respectively;
S3, obtaining, according to the delay models, a maximum delay model for when all the mobile device tasks I have been offloaded and executed;
S4, defining the problem of jointly optimizing the task pipeline offloading strategy and the allocation of computing resources and communication resources, with the objective of minimizing the total waiting delay of all mobile devices, and converting the allocation problem into an objective function and corresponding constraints according to the maximum delay model;
S5, obtaining the optimal task pipeline offloading strategy and the optimal computing- and communication-resource allocation strategy based on the objective function, the constraints, the current task amount of each mobile device, the computing resources required by the task, and the computing power $f_n$ of mobile device n.
Preferably, the variable parameters further include: the computing resources $\psi_n$ required by a task and the deadline $T_n^{\max}$ for its completion, the number of edge nodes S, the number of mobile devices N, the computing-resource upper limit $f_s^{\max}$ of edge node s, and the computing-resource upper limit $f_c^{\max}$ of the cloud center.

Preferably, the computing resources include the computing power $f_{n,s}$ allocated by edge node s to mobile device n and the computing power $f_{n,c}$ allocated by the cloud center to edge node s.
Preferably, the delay models associated with the task execution amounts allocated to mobile device n, the edge node, and the cloud center are respectively as follows.

The delay model of mobile device n is:

$$T_n^{l} = \frac{\psi_n^{l}}{f_n}$$

where $T_n^{l}$ denotes the delay for mobile device n to execute its share of the offloaded task, $\psi_n^{l}$ denotes the computing resources required for local computation by mobile device n, and $f_n$ denotes the computing power of mobile device n.

The delay model of the edge node, composed of the transmission delay and the computation delay of the task offloaded from mobile device n to edge node s, is:

$$T_{n,s} = T_{n,s}^{t} + T_{n,s}^{c}, \qquad T_{n,s}^{t} = \frac{I_n^{s}}{r_{n,s}}, \qquad T_{n,s}^{c} = \frac{\psi_n^{s}}{f_{n,s}}$$

where $T_{n,s}^{t}$ denotes the transmission delay of offloading the task from mobile device n to edge node s, $T_{n,s}^{c}$ denotes the computation delay of the task offloaded from mobile device n to edge node s, $T_{n,s}$ denotes the delay for edge node s to execute the offloaded task, $I_n^{s}$ denotes the size of the data computed by the edge node, $r_{n,s}$ denotes the transmission rate between mobile device n and edge node s, $\psi_n^{s}$ denotes the computing resources required for the edge-node computation, and $f_{n,s}$ denotes the computing power allocated by edge node s to mobile device n.

The delay model of the cloud center is:

$$T_{s,c} = T_{s,c}^{t} + T_{s,c}^{c}, \qquad T_{s,c}^{t} = \frac{I_n^{c}}{r_{s,c}}, \qquad T_{s,c}^{c} = \frac{\psi_n^{c}}{f_{n,c}}$$

where $T_{s,c}^{t}$ denotes the transmission delay of offloading the task from edge node s to cloud center c, $T_{s,c}^{c}$ denotes the computation delay of the task offloaded from edge node s to cloud center c, $T_{s,c}$ denotes the delay for cloud center c to execute the offloaded task, $I_n^{c}$ denotes the size of the data computed by the cloud center, $r_{s,c}$ denotes the transmission rate between edge node s and cloud center c, $\psi_n^{c}$ denotes the computing resources required for the cloud-center computation, and $f_{n,c}$ denotes the computing power that cloud center c allocates to edge node s for task n.
Preferably, obtaining, according to the delay models, the maximum delay model for when all the mobile device tasks I have been offloaded and executed comprises:

using the set $x = \{x_n, x_{n,s} \mid n \in N, s \in S\}$ to represent the pipeline offloading policy, where $x_n I_n$ ($x_n \in [0,1]$) denotes the part of the task processed locally, $(1-x_n)\,x_{n,s} I_n$ denotes the part of the remaining task $(1-x_n)I_n$ that edge node s processes ($x_{n,s} \in [0,1]$), and the rest, $(1-x_n)(1-x_{n,s})I_n$, is offloaded to the cloud center for processing; the pipeline offloading policy directly determines whether the task can be completed within the deadline. The task sizes in local computing, edge computing, and cloud-center computing can be modeled as:

$$I_n^{l} = x_n I_n, \qquad I_n^{s} = (1-x_n)\,x_{n,s} I_n, \qquad I_n^{c} = (1-x_n)(1-x_{n,s}) I_n$$

The computing resources required for mobile-device local computing, edge computing, and cloud-center computing can be rewritten as:

$$\psi_n^{l} = x_n \psi_n, \qquad \psi_n^{s} = (1-x_n)\,x_{n,s}\psi_n, \qquad \psi_n^{c} = (1-x_n)(1-x_{n,s})\psi_n$$
The processing delay of the task is then re-modeled, specifically as follows.

Mobile-device local computing: for the local computation part, processing is performed on mobile device n, and the corresponding computation delay is rewritten as:

$$T_n^{l} = \frac{x_n \psi_n}{f_n}$$

Edge computing: if the mobile device cannot compute the whole task locally, mobile device n offloads the remaining task $(1-x_n)I_n$ to the nearest edge node. Since the computing resources of the MEC server deployed on edge node s are limited, edge node s processes a fraction $x_{n,s}$ of the remaining task. The task of size $(1-x_n)I_n$ is offloaded from mobile device n to edge node s over a wireless channel, and the transmission delay can be redefined as:

$$T_{n,s}^{t} = \frac{(1-x_n)\, I_n}{r_{n,s}}$$

The computation delay for edge node s to process the fraction $x_{n,s}$ of the remaining task can be redefined as:

$$T_{n,s}^{c} = \frac{(1-x_n)\,x_{n,s}\,\psi_n}{f_{n,s}}$$

Cloud computing: when the edge node cannot compute all of its share, it offloads all remaining tasks $(1-x_n)(1-x_{n,s})I_n$ to cloud center c. The task of size $(1-x_n)(1-x_{n,s})I_n$ is offloaded from edge node s to the cloud center over a wireless fronthaul channel, and the transmission delay can be rewritten as:

$$T_{s,c}^{t} = \frac{(1-x_n)(1-x_{n,s})\, I_n}{r_{s,c}}$$

The computation delay for cloud center c to process the remaining task can be rewritten as:

$$T_{s,c}^{c} = \frac{(1-x_n)(1-x_{n,s})\,\psi_n}{f_{n,c}}$$

At this point, the total delay for mobile device n to complete its task can be expressed as:

$$T_n = \max\left\{\,T_n^{l},\;\; T_{n,s}^{t} + T_{n,s}^{c},\;\; T_{n,s}^{t} + T_{s,c}^{t} + T_{s,c}^{c}\,\right\}$$

Thus, the maximum delay model is:

$$T = \max_{n \in N} T_n$$
preferably, the communication resource includes a transmission rate and a transmission power.
Preferably, the objective function and the corresponding constraints are:

$$\begin{aligned}
\mathrm{P1}:\quad & \min_{x,\,f,\,r,\,p}\; T \\
\mathrm{s.t.}\quad
& \mathrm{C1}:\; T_n \le T_n^{\max}, \;\forall n \in N \\
& \mathrm{C2}:\; 0 \le x_n \le 1,\; 0 \le x_{n,s} \le 1, \;\forall n \in N,\, s \in S \\
& \mathrm{C3}:\; E_n^{l} + E_{n,s}^{t} \le E_n^{\max}, \;\forall n \in N \\
& \mathrm{C4}:\; \textstyle\sum_{n\in N} f_{n,s} \le f_s^{\max},\;\; \sum_{s\in S} f_{n,c} \le f_c^{\max} \\
& \mathrm{C5}:\; 0 \le p_n \le p_n^{\max},\;\; 0 \le p_{n,s} \le p_{n,s}^{\max} \\
& \mathrm{C6}:\; r_{n,s} \le B\log_2(1+\gamma_{n,s}), \;\forall n,\, s \\
& \mathrm{C7}:\; r_{s,c} \le B\log_2(1+\gamma_{s,c}), \;\forall s
\end{aligned}$$

where T is the total processing delay of all mobile device tasks I; constraint C1 states that each task must be completed within its deadline; constraint C2 gives the value range of the pipeline offloading policy; C3 ensures that the energy required to complete the task cannot exceed the upper limit of the mobile device's available energy; C4 states that the computing-resource allocation cannot exceed the upper limits of the edge node's and the cloud center's computing resources; C5 states that the transmission powers of the mobile device and the edge node cannot exceed their upper bounds; and C6 and C7 state that the transmission rates from the mobile device to the edge node and from the edge node to the cloud center, respectively, cannot exceed the theoretical upper bound.
Preferably, obtaining the optimal task pipeline offloading strategy and the optimal computing- and communication-resource allocation strategy based on the objective function, the constraints, the current task amount of each mobile device, the computing resources required by the task, and the computing power $f_n$ of mobile device n comprises:

converting the objective function and the constraints into a convex optimization problem to obtain an optimized objective function and constraints;

and inputting the current task amount of each mobile device, the computing resources required by the task, and the computing capacities of the mobile device, the edge node, and the cloud center into the optimized objective function and constraints to obtain the optimal task pipeline offloading strategy and the optimal computing- and communication-resource allocation strategy.
The invention also provides a pipeline-based cloud-edge-end collaborative offloading system, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor implements the steps of the above method when executing the computer program.
(III) advantageous effects
The invention provides a pipeline-based cloud-edge-end collaborative offloading method and system. Compared with the prior art, it has the following beneficial effects:

The invention distributes tasks through a pipeline offloading strategy: for any task, the mobile device first judges whether enough resources are available locally; when it has sufficient capacity, it processes the entire task on its own; otherwise it processes part of the task according to its own computing capacity and offloads the remainder to the nearest edge node; the MEC on the edge node decides how much of the task to process according to the received task amount and its own computing resources; if the MEC cannot process the whole offloaded task, it processes the portion matching its computing capacity and offloads the rest to a cloud center with sufficient computing resources. Distributing tasks in this pipelined way effectively reduces the communication resources consumed in transmitting delay-sensitive tasks between mobile devices and edge nodes and between edge nodes and the cloud center, thereby shortening transmission time and reducing delay. Meanwhile, the pipeline-based cloud-edge-end collaborative offloading architecture jointly considers the pipeline offloading strategy, computing resources, and communication resources, and formulates the problem of minimizing the total waiting delay of all mobile devices.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a block diagram of the pipeline-based cloud-edge-end collaborative offloading method according to an embodiment of the present invention;
FIG. 2 is a diagram of a cloud-edge-end collaboration framework comprising one CC, S ENs, and N MDs;
Fig. 3 is a schematic diagram of the delays of task n in mobile-device local computing, edge computing, and cloud-center computing;
FIG. 4.1 is a line graph of delay versus the number of MDs;
FIG. 4.2 is a line graph of delay versus the number of ENs;
FIG. 4.3 is a line graph of delay versus task load;
FIG. 4.4 is a line graph of delay versus the maximum MD transmit power;
Fig. 4.5 is a line graph of delay versus the maximum EN transmit power.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The pipeline-based cloud-edge-end collaborative offloading method provided by the embodiments of the present application solves the technical problem that existing computation offloading methods cannot meet the timeliness requirements of delay-sensitive tasks, and achieves smaller system delay.
In order to solve the technical problems, the general idea of the embodiment of the application is as follows:
the cooperative computing mode combining the edge node and the cloud center is an effective method for solving the problems that the computing capacity of the mobile equipment is limited and the user requirements cannot be effectively met. The method is called computation offloading, and when task offloading is performed in existing computation offloading, the computation capability and power consumption limitation of a mobile terminal, the computation capability and power consumption limitation of an edge node, and the like are mainly considered, so that the aim of minimizing the total waiting delay of all mobile devices is achieved. However, in practical application, the calculation unloading method cannot meet the requirement of the time sensitivity of the time delay sensitive task. Therefore, the embodiment of the present invention provides a pipeline-based cloud-edge-end cooperative offloading method to solve the above problem.
In order to better understand the technical scheme, the technical scheme is described in detail in the following with reference to the attached drawings of the specification and specific embodiments.
An embodiment of the invention provides a pipeline-based cloud-edge-end collaborative offloading method that distributes tasks through a pipeline offloading strategy. The pipeline offloading strategy is as follows: for any task, the mobile device first judges whether enough resources are available locally; when it has sufficient capacity to process the task, it processes the entire task on its own; otherwise, it processes part of the task according to its own computing capacity while offloading the remaining task to the nearest edge node over the access link; after receiving the offloading request of the mobile device, the MEC on the edge node decides how much of the task to process according to the received task amount and its own computing resources; if the MEC cannot process all of the offloaded task, it computes the portion matching its computing capacity, and the edge node offloads the remainder to a cloud center with sufficient computing resources over the fronthaul link. As shown in FIG. 1, the method comprises steps S1-S5.

S1, acquiring task data and computing resource data, and setting variable parameters based on them, wherein the variable parameters include the set of all mobile device tasks I, the current task amount $I_n$ of mobile device n, and the computing power $f_n$ of mobile device n;

S2, according to the cooperative allocation principle of task execution amounts, allocating the current task amount $I_n$ of mobile device n among mobile device n, the edge node, and the cloud center, and constructing, from the variable parameters and their initial values, delay models associated with the task execution amounts allocated to each;

S3, obtaining, according to the delay models, a maximum delay model for when all the mobile device tasks I have been offloaded and executed;

S4, defining the problem of jointly optimizing the task pipeline offloading strategy and the allocation of computing and communication resources, with the objective of minimizing the total waiting delay of all mobile devices, and converting the allocation problem into an objective function and corresponding constraints according to the maximum delay model;

S5, obtaining the optimal task pipeline offloading strategy and the optimal computing- and communication-resource allocation strategy based on the objective function, the constraints, the current task amount of each mobile device, the computing resources required by the task, and the computing power $f_n$ of mobile device n.
By distributing tasks through the pipeline offloading strategy, the embodiment of the invention effectively reduces the communication resources consumed in transmitting delay-sensitive tasks between mobile devices and edge nodes and between edge nodes and the cloud center, thereby shortening the transmission time of delay-sensitive tasks and reducing delay. Meanwhile, the pipeline-based cloud-edge-end collaborative offloading architecture of the embodiment jointly considers the pipeline offloading strategy, computing resources, and communication resources, and formulates the problem of minimizing the total waiting delay of all mobile devices.
In one embodiment, step S1 of acquiring the task data and the computing resource data and setting the variable parameters based on them comprises:

acquiring task data and computing resource data by web crawling or manual entry. The task data mainly comprise: the data size of the task, the computing resources required by the task, the deadline for completing the task, and so on. The computing resource data comprise: the number of edge nodes, the number of mobile devices, the computing power of the edge nodes, the computing power of the cloud center, the computing resources allocated by the edge nodes to the mobile devices, the computing resources allocated by the cloud center to the edge nodes, and so on.

The variable parameters include: the set of mobile device tasks I, the current task amount $I_n$ of mobile device n, the computing resources $\psi_n$ required by the task and the deadline $T_n^{\max}$ for its completion, the number of edge nodes S, the number of mobile devices N, the computing power $f_n$ of the mobile device, the computing-resource upper limit $f_s^{\max}$ of edge node s and the computing-resource upper limit $f_c^{\max}$ of the cloud center, as well as, among the computing resources, the computing power $f_{n,s}$ allocated by edge node s to mobile device n and the computing power $f_{n,c}$ allocated by the cloud center to edge node s.
In one embodiment, step S2 constructs the delay models associated with the task execution amounts allocated to mobile device n, the edge node, and the cloud center according to the cooperative allocation principle of task execution amounts and the variable parameters. The specific implementation is as follows:

the computation offloading problem in a mobile edge computing network is modeled as a pipeline-based cloud-edge-end collaborative offloading problem; based on the cooperative allocation principle of task execution amounts, the current task amount $I_n$ of mobile device n is allocated among mobile device n, the edge node, and the cloud center, and the delay models associated with the task execution amounts allocated to each are constructed from the variable parameters and their initial values.

The cooperative allocation principle of task execution amounts means: current task amount $I_n$ = task execution amount of mobile device n + task execution amount of the edge node + task execution amount of the cloud center. A minimal sketch of this split appears below.
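For illustration only (not part of the patent text), the following Python sketch expresses the cooperative allocation principle using the split fractions $x_n$ and $x_{n,s}$ that the pipeline offloading policy introduces later; the function and variable names are chosen for this sketch.

```python
def split_task(I_n: float, x_n: float, x_ns: float) -> tuple[float, float, float]:
    """Split task amount I_n among local, edge, and cloud shares.

    x_n  : fraction processed locally on mobile device n
    x_ns : fraction of the remainder processed on edge node s
    """
    local = x_n * I_n
    edge = (1 - x_n) * x_ns * I_n
    cloud = (1 - x_n) * (1 - x_ns) * I_n
    assert abs(local + edge + cloud - I_n) < 1e-9  # shares always sum to I_n
    return local, edge, cloud

# Example: a 10 MB task, 30% local, then 60% of the remainder on the edge
print(split_task(10.0, 0.3, 0.6))  # (3.0, 4.2, 2.8)
```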
In the embodiment of the invention, the computation offloading problem in a mobile edge computing network is modeled as a pipeline-based cloud-edge-end collaborative offloading problem. By exploiting the difference in computing power among the cloud, the edge, and the terminal, a task generated by a mobile device (MD) is offloaded over wireless fronthaul and access links to the cloud center (CC) and edge nodes (ENs) for cooperative processing. Specifically, if the MD offloads a task to the mobile edge computing (MEC) server on an EN but the MEC server's computing resources are exhausted, the offloaded task should be further divided, and the EN offloads the remainder to the CC, which has sufficient computing resources.
On this basis, a cloud-edge-end network framework is constructed. As shown in FIG. 2, a cloud-edge-end collaboration framework comprising one CC, S ENs, and N MDs is considered. The CC is denoted by c, and the sets $S = \{1, 2, \ldots, S\}$ and $N = \{1, 2, \ldots, N\}$ denote the set of ENs and the set of MDs, respectively. It is assumed that each MD is associated with the MEC server on one EN, each MD is connected to its EN through a wireless access link, and the ENs offload data to the CC through different fronthaul links. In a real network, the computing power of the CC server is ample, whereas the MECs on the ENs are limited in their data processing and communication abilities compared with the CC. Assume each MD has one delay-sensitive task to process. The task of MD $n$ ($n \in N$) is thus represented by the parameters $(I_n, \psi_n, T_n^{\max})$, where $I_n$ is the data size of task n, $\psi_n$ is the total amount of computing resources (i.e., the number of CPU cycles) required to process task n, and $T_n^{\max}$ is the deadline for completing the task.
Generally, the geographical locations of the CC and the ENs are fixed, and they can draw power from nearby supply equipment, so an uninterrupted power supply is assumed.
The delay model of mobile device n is:

$$T_n^{l} = \frac{\psi_n^{l}}{f_n}$$

where $T_n^{l}$ denotes the delay for MD n to execute its share of the offloaded task, $\psi_n^{l}$ denotes the computing resources required for local computation by MD n, and $f_n$ denotes the computing power of MD n.

The transmission delay and the computation delay of the task offloaded from MD n to EN s are:

$$T_{n,s}^{t} = \frac{I_n^{s}}{r_{n,s}}, \qquad T_{n,s}^{c} = \frac{\psi_n^{s}}{f_{n,s}}$$

The delay model of the edge node is:

$$T_{n,s} = T_{n,s}^{t} + T_{n,s}^{c}$$

where $T_{n,s}^{t}$ denotes the transmission delay of offloading the task from MD n to EN s, $T_{n,s}^{c}$ denotes the computation delay of the task offloaded from MD n to EN s, $T_{n,s}$ denotes the delay for EN s to execute the offloaded task, $I_n^{s}$ denotes the size of the data computed by the EN, $r_{n,s}$ denotes the transmission rate between MD n and EN s, $\psi_n^{s}$ denotes the computing resources required for the EN computation, and $f_{n,s}$ denotes the computing power allocated by EN s to MD n.

The transmission delay and the computation delay of the task offloaded from EN s to CC c are:

$$T_{s,c}^{t} = \frac{I_n^{c}}{r_{s,c}}, \qquad T_{s,c}^{c} = \frac{\psi_n^{c}}{f_{n,c}}$$

The delay model of the cloud center is:

$$T_{s,c} = T_{s,c}^{t} + T_{s,c}^{c}$$

where $T_{s,c}^{t}$ denotes the transmission delay of offloading the task from EN s to CC c, $T_{s,c}^{c}$ denotes the computation delay of the task offloaded from EN s to CC c, $T_{s,c}$ denotes the delay for CC c to execute the offloaded task, $I_n^{c}$ denotes the size of the data computed by the CC, $r_{s,c}$ denotes the transmission rate between EN s and CC c, $\psi_n^{c}$ denotes the computing resources required for the CC computation, and $f_{n,c}$ denotes the computing power that CC c allocates to EN s for task n.
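For illustration, a short Python sketch of these three delay models is given below; the variable names mirror the symbols above, and all inputs are supplied by the caller rather than taken from the patent.

```python
def local_delay(psi_l: float, f_n: float) -> float:
    """Local delay on MD n: required CPU cycles over local CPU frequency."""
    return psi_l / f_n

def edge_delay(I_s: float, psi_s: float, r_ns: float, f_ns: float) -> float:
    """Edge-node delay: transmission from MD n to EN s plus edge computation."""
    t_transmit = I_s / r_ns    # T^t_{n,s} = I^s_n / r_{n,s}
    t_compute = psi_s / f_ns   # T^c_{n,s} = psi^s_n / f_{n,s}
    return t_transmit + t_compute

def cloud_delay(I_c: float, psi_c: float, r_sc: float, f_nc: float) -> float:
    """Cloud-center delay: transmission from EN s to CC c plus cloud computation."""
    return I_c / r_sc + psi_c / f_nc
```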
in the embodiment of the present invention, according to the cooperative allocation principle of the task execution amount and the variable parameter, the energy consumption required by the calculation of the mobile device n and the transmission energy consumption for unloading the task from MD to EN can also be constructed, which are expressed as:
Figure GDA00026495985400001024
Figure GDA00026495985400001025
wherein:
Figure GDA00026495985400001026
representing the power consumption required for the mobile device n calculation,
Figure GDA00026495985400001027
κ(f n ) 2 the energy consumption of a CPU is one turn, kappa represents an effective switching capacitor, and the value of the effective switching capacitor is determined by a chip structure;
f n the computing power of the MDn is represented,
Figure GDA00026495985400001028
Figure GDA00026495985400001029
representing the computational resources required for local computation by the mobile device n,
Figure GDA00026495985400001030
Figure GDA00026495985400001031
representing the transmission energy consumption for unloading the task from the MD to the EN;
Figure GDA0002649598540000111
indicating the computational resources required for the EN computation,
Figure GDA0002649598540000112
Figure GDA0002649598540000113
representing the propagation delay of the offloading of tasks from MDn to ENs,
Figure GDA0002649598540000114
p n representing the transmit power of the MDn.
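A hedged numeric illustration of this energy model follows; $\kappa$, the CPU frequency, and the other constants are placeholder values of the order commonly used in MEC studies, not values stated in the patent.

```python
kappa = 1e-27        # effective switched capacitance (chip-dependent placeholder)
f_n = 1e9            # MD CPU frequency: 1 GHz (placeholder)
psi_l = 4e8          # CPU cycles executed locally (placeholder)
p_n = 0.1            # MD transmit power: 100 mW (placeholder)
t_transmit = 0.05    # transmission delay T^t_{n,s} in seconds (placeholder)

E_local = kappa * f_n**2 * psi_l   # (2.8): E^l_n = kappa * f_n^2 * psi^l_n
E_tx = p_n * t_transmit            # (2.9): E^t_{n,s} = p_n * T^t_{n,s}
print(E_local, E_tx)               # 0.4 J of compute energy, 0.005 J of radio energy
```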
The transmission rate $r_{s,c}$ between EN s and CC c and the transmission rate $r_{n,s}$ between MD n and EN s in the above equations are calculated as follows.

In the cloud-edge-end network framework, the MDs and ENs communicate by OFDMA (orthogonal frequency division multiple access). Specifically, for the access links, subcarriers are multiplexed among the MDs, with each EN using a separate orthogonal subcarrier. For the fronthaul links there are multiple subcarriers; one subcarrier may be allocated to multiple ENs at the same time, and each EN is allowed to use at most one subcarrier. The embodiment of the invention considers the special case where all ENs share the same subcarrier.

The wireless access channels between the MDs and ENs and the fronthaul channels between the ENs and the CC are modeled as independent identically distributed (i.i.d.) Rayleigh channels. The channel gain of the access link between MD n and EN s can be expressed as:

$$g_{n,s} = G\,\beta_{n,s}\,\Gamma_{n,s}\,d_{n,s}^{-\alpha}$$

where G denotes the path-loss constant; $\beta_{n,s}$ denotes the fast-fading gain, which follows an exponential distribution; $\Gamma_{n,s}$ denotes the slow-fading gain, which follows a lognormal distribution; $d_{n,s}$ denotes the distance from MD n to EN s; and $\alpha$ denotes the path-loss exponent. Similarly, $g_{s,c}$ denotes the channel gain between EN s and CC c.

The SINR (signal-to-interference-plus-noise ratio) of MD n at EN s and that of EN s at CC c are, respectively,

$$\gamma_{n,s} = \frac{p_n\, g_{n,s}}{\sum_{j \in N\setminus\{n\}} p_j\, g_{j,s} + \sigma^2} \tag{2.2}$$

$$\gamma_{s,c} = \frac{p_{n,s}\, g_{s,c}}{\sum_{k \in S\setminus\{s\}} p_{n,k}\, g_{k,c} + \sigma^2} \tag{2.3}$$

where $p_n$ and $p_{n,s}$ denote the transmit powers of MD n and EN s, respectively, and $\sigma^2$ is the noise power. It can be seen that the first terms of the denominators in equations (2.2) and (2.3) represent inter-cell interference in the access link and inter-EN interference in the fronthaul link, respectively. Thus, the transmission rate between MD n and EN s and the transmission rate between EN s and CC c can be expressed as:

$$r_{n,s} = B \log_2(1 + \gamma_{n,s}) \tag{2.13}$$

$$r_{s,c} = B \log_2(1 + \gamma_{s,c}) \tag{2.14}$$

where B denotes the bandwidth.
In one embodiment, step S3 obtains, according to the delay models, the maximum delay model for when all the mobile device tasks I have been offloaded and executed. The specific implementation is as follows:
for any task, the MD first determines whether there are sufficient resources available locally. When an MD has sufficient capacity to process a task, it will process the entire task on its own. Otherwise, the MD will process part of the tasks according to its own computing power while offloading the remaining tasks to the nearest EN through the access link. After receiving the MD offload request, the MEC in EN will decide how many tasks to process according to the received task amount and its own computing resources. If the MEC is unable to process all the offloaded tasks, the MEC will compute tasks that match its computational capabilities. The EN then offloads the remaining tasks to the CCs that have sufficient computing resources through the fronthaul link.
The set $x = \{x_n, x_{n,s} \mid n \in N, s \in S\}$ represents the pipeline offloading policy, where $x_n I_n$ ($x_n \in [0,1]$) denotes the part of the task processed locally, $(1-x_n)\,x_{n,s} I_n$ denotes the part of the remaining task $(1-x_n)I_n$ that EN s processes ($x_{n,s} \in [0,1]$), and the rest, $(1-x_n)(1-x_{n,s})I_n$, is offloaded to the CC for processing. The pipeline offloading policy directly determines whether a task can be completed within its deadline; the delays of task n in mobile-device local computing, edge computing, and cloud-center computing are shown in FIG. 3.
According to the designed pipeline offloading policy, the task sizes in local computing, edge computing, and cloud-center computing can be modeled as:

$$I_n^{l} = x_n I_n \tag{3.1}$$

$$I_n^{s} = (1-x_n)\,x_{n,s} I_n \tag{3.2}$$

$$I_n^{c} = (1-x_n)(1-x_{n,s}) I_n \tag{3.3}$$

The computing resources required for mobile-device local computing, edge computing, and cloud-center computing can be rewritten as:

$$\psi_n^{l} = x_n \psi_n \tag{3.4}$$

$$\psi_n^{s} = (1-x_n)\,x_{n,s}\psi_n \tag{3.5}$$

$$\psi_n^{c} = (1-x_n)(1-x_{n,s})\psi_n \tag{3.6}$$
In addition, the energy consumption and the processing delay of the task need to be re-modeled, as follows.

(1) Mobile-device local computing

For the local computing scheme, part of the task, i.e., $x_n I_n$, is processed on MD n, and the corresponding computation delay and computation energy are rewritten as:

$$T_n^{l} = \frac{x_n \psi_n}{f_n} \tag{3.7}$$

$$E_n^{l} = \kappa (f_n)^2\, x_n \psi_n \tag{3.8}$$

(2) Edge computing

If the mobile device cannot compute the whole task locally, MD n needs to offload the remaining task $(1-x_n)I_n$ to the nearest EN. Since the computing resources of the MEC server deployed on EN s are limited, EN s processes a fraction $x_{n,s}$ of the remaining task. The task of size $(1-x_n)I_n$ is offloaded from MD n to EN s through a wireless channel, and the transmission delay can be redefined as:

$$T_{n,s}^{t} = \frac{(1-x_n)\, I_n}{r_{n,s}} \tag{3.9}$$

In addition, the computation delay for EN s to process the fraction $x_{n,s}$ of the remaining task can be redefined as:

$$T_{n,s}^{c} = \frac{(1-x_n)\,x_{n,s}\,\psi_n}{f_{n,s}} \tag{3.10}$$

The transmission energy for offloading the task from MD n to EN s can be defined as:

$$E_{n,s}^{t} = p_n T_{n,s}^{t} \tag{3.11}$$

(3) Cloud computing

When an EN cannot compute all of its share, it offloads all remaining tasks $(1-x_n)(1-x_{n,s})I_n$ to CC c. The task of size $(1-x_n)(1-x_{n,s})I_n$ is offloaded from EN s to CC c through the wireless fronthaul channel, and the transmission delay can be rewritten as:

$$T_{s,c}^{t} = \frac{(1-x_n)(1-x_{n,s})\, I_n}{r_{s,c}} \tag{3.12}$$

The computation delay for CC c to process the remaining task can be rewritten as:

$$T_{s,c}^{c} = \frac{(1-x_n)(1-x_{n,s})\,\psi_n}{f_{n,c}} \tag{3.13}$$

At this point, the total delay for MD n to complete its task can be expressed as:

$$T_n = \max\left\{\,T_n^{l},\;\; T_{n,s}^{t} + T_{n,s}^{c},\;\; T_{n,s}^{t} + T_{s,c}^{t} + T_{s,c}^{c}\,\right\} \tag{3.14}$$

Thus, the total processing delay of all MDs (i.e., the maximum delay model) can be modeled as:

$$T = \max_{n \in N} T_n \tag{3.15}$$
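The following illustrative Python function puts (3.7)-(3.15) together for one device; it is a sketch under the notation above, with all inputs supplied by the caller rather than taken from the patent.

```python
def total_delay(I_n, psi_n, x_n, x_ns, f_n, f_ns, f_nc, r_ns, r_sc):
    """Total delay (3.14) of MD n under the pipeline split (x_n, x_ns)."""
    t_local = x_n * psi_n / f_n                           # (3.7) local computation
    t_tx_edge = (1 - x_n) * I_n / r_ns                    # (3.9) MD -> EN transmission
    t_edge = (1 - x_n) * x_ns * psi_n / f_ns              # (3.10) edge computation
    t_tx_cloud = (1 - x_n) * (1 - x_ns) * I_n / r_sc      # (3.12) EN -> CC transmission
    t_cloud = (1 - x_n) * (1 - x_ns) * psi_n / f_nc       # (3.13) cloud computation
    # The three pipeline branches run in parallel; the task finishes
    # when the slowest branch finishes.
    return max(t_local, t_tx_edge + t_edge, t_tx_edge + t_tx_cloud + t_cloud)

# (3.15): the system-level delay is the maximum of total_delay(...) over all MDs.
```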
in an embodiment, S4 defines an allocation problem of the joint optimization task pipeline offload policy, the computation resource, the transmission rate, and the transmission power, and converts the allocation problem into an objective function and a corresponding constraint condition according to the maximum latency model with a goal of minimizing the total latency problem of all mobile devices. The method specifically comprises the following steps:
under the condition of simultaneously meeting the deadline and energy constraints of all MD tasks, the purpose of minimizing time delay is achieved by jointly designing a pipeline unloading strategy, computing resources, transmission rate allocation and transmission power allocation. Specifically, the problem can be modeled as:
Figure GDA0002649598540000138
wherein T is the total processing latency of all mobile device tasks I, and constraint C1 indicates that the tasks need to be completed within the deadline; constraint C2 represents the range of values for the pipeline offload policy; c3 ensures that the amount of power required to accomplish this task cannot exceed the upper limit of the mobile device's existing power; c4 indicates that the allocation of computing resources cannot exceed the upper limit of the computing resources of the edge nodes and the cloud center; c5 indicates that the transmission power of the mobile device and the edge node cannot exceed an upper bound; c6 and C7 indicate that the transmission rate of the mobile device to the edge node and the edge node to the cloud center, respectively, cannot exceed the theoretical upper bound.
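As a hedged illustration of what C1-C7 check, the sketch below validates a candidate solution for one device against the constraint set; the dictionary keys and bound values are caller-supplied placeholders introduced for this example only.

```python
def feasible(sol: dict, bounds: dict) -> bool:
    """Check a candidate (x, f, r, p) for one MD against C1-C7 (illustrative)."""
    checks = [
        sol["T_n"] <= bounds["T_max"],                      # C1: deadline
        0 <= sol["x_n"] <= 1 and 0 <= sol["x_ns"] <= 1,     # C2: split range
        sol["E_local"] + sol["E_tx"] <= bounds["E_max"],    # C3: energy budget
        sol["f_ns"] <= bounds["f_s_max"]
        and sol["f_nc"] <= bounds["f_c_max"],               # C4: compute caps
        0 <= sol["p_n"] <= bounds["p_n_max"]
        and 0 <= sol["p_ns"] <= bounds["p_ns_max"],         # C5: power caps
        sol["r_ns"] <= bounds["shannon_ns"],                # C6: access-link rate
        sol["r_sc"] <= bounds["shannon_sc"],                # C7: fronthaul rate
    ]
    return all(checks)
```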
In one embodiment, step S5 obtains the optimal task pipeline offloading strategy and the optimal computing- and communication-resource allocation strategy based on the objective function, the constraints, the current task amount of each mobile device, the computing resources required by the task, and the computing power $f_n$ of the mobile device. The communication resources comprise the transmission rate and the transmit power.

The optimization variables in problem P1 include the pipeline offloading policy $(x_n, x_{n,s})$, the computing resources $(f_{n,s}, f_{n,c})$, the transmission rates $(r_{n,s}, r_{s,c})$, and the transmit powers $(p_n, p_{n,s})$. These variables are coupled, making P1 a non-convex problem that is not easy to solve. Therefore, P1 needs to be transformed into a tractable form so it can be solved efficiently, i.e., the objective function and constraints are transformed into a convex optimization problem.
To solve the non-convex problem P1, a set of slack variables $\{a_{n,1}, a_{n,2}, a_{n,3}, a_{n,4}, a_{n,5}\}$ is first introduced as upper bounds of the various delay variables, which are then handled by the SCA (successive convex approximation) method. Equations (3.14) and (3.15), which represent the total delay, can thus be rewritten as:

$$A_n = \max\{\,a_{n,1},\; a_{n,2}+a_{n,3},\; a_{n,2}+a_{n,4}+a_{n,5}\,\} \tag{5.1}$$

$$T = \max_{n \in N} A_n \tag{5.2}$$

where the upper bound of formula (3.7) can be defined as $a_{n,1}$, as follows:

$$\frac{x_n \psi_n}{f_n} \le a_{n,1} \tag{5.3}$$

Similarly, the upper bounds of equations (3.9), (3.10), (3.12), and (3.13) can be defined as, respectively:

$$\frac{(1-x_n)\, I_n}{r_{n,s}} \le a_{n,2} \tag{5.4}$$

$$\frac{(1-x_n)\,x_{n,s}\,\psi_n}{f_{n,s}} \le a_{n,3} \tag{5.5}$$

$$\frac{(1-x_n)(1-x_{n,s})\, I_n}{r_{s,c}} \le a_{n,4} \tag{5.6}$$

$$\frac{(1-x_n)(1-x_{n,s})\,\psi_n}{f_{n,c}} \le a_{n,5} \tag{5.7}$$
the optimization problem P1 can then be equivalently transformed into:
Figure GDA0002649598540000151
since the objective function is non-convex, the P2 problem is still difficult to solve, and the P2 non-convex optimization problem can be further transformed by using the SCA method.
To handle problem P2, the non-convex inequalities (5.4)-(5.7) need to be converted into convex form. To this end, a slack variable $\theta_n$ is first introduced to split inequality (5.4) into two separable inequalities, as follows:

$$\theta_n^2 \ge (1-x_n)\, I_n \tag{5.9}$$

$$\theta_n^2 \le a_{n,2}\, r_{n,s} \tag{5.10}$$

Next, an iterative sequence $\theta_{n,0}$ is generated, and the quadratic term on the left of inequality (5.9) is expanded with a first-order Taylor formula, giving:

$$2\theta_{n,0}\theta_n - \theta_{n,0}^2 \ge (1-x_n)\, I_n \tag{5.11}$$

Inequality (5.10) can be converted into a second-order cone constraint, defined as follows:

$$\left\| \left[\, 2\theta_n,\;\; a_{n,2} - r_{n,s} \,\right] \right\|_2 \le a_{n,2} + r_{n,s} \tag{5.12}$$
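To see why the Taylor step gives a safe (conservative) restriction, the following hedged Python check verifies that the linearization $2\theta_0\theta - \theta_0^2$ never exceeds $\theta^2$, so any point satisfying the linearized constraint (5.11) also satisfies the original one (5.9); the test values are arbitrary.

```python
import random

# First-order Taylor expansion of theta^2 around theta0:
#   theta^2 >= 2*theta0*theta - theta0**2   (equality at theta = theta0)
# SCA replaces the non-convex quadratic side with this affine lower bound.
for _ in range(10_000):
    theta0 = random.uniform(0.01, 10.0)
    theta = random.uniform(0.01, 10.0)
    affine_bound = 2 * theta0 * theta - theta0**2
    assert theta**2 >= affine_bound - 1e-12  # (theta - theta0)^2 >= 0
print("affine lower bound verified on random samples")
```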
through the above transformation, the inequality (5.4) has become a convex constraint. Then, (5.5) - (5.7) can be treated in a similar way, as follows:
(5.5) conversion to
Figure GDA0002649598540000155
Figure GDA0002649598540000156
Figure GDA0002649598540000157
(5.6) conversion to
Figure GDA0002649598540000158
Figure GDA0002649598540000159
Figure GDA0002649598540000161
(5.7) conversion to
Figure GDA0002649598540000162
Figure GDA0002649598540000163
Figure GDA0002649598540000164
Wherein: pi nn And τ n And λ n,0 Is a relaxation variable, pi n,0n,0 And τ n,0 Is an iterative sequence. After the above series of transformations, the objective function becomes a convex function, and the problem P2 can be redefined as:
Figure GDA0002649598540000165
however, the optimization problem P3 is still non-convex because constraints C6 and C7 are non-convex. Next, the non-convex constraints C6 and C7 are transformed into convex form by mathematical transformation. First, a relaxation variable β is introduced n And b n . Constraint C6 may then be reconverted to the form:
r n,s ≤B log 2 (1+β n ) (5.19)
j∈N\{n} p j g j,s2 ≤b n (5.20) formula (I) < beta >, ( n ≤p n g n,s /b n And approximated by a first order taylor expansion:
Figure GDA0002649598540000166
in the formula p n,0 And b n,0 Are each p n And b n Of the sequence of iterations of (c). Likewise, non-convex constraint C6 may be converted into a convex constraint form:
Figure GDA0002649598540000167
Figure GDA0002649598540000168
in the formula mu n And c n Is the variable of the amount of relaxation,
Figure GDA0002649598540000169
similarly approximated by a first order Taylor expansion:
Figure GDA0002649598540000171
in inequality (5.24)
Figure GDA0002649598540000172
And c n,0 Are each p n,s And c n Of the sequence of iterations of (a). Thus, through the series of transformation problems described above, P3 can be re-modeled as follows:
Figure GDA0002649598540000173
the current task quantity of each MD, the calculation resources required by the task and the respective calculation capacities of the MD, the EN and the CC are used as input and input into a formula (5.25), and the optimal solution of the problem under all constraint conditions is solved, namely, the optimal task pipeline unloading strategy, the calculation resources, the transmission rate and the transmission power distribution strategy are output.
To verify the effectiveness of the embodiment of the present invention, the following experimental demonstration is performed, specifically including:

First, to explore the advantages of computation offloading with EN and CC collaboration, several special cases are further studied: 1) all tasks are processed locally, 2) all tasks are offloaded to the EN for processing, and 3) all tasks are processed cooperatively by the local device and the EN, as follows.
Case 1: all tasks are processed locally at the mobile device, i.e., $x_n = 1$, $x_{n,s} = 0$.

In this case, the total delay of processing task n can be denoted as $T_n = \psi_n / f_n$. Letting $T_n \le T_n^{\max}$, we obtain $\psi_n \le T_n^{\max} f_n$. That is, when all tasks are processed locally, MD n can only handle tasks requiring at most $T_n^{\max} f_n$ computing resources. If $\psi_n > T_n^{\max} f_n$, local computation fails and the task needs to be offloaded. Thus, case 1 is only suitable for delay-tolerant tasks.
Case 2: all tasks are offloaded to the EN for processing, i.e., $x_n = 0$, $x_{n,s} = 1$.

In this case, the total delay of processing task n can be denoted as $T_n = I_n / r_{n,s} + \psi_n / f_{n,s}$. Let $r_{n,s} = R_{n,s}$, where $R_{n,s} = B\log_2(1+\gamma_{n,s})$ is the achievable rate. The remaining energy of MD n is used for offloading data, so the transmission-energy expression (2.9) can be converted into $E_n(p_n) = p_n I_n / R_{n,s}$, that is, the energy constraint becomes $p_n I_n / R_{n,s} \le E_n^{\max}$. For this, the verification process proposes Theorem 6.1, which yields the optimal power allocation strategy for case 2.

Theorem 6.1: if $p_n^{*}$ is the solution of the equation $E_n(p_n) = E_n^{\max}$, then $p_n^{*}$ is also the optimal solution, with $r_{n,s} = R_{n,s}$, of the following problem:

$$\mathrm{P5}:\quad \min_{p_n}\; T_n(p_n) = \frac{I_n}{R_{n,s}} + \frac{\psi_n}{f_{n,s}} \qquad \mathrm{s.t.}\;\; E_n(p_n) \le E_n^{\max},\;\; 0 \le p_n \le p_n^{\max} \tag{5.26}$$

Proof: first, by computing the first derivative of $T_n(p_n)$, one obtains $T_n'(p_n) < 0$; clearly, $T_n(p_n)$ decreases as $p_n$ increases. Then, letting $E_n(p_n) = p_n I_n / R_{n,s}$ and computing its first derivative gives $E_n'(p_n) > 0$. Hence $T_n(p_n)$ is minimized when $p_n$ takes the largest value allowed by $E_n(p_n) \le E_n^{\max}$, i.e., where the energy budget is exhausted. A closed-form solution for $p_n$, expressed in terms of $A = I_n / B$, can thus be obtained by solving the equation

$$\frac{p_n I_n}{B\log_2\!\left(1+\gamma_{n,s}(p_n)\right)} = E_n^{\max} \tag{5.27}$$

A solution to problem P5 can be obtained by solving equation (5.27).
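Since $E_n(p_n)$ is increasing in $p_n$, the binding point of the energy budget in (5.27) can also be found numerically. The following Python bisection sketch illustrates this under a simplified interference-free rate model (SINR taken as $p\,g/\sigma^2$); all constants are placeholders, not values from the patent.

```python
import math

B, g, sigma2 = 10e6, 1e-6, 1e-9   # bandwidth, channel gain, noise (placeholders)
I_n, E_max = 8e6, 0.5             # task bits and energy budget (placeholders)

def tx_energy(p: float) -> float:
    """E_n(p) = p * I_n / (B log2(1 + p g / sigma^2)); nondecreasing in p."""
    return p * I_n / (B * math.log2(1.0 + p * g / sigma2))

lo, hi = 1e-6, 10.0               # bracket with tx_energy(lo) < E_max < tx_energy(hi)
for _ in range(100):              # bisection on the monotone energy curve
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if tx_energy(mid) < E_max else (lo, mid)
print(f"energy-binding transmit power ~ {0.5 * (lo + hi):.4f} W")
```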
By comparison with case 1, a relation between the total delays of the two cases can be obtained, which indicates when offloading the whole task to the EN yields a smaller total delay than processing it locally.
Case 3: all tasks are processed cooperatively by the local device and the EN, i.e., $x_n + x_{n,s} = 1$.

In this case, problem P4 is re-modeled as:

$$\mathrm{P6}:\quad \min_{x_n,\,p_n,\,f_{n,s}}\; \max\left\{\frac{x_n\psi_n}{f_n},\;\; \frac{(1-x_n)\, I_n}{r_{n,s}} + \frac{(1-x_n)\,\psi_n}{f_{n,s}}\right\} \qquad \mathrm{s.t.}\;\; \mathrm{C1}\text{--}\mathrm{C6}$$

The P6 problem is a convex optimization problem that can be solved with a convex optimization tool. Comparing these cases shows that processing a task cooperatively on the local device and the EN provides more powerful computing resources than processing it only locally or only on the EN, and can effectively reduce the task-processing delay.
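For illustration, a minimal CVXPY sketch of the case-3 subproblem (local-edge split only, with the rate and the edge CPU allocation held fixed) is given below; CVXPY is assumed as the convex optimization tool, and all numeric inputs are placeholders for this example.

```python
import cvxpy as cp

I_n, psi_n = 8e6, 4e9    # task bits and CPU cycles (placeholders)
f_n, f_ns = 1e9, 5e9     # local and edge CPU frequency in Hz (placeholders)
r_ns = 2e7               # access-link rate in bit/s (placeholder)

x = cp.Variable()        # fraction of the task processed locally
T_local = x * psi_n / f_n
T_edge = (1 - x) * I_n / r_ns + (1 - x) * psi_n / f_ns
# Max of two affine functions of x: a convex objective over 0 <= x <= 1.
prob = cp.Problem(cp.Minimize(cp.maximum(T_local, T_edge)), [x >= 0, x <= 1])
prob.solve()
print(x.value, prob.value)  # optimal split and the resulting minimal delay
```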
Second, the performance of the method provided by the embodiment of the invention is evaluated through multiple sets of simulations. The simulations consider a square area of 1000 m × 1000 m whose network topology covers one CC, 3 ENs, and 30 MDs. The location of the CC is fixed at the center of the network, and the ENs and MDs are randomly distributed in the area. The channel model follows the model established by 3GPP. It should be noted that most of the results in this verification process are averages obtained from a large number of Monte Carlo runs on the data. The relevant simulation parameters are listed in Table 5.1.
TABLE 5.1 Simulation parameters
In the simulations, the performance of the EN-and-CC cooperative offloading framework is compared with the following offloading schemes:

Mobile device local (Local): without offloading, each task ($i \in N$) is processed locally.

Random: each task ($i \in N$) is processed randomly, i.e., by mobile-device local, EN, or CC computation.

Random with maximum transmission rate (Random-maximum): each task ($i \in N$) is processed randomly, i.e., by mobile-device local, EN, or CC computation, but the transmission rate takes the theoretical maximum value.

Mobile device local and edge collaboration (Local and edge): each task ($i \in N$) is processed cooperatively both locally and on the EN.

Local and edge collaboration with maximum transmission rate (Local and edge-maximum): each task ($i \in N$) is processed cooperatively both locally and on the EN, but the transmission rate takes the theoretical maximum.

Joint: tasks are processed with the pipeline-based cloud-edge-end collaborative offloading framework proposed by the embodiment of the invention.

Joint with maximum transmission rate (Joint-maximum): tasks are processed with the pipeline-based cloud-edge-end collaborative offloading framework proposed by the embodiment of the invention, but the transmission rate takes the theoretical maximum.
Figure 4.1 shows delay versus the number of MDs. When the computation load is large and the mobile devices and ENs cannot process all tasks within the deadline, tasks must be offloaded to the CC, adding EN-to-CC transmission delay, so the delay grows as the number of MDs increases. Moreover, for schemes that do not consider CC offloading, the queuing delay of tasks increases, causing additional delay overhead.

Fig. 4.2 shows delay versus the number of ENs. The delay decreases as the number of ENs increases, because more ENs means more available MEC servers and hence more total computing resources in the system, giving each MD more MECs to choose from. However, once the number of ENs grows beyond a certain point, the computing resources tend to saturate and this advantage diminishes.

Figs. 4.1 and 4.2 show that the scheme proposed by the embodiment of the invention outperforms the other offloading schemes. Because it is a cooperative offloading scheme combining ENs and the CC, an MD's task can be handled locally, on the EN, and on the CC, and assigning reasonable offloading targets to tasks greatly reduces the queuing delay. In addition, comparing the proposed transmission-rate allocation scheme against taking the theoretical maximum transmission rate shows a gap in delay between the two, but the gap is small, indicating that the proposed method closely approaches the theoretical upper bound.

Fig. 4.3 shows delay versus task load. The delay increases with the task load: as the task load grows while the number of ENs stays the same, the computational pressure on the MEC servers increases, causing additional delay. When the task load is modest, the processing delay on the MEC server is relatively small; but as the demand for computing resources rises, balancing the tasks allocated across the MEC servers becomes more critical, and the pipeline-based cloud-edge-end collaborative offloading method of the embodiment continues to show its advantage.

Figs. 4.4 and 4.5 show delay versus the maximum MD and EN transmit powers, respectively. As the maximum transmit powers of the MDs and ENs increase, the delay decreases: from the special cases analyzed above, a larger transmit-power upper bound means more power is available to the MD, so the actual transmission rate becomes faster and the delay smaller. However, increasing the transmit power also aggravates inter-cell interference, which affects system performance. The same holds for Fig. 4.5.
The embodiment of the invention also provides a pipeline-based cloud-edge-end cooperative offloading system comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the above method when executing the computer program.
It can be understood that the pipeline-based cloud-edge-end cooperative offloading system provided by the embodiment of the invention corresponds to the above offloading method; for the explanation, examples, and verification of the related contents, refer to the corresponding passages of the method description, which are not repeated here.
In summary, compared with the prior art, the embodiments of the invention provide the following beneficial effects:
1. The embodiment of the invention distributes tasks through the pipeline offloading strategy, which effectively reduces the communication resources consumed by transmitting delay-sensitive tasks between the mobile devices and the edge nodes and between the edge nodes and the cloud center. This shortens the corresponding transmission times, reduces the overall delay, and meets the timeliness requirements of delay-sensitive tasks.
2. Based on the pipeline cloud-edge-end cooperative offloading architecture, the embodiment of the invention formulates the problem of minimizing the total waiting delay of all mobile devices by jointly considering the pipeline offloading strategy, the transmission rates, and the power allocation.
3. Existing computation offloading methods take the transmission rate directly at the Shannon limit, but this theoretical maximum cannot be reached in practical scenarios, so such an assumption leads to network congestion; the embodiment of the invention instead allocates transmission rates bounded by this limit.
4. The embodiment of the invention converts the non-convex optimization problem into a convex one using the classical successive convex approximation method and the arithmetic-geometric mean inequality, thereby obtaining the optimal task-allocation and resource-allocation rule and reducing the system delay (a worked illustration of this step follows this list).
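As a worked illustration of effect 4, one standard way to pair successive convex approximation with the arithmetic-geometric mean inequality is to majorize a non-convex bilinear term by a convex quadratic that is tight at the current iterate. The symbols $u$, $v$, and $\theta$ below are generic stand-ins for this sketch, not variables taken from the embodiment. For any $u, v > 0$ and $\theta > 0$, the AM-GM inequality gives

$$uv = \sqrt{\theta u^{2} \cdot \frac{v^{2}}{\theta}} \;\le\; \frac{1}{2}\left(\theta u^{2} + \frac{v^{2}}{\theta}\right),$$

with equality if and only if $\theta u^{2} = v^{2}/\theta$. Choosing $\theta^{(k)} = v^{(k)}/u^{(k)}$ at iterate $k$ makes the bound tight at the current point $(u^{(k)}, v^{(k)})$; replacing each bilinear term by this convex upper bound and re-solving yields a sequence of convex problems whose objective values are non-increasing, which is the essence of the successive convex approximation step.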
It should be noted that, from the above description of the embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the essence of the above technical solutions, or the part that contributes over the prior art, may be embodied in the form of a software product. The software product may be stored in a computer-readable storage medium, such as a ROM/RAM, magnetic disk, or optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in each embodiment or in certain parts of the embodiments.
In the embodiments of the present invention, relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual relationship or order between such entities or actions. Likewise, the terms "comprises," "comprising," and any variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A pipeline-based cloud-edge-end cooperative offloading method, characterized in that tasks are distributed through a pipeline offloading strategy, the pipeline offloading strategy comprising: for any task, the mobile device first judges whether it has enough available local resources; when the mobile device has enough capacity to process the task, it processes the whole task by itself; otherwise, the mobile device processes part of the task according to its own computing capacity and offloads the remaining task to the nearest edge node; the MEC on the edge node determines how much of the task to process according to the received task amount and its own computing resources; if the MEC cannot process all of the offloaded task, it processes the portion matching its computing capacity and offloads the rest to a cloud center with sufficient computing resources; the method comprises the following steps:
S1, acquiring task data and computing resource data, and setting variable parameters based on the task data and the computing resource data, wherein the variable parameters include the set of all mobile device tasks I, the current task amount $I_n$ of mobile device n, and the computing power $f_n$ of mobile device n;
S2, according to the pipeline offloading strategy, allocating the current task amount $I_n$ of mobile device n correspondingly among mobile device n, the edge node, and the cloud center, and constructing, from the variable parameters and their initialization values, a delay model associated with the task amount allocated to each of mobile device n, the edge node, and the cloud center, wherein:
the delay model of mobile device n is:

$$T_n^{loc} = \frac{c_n^{loc}}{f_n}$$

where $T_n^{loc}$ denotes the delay for mobile device n to execute its part of the offloaded task, $c_n^{loc}$ denotes the computational resources required for local computation by mobile device n, and $f_n$ denotes the computing power of mobile device n;
the delay model of the edge node, covering the transmission delay and the computation delay of the task offloaded from mobile device n to edge node s, is:

$$T_{n,s} = T_{n,s}^{tra} + T_{n,s}^{comp} = \frac{I_{n,s}}{r_{n,s}} + \frac{c_{n,s}}{f_{n,s}}$$

where $T_{n,s}^{tra}$ denotes the transmission delay of offloading the task from mobile device n to edge node s, $T_{n,s}^{comp}$ denotes the computation delay of the task offloaded from mobile device n to edge node s, $T_{n,s}$ denotes the delay for edge node s to execute the offloaded task, $I_{n,s}$ denotes the size of the data computed by the edge node, $r_{n,s}$ denotes the transmission rate between mobile device n and edge node s, $c_{n,s}$ denotes the computational resources required for edge computing, and $f_{n,s}$ denotes the computing power allocated by edge node s to mobile device n;
the delay model of the cloud center is:

$$T_{n,c} = T_{s,c}^{tra} + T_{n,c}^{comp} = \frac{I_{n,c}}{r_{s,c}} + \frac{c_{n,c}}{f_{n,c}}$$

where $T_{s,c}^{tra}$ denotes the transmission delay of offloading the task from edge node s to cloud center c, $T_{n,c}^{comp}$ denotes the computation delay of the task offloaded from edge node s to cloud center c, $T_{n,c}$ denotes the delay for cloud center c to execute the offloaded task, $I_{n,c}$ denotes the size of the data for cloud-center computing, $r_{s,c}$ denotes the transmission rate between edge node s and cloud center c, $c_{n,c}$ denotes the computational resources required for cloud-center computing, and $f_{n,c}$ denotes the computing power that cloud center c allocates to edge node s for task n;
S3, obtaining, according to the above delay models, the maximum delay model for the offloaded execution of all mobile device tasks I;
S4, formulating the problem of jointly optimizing the task pipeline offloading strategy and the allocation of computing resources and communication resources, with the objective of minimizing the total waiting delay of all mobile devices, and converting it into an objective function and corresponding constraints according to the maximum delay model;
S5, obtaining the optimal task pipeline offloading strategy and the computing and communication resource allocation strategy based on the objective function, the constraints, the current task amount of each mobile device, the computing resources required by each task, and the computing power $f_n$ of mobile device n.
2. The pipeline-based cloud-edge-end cooperative offloading method of claim 1, wherein the variable parameters further comprise: the computing resources $\psi_n$ required by a task, the deadline $\tau_n$ for completing the task, the number of edge nodes S, the number of mobile devices N, the upper limit $f_s^{max}$ of the computing resources of edge node s, and the upper limit $f_c^{max}$ of the cloud-center computing resources.
3. The pipeline-based cloud-edge-end cooperative offloading method of claim 1, wherein the computing resources comprise the computing power $f_{n,s}$ allocated by edge node s to mobile device n and the computing resources $f_{n,c}$ allocated by the cloud center to edge node s.
4. The pipeline-based cloud-edge-end cooperative offloading method of claim 1, wherein obtaining the maximum delay model for the offloaded execution of all mobile device tasks I according to the delay models comprises:

letting the set $\mathbf{x} = \{x_n, x_{n,s} \mid n \in N, s \in S\}$ represent the pipeline offloading policy, where $x_n I_n$ ($x_n \in [0,1]$) denotes the part of the task processed locally; of the remaining task $(1-x_n)I_n$, edge node s processes the fraction $x_{n,s} \in [0,1]$, and the remainder $(1-x_n)(1-x_{n,s})I_n$ is the data offloaded to the cloud center for processing; whether the task can be completed within the deadline is directly determined by the pipeline offloading policy; the task sizes for local computing, edge computing, and cloud-center computing can be modeled as:

$$I_n^{loc} = x_n I_n, \qquad I_{n,s} = (1-x_n)\,x_{n,s}\,I_n, \qquad I_{n,c} = (1-x_n)(1-x_{n,s})\,I_n$$

the computing resources required for local computing by the mobile device, edge computing, and cloud-center computing can be rewritten as:

$$c_n^{loc} = x_n \psi_n, \qquad c_{n,s} = (1-x_n)\,x_{n,s}\,\psi_n, \qquad c_{n,c} = (1-x_n)(1-x_{n,s})\,\psi_n$$

and the task processing delays are remodeled as follows:

local computing by the mobile device: for the locally computed part, processed on mobile device n, the corresponding computation delay is rewritten as:

$$T_n^{loc} = \frac{x_n \psi_n}{f_n}$$

edge computing: if the mobile device cannot compute the whole task locally, mobile device n offloads the remaining task $(1-x_n)I_n$ to the nearest edge node; since the computing resources of the MEC server deployed on edge node s are limited, edge node s processes the fraction $x_{n,s}$ of the remaining task; data of size $(1-x_n)I_n$ is offloaded over a wireless channel, so the transmission delay of the task from mobile device n to edge node s can be redefined as:

$$T_{n,s}^{tra} = \frac{(1-x_n)\,I_n}{r_{n,s}}$$

and the computation delay of edge node s processing its fraction $x_{n,s}$ of the remaining task can be redefined as:

$$T_{n,s}^{comp} = \frac{(1-x_n)\,x_{n,s}\,\psi_n}{f_{n,s}}$$

cloud computing: when the edge node cannot compute all of its tasks, the EN offloads all remaining tasks $(1-x_n)(1-x_{n,s})I_n$ to cloud center c; data of size $(1-x_n)(1-x_{n,s})I_n$ is transferred over a wireless fronthaul channel, so the transmission delay of the task from edge node s to cloud center c can be rewritten as:

$$T_{s,c}^{tra} = \frac{(1-x_n)(1-x_{n,s})\,I_n}{r_{s,c}}$$

and the computation delay for cloud center c to process the remaining tasks can be rewritten as:

$$T_{n,c}^{comp} = \frac{(1-x_n)(1-x_{n,s})\,\psi_n}{f_{n,c}}$$

at this point, the total delay for mobile device n to complete its task can be expressed as:

$$T_n = \max\left\{\, T_n^{loc},\;\; T_{n,s}^{tra} + T_{n,s}^{comp},\;\; T_{n,s}^{tra} + T_{s,c}^{tra} + T_{n,c}^{comp} \,\right\}$$

and thus the maximum delay model is:

$$T = \max_{n \in N} T_n$$
5. The pipeline-based cloud-edge-end cooperative offloading method of any of claims 1-4, wherein the communication resources comprise transmission rate and transmission power.
6. The pipeline-based cloud-edge-end cooperative offloading method of claim 1, wherein the objective function and corresponding constraints comprise:

$$\min_{\{x_n,\,x_{n,s}\},\,\{f_{n,s},\,f_{n,c}\},\,\{r_{n,s},\,r_{s,c}\},\,\{p_n,\,p_s\}} \; T$$

$$\text{s.t.}\quad C1:\; T_n \le \tau_n, \;\; \forall n \in N$$

$$C2:\; 0 \le x_n \le 1, \quad 0 \le x_{n,s} \le 1, \;\; \forall n \in N,\, s \in S$$

$$C3:\; E_n \le E_n^{max}, \;\; \forall n \in N$$

$$C4:\; \sum_{n \in N} f_{n,s} \le f_s^{max}, \;\; \forall s \in S; \qquad \sum_{s \in S}\sum_{n \in N} f_{n,c} \le f_c^{max}$$

$$C5:\; p_n \le p_n^{max}, \quad p_s \le p_s^{max}, \;\; \forall n \in N,\, s \in S$$

$$C6:\; r_{n,s} \le r_{n,s}^{max}, \;\; \forall n \in N,\, s \in S$$

$$C7:\; r_{s,c} \le r_{s,c}^{max}, \;\; \forall s \in S$$

wherein T is the total processing delay of all mobile device tasks I; constraint C1 states that each task must be completed within its deadline; C2 gives the value range of the pipeline offloading policy; C3 ensures that the energy required to complete the task cannot exceed the upper limit of the mobile device's available energy; C4 states that the computing resource allocation cannot exceed the upper limits of the edge node and cloud-center computing resources; C5 states that the transmit power of the mobile device and of the edge node cannot exceed its upper bound; and C6 and C7 state that the transmission rates from mobile device to edge node and from edge node to cloud center, respectively, cannot exceed their theoretical upper bounds.
7. The pipeline-based cloud-edge-end cooperative offloading method of claim 6, wherein obtaining the optimal task pipeline offloading strategy and the computing and communication resource allocation strategy based on the objective function, the constraints, the current task amount of each mobile device, the computing resources required by each task, and the computing power $f_n$ of mobile device n comprises:
converting the objective function and the constraints into a convex optimization problem to obtain the optimized objective function and constraints;

and inputting the current task amount of each mobile device, the computing resources required by each task, and the computing capacities of the mobile devices, edge nodes, and cloud center into the optimized objective function and constraints to obtain the optimal task pipeline offloading strategy and the computing and communication resource allocation strategy.
8. A pipeline-based cloud-edge-end cooperative offloading system, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any of claims 1 to 7 when executing the computer program.
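To help readers trace the claimed pipeline end to end, the Python sketch below combines the tier-by-tier splitting rule of claim 1 with the remodeled delay expressions of claim 4. It is a simplified, illustrative reading only: the splitting rule is a capacity-per-deadline heuristic that ignores queuing and link delays, every numeric value is a made-up stand-in, and all function and parameter names are this sketch's own rather than identifiers from the patent.

```python
def pipeline_split(psi_n, tau_n, f_n, f_ns):
    """Claim-1 splitting rule (simplified): each tier keeps only the work it
    can finish by the deadline tau_n; whatever is left moves one tier up.
    psi_n: CPU cycles the whole task needs; f_n, f_ns: cycles/s available."""
    x_n = min(1.0, f_n * tau_n / psi_n)      # share the device keeps
    if x_n >= 1.0:
        return 1.0, 0.0                      # device handles everything
    rest = (1.0 - x_n) * psi_n               # cycles still unassigned
    x_ns = min(1.0, f_ns * tau_n / rest)     # share of the rest the EN keeps
    return x_n, x_ns                         # leftover goes to the cloud

def total_delay(x_n, x_ns, I_n, psi_n, f_n, f_ns, f_nc, r_ns, r_sc):
    """Claim-4 delay model (reconstructed notation): the three branches run
    as a pipeline, so the task finishes when the slowest branch finishes."""
    t_loc = x_n * psi_n / f_n                               # local computing
    t_tra_ns = (1.0 - x_n) * I_n / r_ns                     # MD -> EN link
    t_comp_ns = (1.0 - x_n) * x_ns * psi_n / f_ns           # EN computing
    t_tra_sc = (1.0 - x_n) * (1.0 - x_ns) * I_n / r_sc      # EN -> CC link
    t_comp_nc = (1.0 - x_n) * (1.0 - x_ns) * psi_n / f_nc   # CC computing
    return max(t_loc,
               t_tra_ns + t_comp_ns,                        # edge branch
               t_tra_ns + t_tra_sc + t_comp_nc)             # cloud branch

# Illustrative numbers: an 8 Mbit task needing 8e9 cycles, 0.5 s deadline.
tau_n = 0.5
x_n, x_ns = pipeline_split(psi_n=8e9, tau_n=tau_n, f_n=2e9, f_ns=8e9)
T_n = total_delay(x_n, x_ns, I_n=8e6, psi_n=8e9, f_n=2e9, f_ns=8e9,
                  f_nc=50e9, r_ns=50e6, r_sc=200e6)
print(f"x_n={x_n:.3f}  x_ns={x_ns:.3f}  T_n={T_n*1e3:.0f} ms  "
      f"deadline met: {T_n <= tau_n}")
```

Run as-is, the heuristic split misses the 0.5 s deadline once transmission delays are counted (the edge branch alone takes about 0.64 s), which illustrates why claims 6 and 7 jointly optimize the split together with the transmission rates and powers instead of fixing the split first.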