CN111132191B - Method for joint task offloading, caching and resource allocation for a mobile edge computing server - Google Patents


Info

Publication number: CN111132191B (granted publication of application CN111132191A)
Application number: CN201911275166.3A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: user, task, base station, time delay, modeling
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Inventors: 柴蓉 (Chai Rong), 毛梦齐 (Mao Mengqi), 陈前斌 (Chen Qianbin)
Current and original assignee: Chongqing University of Posts and Telecommunications (the listed assignees may be inaccurate)
Legal events: application filed by Chongqing University of Posts and Telecommunications; priority to CN201911275166.3A; publication of CN111132191A; application granted; publication of CN111132191B; active legal status; anticipated expiration


Classifications

    • H04W 24/02: Arrangements for optimising operational condition (hierarchy: H Electricity; H04 Electric communication technique; H04W Wireless communication networks; H04W 24/00 Supervisory, monitoring or testing arrangements)
    • H04W 72/53: Allocation or scheduling criteria for wireless resources based on regulatory allocation policies (hierarchy: H04W 72/00 Local resource management; H04W 72/50 Allocation or scheduling criteria for wireless resources)

Abstract

The invention relates to a method for joint task offloading, caching and resource allocation for a mobile edge computing server, belonging to the technical field of wireless communication and comprising the following steps: S1: modeling the user task auxiliary information type identifier; S2: modeling the user task auxiliary information caching variable; S3: modeling the total time delay of user task execution; S4: modeling the time delay required for local execution of a user task; S5: modeling the time delay required when a user task is offloaded to the base station side for execution; S6: modeling the user task offloading, caching and resource allocation constraints; S7: determining the user task offloading, caching and resource allocation strategy based on minimizing the total time delay of user task execution. By jointly optimizing the offloading, caching and resource allocation strategies, the invention minimizes the total time delay of user task execution.

Description

Method for joint task offloading, caching and resource allocation for a mobile edge computing server
Technical Field
The invention belongs to the technical field of wireless communication and relates to a method for joint task offloading, caching and resource allocation for a mobile edge computing server.
Background
With the rapid development of the internet and the popularization of 4G/5G wireless networks, the era of the Internet of Everything has arrived, and the rapid growth in the number of network edge devices generates massive volumes of data. Existing statistics indicate that most of the data generated on the internet in the future will be stored, processed and analyzed at the network edge, which poses a serious challenge to the limited computing power and battery capacity of intelligent terminals. To address this problem, mobile edge computing offloading has been proposed: a mobile edge computing server with relatively strong computing power is deployed in the network, and the computing tasks of user terminals are offloaded from the mobile device to the mobile edge computing server for processing, which can effectively improve the service performance of the intelligent terminal and significantly reduce its energy consumption. Although the processing capability of current edge devices keeps improving, it is still limited by their computing power and battery capacity; it is difficult to meet business requirements that demand processing large amounts of data in a short time, and the quality of service experienced by users is hard to guarantee. In addition, the high power consumption of mobile devices affects, to some extent, the experience of users running performance-demanding services on the local device.
In existing research, some works use task execution energy consumption as the performance criterion for determining the task offloading strategy. One study formulates the task offloading problem in a small-cell mobile edge computing system as an energy consumption minimization problem and determines each user's offloading strategy with a task offloading algorithm based on the artificial fish swarm algorithm. Another study addresses task offloading in mobile edge computing systems supporting energy harvesting, models the multi-user multi-task computation offloading problem as an energy consumption minimization problem, and proposes a method based on Lyapunov optimization to determine the energy harvesting and task offloading decisions. However, existing research rarely considers time delay optimization, which limits the task execution performance experienced by users; moreover, few works jointly consider the user offloading and caching strategies, making it difficult to achieve network performance optimization.
Disclosure of Invention
In view of this, an object of the present invention is to provide a method for joint task offloading, caching and resource allocation for a mobile edge computing server. It is assumed that a user needs to execute a computing task consisting of two parts of data, namely the data information generated by the user and auxiliary information, and that the mobile edge computing server has certain task computing and caching capabilities; the user can either execute the task locally or offload it to the mobile edge computing server. The total time delay of user task execution is modeled as the optimization objective so as to achieve a jointly optimal allocation of user task offloading, caching and resources.
In order to achieve the purpose, the invention provides the following technical scheme:
A method for joint task offloading, caching and resource allocation for a mobile edge computing server comprises the following steps:
s1: modeling the user task auxiliary information type identifier;
s2: modeling the user task auxiliary information caching variable;
s3: modeling the total time delay of user task execution;
s4: modeling the time delay required for local execution of a user task;
s5: modeling the time delay required when a user task is offloaded to the base station side for execution;
s6: modeling the user task offloading, caching and resource allocation constraints;
s7: determining the user task offloading, caching and resource allocation strategy based on minimizing the total time delay of user task execution.
Further, the step S1 specifically includes: let $\Omega = \{r_1, \ldots, r_M\}$ denote the set of users that need to execute a task, where $r_i$ denotes the $i$-th user, $1 \le i \le M$, and $M$ is the number of users. Assuming that the task to be executed by a user consists of two parts of data, namely the data information generated by the user and auxiliary information, let $F = \{f_1, \ldots, f_Q\}$ denote the set of auxiliary information types, where $f_q$ denotes the $q$-th type of auxiliary information, $1 \le q \le Q$, $Q$ is the number of auxiliary information types, and $A_q$ denotes the data amount of auxiliary information $f_q$. Let $a_{i,q}$ denote the auxiliary information type identifier of a user task: $a_{i,q} = 1$ indicates that the task to be executed by user $r_i$ requires auxiliary information $f_q$; otherwise, $a_{i,q} = 0$.
Further, the step S2 specifically includes: let $\Phi = \{b_1, \ldots, b_N\}$ denote the set of base stations in the network, where $b_j$ denotes the $j$-th base station, $1 \le j \le N$, and $N$ is the number of base stations. Let $y_{j,q}$ denote the caching decision identifier of base station $b_j$: $y_{j,q} = 1$ indicates that base station $b_j$ caches auxiliary information $f_q$; otherwise, $y_{j,q} = 0$.
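As an illustration, the indicator sets of steps S1 and S2 can be held as 0/1 matrices. The sketch below (all values are hypothetical toy numbers, not from the patent) shows how, for one user and one base station, the required auxiliary data splits into a part already cached at the base station and a part that must first be fetched from the cloud server:

```python
# Hypothetical toy instance: M=2 users, N=1 base station, Q=3 auxiliary-information types.
A = [5.0, 2.0, 8.0]          # A_q: data amount (e.g. Mbit) of auxiliary information f_q

# a[i][q] = 1 iff user r_i needs auxiliary information f_q  (step S1)
a = [[1, 0, 1],
     [0, 1, 0]]

# y[j][q] = 1 iff base station b_j caches auxiliary information f_q  (step S2)
y = [[1, 0, 0]]

def aux_volume_split(i, j):
    """Split user i's required auxiliary data into the part cached at b_j
    and the part that b_j must first fetch from the cloud server."""
    cached = sum(a[i][q] * y[j][q] * A[q] for q in range(len(A)))
    from_cloud = sum(a[i][q] * (1 - y[j][q]) * A[q] for q in range(len(A)))
    return cached, from_cloud

print(aux_volume_split(0, 0))  # user r_1 at b_1 → (5.0, 8.0)
```

The split computed here is exactly the quantity that drives the auxiliary-information transmission delay modeled in step S4.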
Further, the step S3 specifically includes: the total time delay of user task execution is modeled as

$T = \sum_{i=1}^{M} T_i$,

where $T_i$ is the time delay required for the task execution of user $i$, modeled as

$T_i = \left(1 - \sum_{j=1}^{N} x_{i,j}\right) T_i^l + \sum_{j=1}^{N} x_{i,j} T_{i,j}^o$,

where $x_{i,j}$ is the scheduling decision identifier for offloading the task of user $i$ to base station $b_j$: $x_{i,j} = 1$ indicates that user $i$ offloads its task to base station $b_j$ for execution; otherwise, $x_{i,j} = 0$. $T_i^l$ denotes the time delay required to execute the task of user $i$ locally, and $T_{i,j}^o$ denotes the time delay required to offload the task of user $i$ to base station $b_j$ for execution.
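The total-delay model of step S3 combines local and offloaded delays through the decision variables $x_{i,j}$. A minimal sketch with hypothetical delay values (the variable names are illustrative, not from the patent):

```python
def total_delay(x, T_local, T_off):
    """T = sum_i [(1 - sum_j x_ij) * T_i^l + sum_j x_ij * T_ij^o]  (step S3).
    x[i][j] = 1 iff user i offloads its task to base station b_j."""
    total = 0.0
    for i in range(len(x)):
        offloaded = sum(x[i])            # 0 (local) or 1 (offloaded), per step S6
        total += (1 - offloaded) * T_local[i] + sum(
            x[i][j] * T_off[i][j] for j in range(len(x[i])))
    return total

x = [[0, 1], [0, 0]]                     # user 1 offloads to b_2, user 2 runs locally
print(total_delay(x, T_local=[4.0, 3.0], T_off=[[2.0, 1.5], [2.5, 2.0]]))  # → 4.5
```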
Further, step S4 specifically includes: the time delay required for local execution of the task of user $i$ is modeled according to the formula $T_i^l = T_i^{l,t} + T_i^{l,e}$, where $T_i^{l,t}$ and $T_i^{l,e}$ denote, respectively, the transmission delay for user $i$ to obtain the task auxiliary information and the local processing delay of the task. $T_i^{l,t}$ is modeled as

$T_i^{l,t} = \sum_{j=1}^{N} z_{i,j} \sum_{q=1}^{Q} a_{i,q} \left[ \frac{A_q}{R_{i,j}^D} + (1 - y_{j,q}) \frac{A_q}{R_M} \right]$,

and $T_i^{l,e}$ is modeled as

$T_i^{l,e} = \frac{D_i}{f_i^l}$,

where $z_{i,j}$ is the association identifier between user and base station: $z_{i,j} = 1$ indicates that user $i$ is associated with base station $b_j$; otherwise, $z_{i,j} = 0$. $R_{i,j}^D$ denotes the transmission rate of the downlink between user $i$ and base station $b_j$, $R_M$ denotes the transmission rate of the link between base station $b_j$ and the cloud server, $D_i$ denotes the amount of computing resources required to execute the task of user $i$, and $f_i^l$ denotes the CPU frequency of user $i$.

Assuming that the system bandwidth is divided into a number of sub-channels, that each sub-channel can be allocated to only one user, and that different base stations may share the spectrum, $R_{i,j}^D$ is modeled as

$R_{i,j}^D = \sum_{l \in C_D} \beta_{i,l} B \log_2 \left( 1 + \frac{P_j^B h_{i,j,l}^D d_{i,j}^{-\lambda}}{I_{i,j,l}^D + \sigma^2} \right)$,

where $\beta_{i,l}$ is the downlink sub-channel allocation decision identifier: $\beta_{i,l} = 1$ indicates that the $l$-th sub-channel is allocated to user $i$; otherwise, $\beta_{i,l} = 0$. $B$ denotes the bandwidth of a sub-channel, $P_j^B$ denotes the transmit power of base station $b_j$, $h_{i,j,l}^D$ denotes the channel gain of the downlink between user $i$ and base station $b_j$ on sub-channel $l$, $d_{i,j}$ denotes the distance between user $i$ and base station $b_j$, $\lambda$ denotes the path loss exponent, $\sigma^2$ denotes the noise power, and $I_{i,j,l}^D$ denotes the interference received from other base stations when user $i$ communicates with base station $b_j$, modeled as

$I_{i,j,l}^D = \sum_{j'=1, j' \neq j}^{N} P_{j'}^B h_{i,j',l}^D d_{i,j'}^{-\lambda}$.

$C_D = \{1, 2, \ldots, c_d\}$ is the set of downlink sub-channels, where $c_d$ is the number of downlink sub-channels in the network.
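The downlink rate of step S4 is a Shannon-type sum over the allocated sub-channels with inter-base-station interference. The sketch below implements that reconstructed formula; the noise power `sigma2` and the exact SINR layout are modeling assumptions (the original equations survive only as image placeholders):

```python
import math

def downlink_rate(beta_i, B, P_B, h, d, lam, sigma2, j):
    """R_ij^D = sum_{l in C_D} beta_il * B * log2(1 + SINR_ijl), where the
    interference I_ijl sums the received powers from all other base stations
    on sub-channel l (reconstructed Shannon-rate model; sigma2 = noise power)."""
    rate = 0.0
    for l, assigned in enumerate(beta_i):        # beta_i[l]: sub-channel l given to user i?
        if not assigned:
            continue
        signal = P_B[j] * h[j][l] * d[j] ** (-lam)
        interference = sum(P_B[jp] * h[jp][l] * d[jp] ** (-lam)
                           for jp in range(len(P_B)) if jp != j)
        rate += B * math.log2(1 + signal / (interference + sigma2))
    return rate

# Single base station, one sub-channel, unit gains: SINR = 1, so rate = B * log2(2) = B.
print(downlink_rate([1], 1.0, [1.0], [[1.0]], [1.0], 2.0, 1.0, 0))  # → 1.0
```

With $R_{i,j}^D$ in hand, the local delay of step S4 is the cached/cloud auxiliary-fetch time plus $D_i / f_i^l$.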
Further, the step S5 specifically includes: the time delay required to offload the task of user $i$ to base station $b_j$ for execution is modeled according to the formula

$T_{i,j}^o = T_{i,j}^{o,t} + T_{i,j}^{o,e}$,

where $T_{i,j}^{o,t}$ and $T_{i,j}^{o,e}$ denote, respectively, the transmission delay for transmitting the task data information of user $i$ to base station $b_j$ and the time delay required for base station $b_j$ to process the task of user $i$. $T_{i,j}^{o,t}$ is modeled as

$T_{i,j}^{o,t} = \frac{S_i}{R_{i,j}^U}$,

and $T_{i,j}^{o,e}$ is modeled as

$T_{i,j}^{o,e} = \frac{D_i}{f_{i,j}}$,

where $S_i$ denotes the data amount of the data information generated by user $i$, $R_{i,j}^U$ denotes the transmission rate of the uplink between user $i$ and base station $b_j$, and $f_{i,j}$ denotes the CPU frequency allocated by base station $b_j$ to user $i$.

$R_{i,j}^U$ is modeled as

$R_{i,j}^U = \sum_{k \in C_U} \delta_{i,k} B \log_2 \left( 1 + \frac{P_i^U h_{i,j,k}^U d_{i,j}^{-\lambda}}{I_{i,j,k}^U + \sigma^2} \right)$,

where $\delta_{i,k}$ is the uplink sub-channel allocation decision identifier: $\delta_{i,k} = 1$ indicates that the $k$-th sub-channel is allocated to user $i$; otherwise, $\delta_{i,k} = 0$. $P_i^U$ denotes the transmit power of user $i$, $h_{i,j,k}^U$ denotes the channel gain of the uplink between user $i$ and base station $b_j$ on sub-channel $k$, and $I_{i,j,k}^U$ denotes the interference received from other users when user $i$ communicates with base station $b_j$ on sub-channel $k$, modeled as

$I_{i,j,k}^U = \sum_{i'=1, i' \neq i}^{M} \delta_{i',k} P_{i'}^U h_{i',j,k}^U d_{i',j}^{-\lambda}$.

$C_U = \{1, 2, \ldots, c_u\}$ is the set of uplink sub-channels, where $c_u$ is the number of uplink sub-channels in the network.
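The offloading delay of step S5 is the uplink transmission time of the user's own data plus the processing time at the base station's MEC server. A minimal sketch with hypothetical values:

```python
def offload_delay(S_i, D_i, R_up, f_ij):
    """T_ij^o = S_i / R_ij^U + D_i / f_ij  (step S5): uplink transmission of the
    user's generated data followed by processing at the MEC server of b_j."""
    return S_i / R_up + D_i / f_ij

# 10 Mbit over a 5 Mbit/s uplink, then 20 Mcycles on a 10 MHz allocation.
print(offload_delay(S_i=10.0, D_i=20.0, R_up=5.0, f_ij=10.0))  # → 4.0
```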
Further, the step S6 specifically includes: modeling the user task offloading, caching and resource allocation constraints, wherein the task offloading constraint is modeled as

$x_{i,j} \in \{0,1\}, \quad \sum_{j=1}^{N} x_{i,j} \le 1$;

the task caching constraints are modeled as

$y_{j,q} \in \{0,1\}, \quad z_{i,j} \in \{0,1\}, \quad \sum_{q=1}^{Q} y_{j,q} A_q \le \Theta_j$,

where $\Theta_j$ denotes the caching capacity of base station $b_j$; and the resource allocation constraints are modeled as

$\delta_{i,k} \in \{0,1\}, \quad \beta_{i,l} \in \{0,1\}, \quad \sum_{i=1}^{M} \delta_{i,k} \le 1, \quad \sum_{i=1}^{M} \beta_{i,l} \le 1, \quad \text{and} \quad \sum_{i=1}^{M} x_{i,j} f_{i,j} \le F_j$,

where $F_j$ denotes the maximum amount of computing resources of base station $b_j$.
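The constraints of step S6 can be checked mechanically on a candidate solution. The sketch below is one reading of the partially garbled constraint text; in particular, the assumptions that each user associates with exactly one base station and offloads to at most one are mine:

```python
def feasible(x, y, z, delta, beta, f, A, Theta, F):
    """Check the step-S6 constraints on a candidate solution (0/1 matrices):
    offload to at most one BS, associate with exactly one BS (assumed),
    each sub-channel serves at most one user, cache and CPU capacities hold."""
    M, N = len(x), len(x[0])
    if any(sum(x[i]) > 1 for i in range(M)):                      # offloading
        return False
    if any(sum(z[i]) != 1 for i in range(M)):                     # association (assumption)
        return False
    for j in range(N):
        if sum(y[j][q] * A[q] for q in range(len(A))) > Theta[j]:  # cache capacity
            return False
        if sum(x[i][j] * f[i][j] for i in range(M)) > F[j]:        # CPU capacity
            return False
    if any(sum(delta[i][k] for i in range(M)) > 1 for k in range(len(delta[0]))):
        return False                                               # uplink sub-channels
    if any(sum(beta[i][l] for i in range(M)) > 1 for l in range(len(beta[0]))):
        return False                                               # downlink sub-channels
    return True

print(feasible(x=[[1]], y=[[1, 0]], z=[[1]], delta=[[1]], beta=[[1]],
               f=[[2.0]], A=[3.0, 4.0], Theta=[5.0], F=[2.0]))  # → True
```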
Further, the step S7 specifically includes: determining the user task offloading, caching and resource allocation strategy based on minimizing the total time delay of user task execution. Subject to the offloading, caching and resource allocation constraints of step S6, the strategy is determined by optimization with the objective of minimizing the total time delay of user task execution, i.e.

$\min_{x_{i,j},\, y_{j,q},\, z_{i,j},\, \delta_{i,k},\, \beta_{i,l},\, f_{i,j}} \; T = \sum_{i=1}^{M} T_i$.
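The joint problem of step S7 is a mixed-integer program. For tiny instances the offloading decisions alone can be brute-forced, as in this illustrative sketch; holding the caching, association and sub-channel variables fixed is a simplification of the full joint optimization, not the patent's method:

```python
from itertools import product

def best_offload(T_local, T_off):
    """Exhaustively search the offloading decisions for the step-S7 objective:
    each of the M users runs locally (choice 0) or offloads to one of the N
    base stations (choice j), minimizing the total delay. O((N+1)^M)."""
    M, N = len(T_off), len(T_off[0])
    best = None
    for choice in product(range(N + 1), repeat=M):
        total = sum(T_local[i] if c == 0 else T_off[i][c - 1]
                    for i, c in enumerate(choice))
        if best is None or total < best[0]:
            best = (total, choice)
    return best

# User 1: local 4.0, b1 2.0, b2 1.5;  user 2: local 3.0, b1 2.5, b2 3.5.
print(best_offload([4.0, 3.0], [[2.0, 1.5], [2.5, 3.5]]))  # → (4.0, (2, 1))
```

Real instances would instead use a MILP solver or a heuristic, since the search space grows exponentially in the number of users.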
The invention has the beneficial effects that: it ensures that the user task offloading and caching strategies and the computing resource allocation are jointly optimal, so that the total time delay of user task execution is minimized.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic diagram of a mobile edge server offload network;
FIG. 2 is a schematic flow chart of the method of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are provided only for illustrating the invention and not for limiting it; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced, and they do not represent the size of an actual product; and it will be understood by those skilled in the art that certain well-known structures in the drawings, and descriptions thereof, may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
The invention provides a method for joint task offloading, caching and resource allocation for a mobile edge computing server. It is assumed that a user needs to execute a computing task consisting of two parts of data, namely the data information generated by the user and auxiliary information, and that the mobile edge computing server has certain task computing and caching capabilities; the user can either execute the task locally or offload it to the mobile edge computing server. The total time delay of user task execution is modeled as the optimization objective so as to achieve a jointly optimal allocation of user task offloading, caching and resources.
In the method, a user task is assumed to consist of two parts of data, namely the data information generated by the user and auxiliary information; the mobile edge computing server can cache the auxiliary information of user tasks and can also obtain it from a cloud server through a backhaul link. Multiple base stations exist in the network, the spectrum is shared among the base stations, task transmission is subject to interference, and a user with a task to execute can select a suitable way to offload it. The total time delay of user task execution is modeled, and the task offloading, caching and resource allocation strategies are determined by optimization based on that total time delay.
As shown in fig. 1, there are multiple users with tasks to execute in the network; each user selects a suitable way to offload its task, and the total time delay of user task execution is minimized by optimizing the user task offloading, caching and resource allocation policies.
As shown in fig. 2, the method of the present invention specifically includes the following steps:
1) Modeling the user task auxiliary information type identifier, specifically: let $\Omega = \{r_1, \ldots, r_M\}$ denote the set of users that need to execute a task, where $r_i$ denotes the $i$-th user, $1 \le i \le M$, and $M$ is the number of users. Assuming that the task to be executed by a user consists of two parts of data, namely the data information generated by the user and auxiliary information, let $F = \{f_1, \ldots, f_Q\}$ denote the set of auxiliary information types, where $f_q$ denotes the $q$-th type of auxiliary information, $1 \le q \le Q$, $Q$ is the number of auxiliary information types, and $A_q$ denotes the data amount of auxiliary information $f_q$. Let $a_{i,q}$ denote the auxiliary information type identifier of a user task: $a_{i,q} = 1$ indicates that the task to be executed by user $r_i$ requires auxiliary information $f_q$; otherwise, $a_{i,q} = 0$.
2) Modeling the user task auxiliary information caching variable, specifically: let $\Phi = \{b_1, \ldots, b_N\}$ denote the set of base stations in the network, where $b_j$ denotes the $j$-th base station, $1 \le j \le N$, and $N$ is the number of base stations. Let $y_{j,q}$ denote the caching decision identifier of base station $b_j$: $y_{j,q} = 1$ indicates that base station $b_j$ caches auxiliary information $f_q$; otherwise, $y_{j,q} = 0$.
3) Modeling the total time delay of user task execution, specifically: the total time delay of user task execution is modeled as

$T = \sum_{i=1}^{M} T_i$,

where $T_i$ is the time delay required for the task execution of user $i$, modeled as

$T_i = \left(1 - \sum_{j=1}^{N} x_{i,j}\right) T_i^l + \sum_{j=1}^{N} x_{i,j} T_{i,j}^o$,

where $x_{i,j}$ is the scheduling decision identifier for offloading the task of user $i$ to base station $b_j$: $x_{i,j} = 1$ indicates that user $i$ offloads its task to base station $b_j$ for execution; otherwise, $x_{i,j} = 0$. $T_i^l$ denotes the time delay required to execute the task of user $i$ locally, and $T_{i,j}^o$ denotes the time delay required to offload the task of user $i$ to base station $b_j$ for execution.
4) Modeling the time delay required for local execution of a user task, specifically: the time delay required for local execution of the task of user $i$ is modeled according to the formula $T_i^l = T_i^{l,t} + T_i^{l,e}$, where $T_i^{l,t}$ and $T_i^{l,e}$ denote, respectively, the transmission delay for user $i$ to obtain the task auxiliary information and the local processing delay of the task. $T_i^{l,t}$ is modeled as

$T_i^{l,t} = \sum_{j=1}^{N} z_{i,j} \sum_{q=1}^{Q} a_{i,q} \left[ \frac{A_q}{R_{i,j}^D} + (1 - y_{j,q}) \frac{A_q}{R_M} \right]$,

and $T_i^{l,e}$ is modeled as

$T_i^{l,e} = \frac{D_i}{f_i^l}$,

where $z_{i,j}$ is the association identifier between user and base station: $z_{i,j} = 1$ indicates that user $i$ is associated with base station $b_j$; otherwise, $z_{i,j} = 0$. $R_{i,j}^D$ denotes the transmission rate of the downlink between user $i$ and base station $b_j$, $R_M$ denotes the transmission rate of the link between base station $b_j$ and the cloud server, $D_i$ denotes the amount of computing resources required to execute the task of user $i$, and $f_i^l$ denotes the CPU frequency of user $i$.

The spectrum may be shared between different base stations, assuming that the system bandwidth is divided equally into a number of sub-channels and that each sub-channel can be allocated to only one user. $R_{i,j}^D$ is modeled as

$R_{i,j}^D = \sum_{l \in C_D} \beta_{i,l} B \log_2 \left( 1 + \frac{P_j^B h_{i,j,l}^D d_{i,j}^{-\lambda}}{I_{i,j,l}^D + \sigma^2} \right)$,

where $\beta_{i,l}$ is the downlink sub-channel allocation decision identifier: $\beta_{i,l} = 1$ indicates that the $l$-th sub-channel is allocated to user $i$; otherwise, $\beta_{i,l} = 0$. $B$ denotes the bandwidth of a sub-channel, $P_j^B$ denotes the transmit power of base station $b_j$, $h_{i,j,l}^D$ denotes the channel gain of the downlink between user $i$ and base station $b_j$ on sub-channel $l$, $d_{i,j}$ denotes the distance between user $i$ and base station $b_j$, $\lambda$ denotes the path loss exponent, $\sigma^2$ denotes the noise power, and $I_{i,j,l}^D$ denotes the interference received from other base stations when user $i$ communicates with base station $b_j$, modeled as

$I_{i,j,l}^D = \sum_{j'=1, j' \neq j}^{N} P_{j'}^B h_{i,j',l}^D d_{i,j'}^{-\lambda}$.

$C_D = \{1, 2, \ldots, c_d\}$ is the set of downlink sub-channels, where $c_d$ is the number of downlink sub-channels in the network.
5) Modeling the time delay required when a user task is offloaded to the base station side for execution, specifically: the time delay required to offload the task of user $i$ to base station $b_j$ for execution is modeled according to the formula

$T_{i,j}^o = T_{i,j}^{o,t} + T_{i,j}^{o,e}$,

where $T_{i,j}^{o,t}$ and $T_{i,j}^{o,e}$ denote, respectively, the transmission delay for transmitting the task data information of user $i$ to base station $b_j$ and the time delay required for base station $b_j$ to process the task of user $i$. $T_{i,j}^{o,t}$ is modeled as

$T_{i,j}^{o,t} = \frac{S_i}{R_{i,j}^U}$,

and $T_{i,j}^{o,e}$ is modeled as

$T_{i,j}^{o,e} = \frac{D_i}{f_{i,j}}$,

where $S_i$ denotes the data amount of the data information generated by user $i$, $R_{i,j}^U$ denotes the transmission rate of the uplink between user $i$ and base station $b_j$, and $f_{i,j}$ denotes the CPU frequency allocated by base station $b_j$ to user $i$.

$R_{i,j}^U$ is modeled as

$R_{i,j}^U = \sum_{k \in C_U} \delta_{i,k} B \log_2 \left( 1 + \frac{P_i^U h_{i,j,k}^U d_{i,j}^{-\lambda}}{I_{i,j,k}^U + \sigma^2} \right)$,

where $\delta_{i,k}$ is the uplink sub-channel allocation decision identifier: $\delta_{i,k} = 1$ indicates that the $k$-th sub-channel is allocated to user $i$; otherwise, $\delta_{i,k} = 0$. $P_i^U$ denotes the transmit power of user $i$, $h_{i,j,k}^U$ denotes the channel gain of the uplink between user $i$ and base station $b_j$ on sub-channel $k$, and $I_{i,j,k}^U$ denotes the interference received from other users when user $i$ communicates with base station $b_j$ on sub-channel $k$, modeled as

$I_{i,j,k}^U = \sum_{i'=1, i' \neq i}^{M} \delta_{i',k} P_{i'}^U h_{i',j,k}^U d_{i',j}^{-\lambda}$.

$C_U = \{1, 2, \ldots, c_u\}$ is the set of uplink sub-channels, where $c_u$ is the number of uplink sub-channels in the network.
6) Modeling the user task offloading, caching and resource allocation constraints, specifically: the task offloading constraint is modeled as

$x_{i,j} \in \{0,1\}, \quad \sum_{j=1}^{N} x_{i,j} \le 1$;

the task caching constraints are modeled as

$y_{j,q} \in \{0,1\}, \quad z_{i,j} \in \{0,1\}, \quad \sum_{q=1}^{Q} y_{j,q} A_q \le \Theta_j$,

where $\Theta_j$ denotes the caching capacity of base station $b_j$; and the resource allocation constraints are modeled as

$\delta_{i,k} \in \{0,1\}, \quad \beta_{i,l} \in \{0,1\}, \quad \sum_{i=1}^{M} \delta_{i,k} \le 1, \quad \sum_{i=1}^{M} \beta_{i,l} \le 1, \quad \text{and} \quad \sum_{i=1}^{M} x_{i,j} f_{i,j} \le F_j$,

where $F_j$ denotes the maximum amount of computing resources of base station $b_j$.
7) Determining the user task offloading, caching and resource allocation strategy based on minimizing the total time delay of user task execution, specifically: subject to the user task offloading, caching and resource allocation constraints of step 6), the user task scheduling and resource allocation strategy is determined by optimization with the objective of minimizing the total time delay of user task execution, i.e.

$\min_{x_{i,j},\, y_{j,q},\, z_{i,j},\, \delta_{i,k},\, \beta_{i,l},\, f_{i,j}} \; T = \sum_{i=1}^{M} T_i$.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (1)

1. A method for joint task offloading, caching and resource allocation for a mobile edge computing server, characterized by comprising the following steps:
s1: modeling the user task auxiliary information type identifier: let $\Omega = \{r_1, \ldots, r_M\}$ denote the set of users that need to execute a task, where $r_i$ denotes the $i$-th user, $1 \le i \le M$, and $M$ is the number of users; assuming that the task to be executed by a user consists of two parts of data, namely the data information generated by the user and auxiliary information, let $F = \{f_1, \ldots, f_Q\}$ denote the set of auxiliary information types, where $f_q$ denotes the $q$-th type of auxiliary information, $1 \le q \le Q$, $Q$ is the number of auxiliary information types, and $A_q$ denotes the data amount of auxiliary information $f_q$; let $a_{i,q}$ denote the auxiliary information type identifier of a user task: $a_{i,q} = 1$ indicates that the task to be executed by user $r_i$ requires auxiliary information $f_q$; otherwise, $a_{i,q} = 0$;
s2: modeling the user task auxiliary information caching variable: let $\Phi = \{b_1, \ldots, b_N\}$ denote the set of base stations in the network, where $b_j$ denotes the $j$-th base station, $1 \le j \le N$, and $N$ is the number of base stations; let $y_{j,q}$ denote the caching decision identifier of base station $b_j$: $y_{j,q} = 1$ indicates that base station $b_j$ caches auxiliary information $f_q$; otherwise, $y_{j,q} = 0$;
s3: modeling the total time delay of user task execution: the total time delay of user task execution is modeled as

$T = \sum_{i=1}^{M} T_i$,

where $T_i$ is the time delay required for the task execution of user $i$, modeled as

$T_i = \left(1 - \sum_{j=1}^{N} x_{i,j}\right) T_i^l + \sum_{j=1}^{N} x_{i,j} T_{i,j}^o$,

where $x_{i,j}$ is the scheduling decision identifier for offloading the task of user $i$ to base station $b_j$: $x_{i,j} = 1$ indicates that user $i$ offloads its task to base station $b_j$ for execution; otherwise, $x_{i,j} = 0$; $T_i^l$ denotes the time delay required to execute the task of user $i$ locally, and $T_{i,j}^o$ denotes the time delay required to offload the task of user $i$ to base station $b_j$ for execution;
S4: modeling the time delay required for local execution of a user task; according to the formula $T_i^l = T_i^{l,t} + T_i^{l,e}$, modeling the time delay required for the task of user i to execute locally, wherein $T_i^{l,t}$ and $T_i^{l,e}$ respectively denote the transmission delay for user i to obtain the task auxiliary information and the local processing delay of the task; $T_i^{l,t}$ is modeled as

$T_i^{l,t} = \sum_j z_{i,j}\left[\frac{O_{q_i}}{R_{i,j}^D} + \left(1 - y_{j,q_i}\right)\frac{O_{q_i}}{R_M}\right],$

wherein $q_i$ denotes the index of the auxiliary-information content required by the task of user i, $O_{q_i}$ denotes its data size, and $y_{j,q_i}$ indicates whether base station $b_j$ has cached that content (the cache identifier $y_{j,q}$ appears in step S6): if the content is cached at the associated base station it is transmitted over the downlink only, otherwise it must first be fetched from the cloud server at rate $R_M$; $T_i^{l,e}$ is modeled as

$T_i^{l,e} = \frac{D_i}{f_i^l},$

wherein $z_{i,j}$ denotes the association identifier of the user with the base station: $z_{i,j}=1$ denotes that user i is associated with base station $b_j$, otherwise $z_{i,j}=0$; $R_{i,j}^D$ denotes the transmission rate of the downlink between user i and base station $b_j$; $R_M$ denotes the transmission rate of the link between base station $b_j$ and the cloud server; $D_i$ denotes the amount of computing resources required by the task of user i; and $f_i^l$ denotes the CPU frequency of user i. It is assumed that the system bandwidth is divided into a plurality of sub-channels, each sub-channel can be allocated to only one user, and different base stations may share the spectrum. $R_{i,j}^D$ is modeled as

$R_{i,j}^D = \sum_{l \in C_D} \beta_{i,l}\, B \log_2\!\left(1 + \frac{P_j^B\, g_{i,j,l}^D\, d_{i,j}^{-\lambda}}{\sigma^2 + I_{i,j,l}^D}\right),$

wherein $\beta_{i,l}$ is the downlink sub-channel allocation decision identifier: $\beta_{i,l}=1$ denotes that the l-th sub-channel is allocated to user i, otherwise $\beta_{i,l}=0$; $B$ denotes the bandwidth of a sub-channel; $P_j^B$ denotes the transmit power of base station $b_j$; $g_{i,j,l}^D$ denotes the channel gain of the downlink between user i and base station $b_j$ on sub-channel l; $d_{i,j}$ denotes the distance between user i and base station $b_j$; $\lambda$ denotes the path-loss exponent; $\sigma^2$ denotes the noise power; and $I_{i,j,l}^D$ denotes the interference from other base stations received when user i communicates with base station $b_j$, modeled as

$I_{i,j,l}^D = \sum_{j' \neq j} P_{j'}^B\, g_{i,j',l}^D\, d_{i,j'}^{-\lambda};$

$C_D = \{1, 2, \ldots, c_d\}$ is the set of downlink sub-channels, wherein $c_d$ is the number of downlink sub-channels in the network;
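The downlink-rate model of step S4 is Shannon capacity summed over the sub-channels assigned to the user, with distance-based path loss and inter-cell interference in the SINR. The sketch below is a hedged illustration only; symbol names, argument order, and units (B in Hz, powers in W) are assumptions, not the patent's notation:

```python
import math

# Hedged sketch of the step-S4 downlink rate (symbols assumed):
# R = sum over assigned sub-channels l of B * log2(1 + SINR_l),
# SINR_l = P_B * g[l] * d**(-lam) / (noise + interference[l])

def downlink_rate(beta_i, B, P_B, g, d, lam, interference, noise):
    """beta_i[l] = 1 iff sub-channel l is allocated to this user."""
    rate = 0.0
    for l, assigned in enumerate(beta_i):
        if assigned:
            sinr = P_B * g[l] * d ** (-lam) / (noise + interference[l])
            rate += B * math.log2(1.0 + sinr)
    return rate

# One assigned sub-channel with SINR = 3 gives B * log2(4) = 2 * B.
print(downlink_rate([1], 1.0, 3.0, [1.0], 1.0, 2.0, [0.5], 0.5))  # 2.0
```

The same shape applies to the uplink rate of step S5, with the user's transmit power and interference from other users in place of the base-station quantities.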
S5: modeling the time delay required for offloading a user task to the base station side for execution; according to the formula

$T_{i,j}^o = T_{i,j}^{o,t} + T_{i,j}^{o,e}$

modeling the time delay required for user i to offload the task to base station $b_j$ for execution, wherein $T_{i,j}^{o,t}$ and $T_{i,j}^{o,e}$ respectively denote the transmission delay for transmitting the task data information of user i to base station $b_j$ and the time delay required for base station $b_j$ to process the task of user i; $T_{i,j}^{o,t}$ is modeled as

$T_{i,j}^{o,t} = \frac{S_i}{R_{i,j}^U},$

and $T_{i,j}^{o,e}$ is modeled as

$T_{i,j}^{o,e} = \frac{D_i}{f_{i,j}},$

wherein $S_i$ denotes the amount of task data information generated by user i; $R_{i,j}^U$ denotes the transmission rate of the uplink between user i and base station $b_j$; and $f_{i,j}$ denotes the CPU frequency that base station $b_j$ allocates to user i. $R_{i,j}^U$ is modeled as

$R_{i,j}^U = \sum_{k \in C_U} \delta_{i,k}\, B \log_2\!\left(1 + \frac{P_i^U\, g_{i,j,k}^U\, d_{i,j}^{-\lambda}}{\sigma^2 + I_{i,j,k}^U}\right),$

wherein $\delta_{i,k}$ is the uplink sub-channel allocation decision identifier: $\delta_{i,k}=1$ denotes that the k-th sub-channel is allocated to user i, otherwise $\delta_{i,k}=0$; $P_i^U$ denotes the transmit power of user i; $g_{i,j,k}^U$ denotes the channel gain of the uplink between user i and base station $b_j$ on sub-channel k; and $I_{i,j,k}^U$ denotes the interference from other users received when user i communicates with base station $b_j$ on sub-channel k, modeled as

$I_{i,j,k}^U = \sum_{i' \neq i} \delta_{i',k}\, P_{i'}^U\, g_{i',j,k}^U\, d_{i',j}^{-\lambda};$

$C_U = \{1, 2, \ldots, c_u\}$ is the set of uplink sub-channels, wherein $c_u$ is the number of uplink sub-channels in the network;
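The offloading delay of step S5 is just the upload time plus the edge processing time. A minimal sketch, with units assumed ($S_i$ in bits, $R^U$ in bit/s, $D_i$ in CPU cycles, $f_{i,j}$ in cycles/s) and the function name invented:

```python
# Minimal sketch of the step-S5 offloading delay (assumed units:
# S_i in bits, R_up in bit/s, D_i in CPU cycles, f_ij in cycles/s):
# T_off = S_i / R_up  (upload)  +  D_i / f_ij  (edge processing)

def offload_delay(S_i, R_up, D_i, f_ij):
    return S_i / R_up + D_i / f_ij

# 1 Mbit over a 2 Mbit/s uplink, 1e9 cycles at 2 GHz: 0.5 s + 0.5 s.
print(offload_delay(1e6, 2e6, 1e9, 2e9))  # 1.0
```

Note that, unlike the local-execution branch of step S4, this model charges no download time for results, matching the two-term decomposition stated in the claim.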
S6: modeling the user task offloading, caching and resource allocation constraints; wherein the task offloading constraint is modeled as $x_{i,j} \in \{0,1\}$ and $\sum_j x_{i,j} \leq 1$; the task caching constraints are modeled as $y_{j,q} \in \{0,1\}$, $z_{i,j} \in \{0,1\}$, $\sum_j z_{i,j} = 1$ and

$\sum_q y_{j,q}\, O_q \leq \theta_j,$

wherein $\theta_j$ denotes the cache capacity of base station $b_j$ and $O_q$ denotes the data size of content q; the resource allocation constraints are modeled as $\delta_{i,k} \in \{0,1\}$, $\beta_{i,l} \in \{0,1\}$,

$\sum_i \delta_{i,k} \leq 1, \qquad \sum_i \beta_{i,l} \leq 1,$

and

$\sum_i x_{i,j}\, f_{i,j} \leq F_j,$

wherein $F_j$ denotes the maximum amount of computing resources of base station $b_j$;
S7: determining the user task offloading, caching and resource allocation strategy based on minimizing the total time delay of user task execution; subject to the user task offloading, caching and resource allocation constraints, the user task scheduling and resource allocation strategy is determined by solving the optimization problem of minimizing the total time delay of user task execution, namely

$\min_{\{x_{i,j}\},\{y_{j,q}\},\{z_{i,j}\},\{\delta_{i,k}\},\{\beta_{i,l}\},\{f_{i,j}\}} \; \sum_i T_i,$

subject to the constraints of step S6.
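The step-S7 problem is a mixed-integer program, and the claim does not prescribe a solver. On toy instances the offloading decision alone can be checked by exhaustive search. The sketch below is a simplified sanity check, not the patent's method: caching, association and sub-channel allocation are taken as given through the precomputed delay tables, and all names (`best_offload`, `f_req`) are invented:

```python
from itertools import product

# Toy exhaustive search over offloading decisions for the step-S7 objective,
# with the per-base-station computing-capacity constraint of step S6.

def best_offload(t_local, t_off, F, f_req):
    """F[j]: computing capacity of BS j; f_req[i]: CPU frequency for user i."""
    n_users, n_bs = len(t_local), len(t_off[0])
    best_delay, best_choice = float("inf"), None
    # c[i] = -1 means user i computes locally, otherwise the target BS index.
    for c in product(range(-1, n_bs), repeat=n_users):
        used = [0.0] * n_bs
        total = 0.0
        for i, ci in enumerate(c):
            if ci < 0:
                total += t_local[i]
            else:
                used[ci] += f_req[i]
                total += t_off[i][ci]
        if all(used[j] <= F[j] for j in range(n_bs)) and total < best_delay:
            best_delay, best_choice = total, c
    return best_delay, best_choice

# Each BS can serve only one user here; the optimum splits the users across BSs.
print(best_offload([4.0, 6.0], [[2.0, 3.0], [2.5, 1.5]], [1, 1], [1, 1]))
# (3.5, (0, 1))
```

The search space grows as $(M+1)^N$ for N users and M base stations, which is why the full joint problem calls for a dedicated optimization method rather than enumeration.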
CN201911275166.3A 2019-12-12 2019-12-12 Method for unloading, caching and resource allocation of joint tasks of mobile edge computing server Active CN111132191B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911275166.3A CN111132191B (en) 2019-12-12 2019-12-12 Method for unloading, caching and resource allocation of joint tasks of mobile edge computing server


Publications (2)

Publication Number Publication Date
CN111132191A CN111132191A (en) 2020-05-08
CN111132191B (en) 2022-04-01

Family

ID=70499941

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911275166.3A Active CN111132191B (en) 2019-12-12 2019-12-12 Method for unloading, caching and resource allocation of joint tasks of mobile edge computing server

Country Status (1)

Country Link
CN (1) CN111132191B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111711479B (en) * 2020-06-15 2022-02-11 重庆邮电大学 Low-earth-orbit satellite system resource scheduling method
CN111901400A (en) * 2020-07-13 2020-11-06 兰州理工大学 Edge computing network task unloading method equipped with cache auxiliary device
CN112068952A (en) * 2020-07-21 2020-12-11 清华大学 Cooperative optimization method for network communication, calculation and storage resources of multiple unmanned aerial vehicles
CN111866954B (en) * 2020-07-21 2022-03-29 重庆邮电大学 User selection and resource allocation method based on federal learning
CN112187872B (en) * 2020-09-08 2021-07-30 重庆大学 Content caching and user association optimization method under mobile edge computing network
CN112291335B (en) * 2020-10-27 2021-11-02 上海交通大学 Optimized task scheduling method in mobile edge calculation
CN112689296B (en) * 2020-12-14 2022-06-24 山东师范大学 Edge calculation and cache method and system in heterogeneous IoT network
CN113377447B (en) * 2021-05-28 2023-03-21 四川大学 Multi-user computing unloading method based on Lyapunov optimization
CN114125936B (en) * 2021-11-29 2023-09-05 中国联合网络通信集团有限公司 Resource scheduling method, device and storage medium
CN115334669B (en) * 2022-07-29 2023-05-16 西安邮电大学 Resource allocation method assisted by non-orthogonal multiple access and energy collection in cacheable network
CN115297013B (en) * 2022-08-04 2023-11-28 重庆大学 Task unloading and service cache joint optimization method based on edge collaboration
CN117251296B (en) * 2023-11-15 2024-03-12 成都信息工程大学 Mobile edge computing task unloading method with caching mechanism

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107708135A (en) * 2017-07-21 2018-02-16 上海交通大学 A kind of resource allocation methods for being applied to mobile edge calculations scene
EP3457664A1 (en) * 2017-09-14 2019-03-20 Deutsche Telekom AG Method and system for finding a next edge cloud for a mobile user
CN109756912A (en) * 2019-03-25 2019-05-14 重庆邮电大学 A kind of multiple base stations united task unloading of multi-user and resource allocation methods
CN110418418A (en) * 2019-07-08 2019-11-05 广州海格通信集团股份有限公司 Scheduling method for wireless resource and device based on mobile edge calculations
CN110418416A (en) * 2019-07-26 2019-11-05 东南大学 Resource allocation methods based on multiple agent intensified learning in mobile edge calculations system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Energy Efficient Computation Offloading for Energy Harvesting-Enabled Heterogeneous Cellular Networks (Workshop); Mengqi Mao; Communications and Networking; 2019-12-01; full text *
A Survey of Computation Offloading Strategies in Mobile Edge Computing; Dong Siqi; Computer Science; 2019-11-30; full text *


Similar Documents

Publication Publication Date Title
CN111132191B (en) Method for unloading, caching and resource allocation of joint tasks of mobile edge computing server
CN111447619B (en) Joint task unloading and resource allocation method in mobile edge computing network
CN107995660B (en) Joint task scheduling and resource allocation method supporting D2D-edge server unloading
CN110087318B (en) Task unloading and resource allocation joint optimization method based on 5G mobile edge calculation
Liu et al. Economically optimal MS association for multimedia content delivery in cache-enabled heterogeneous cloud radio access networks
WO2022121097A1 (en) Method for offloading computing task of mobile user
CN109413724B (en) MEC-based task unloading and resource allocation scheme
CN109151864B (en) Migration decision and resource optimal allocation method for mobile edge computing ultra-dense network
CN111586696B (en) Resource allocation and unloading decision method based on multi-agent architecture reinforcement learning
CN109756912B (en) Multi-user multi-base station joint task unloading and resource allocation method
CN111182570B (en) User association and edge computing unloading method for improving utility of operator
CN111372314A (en) Task unloading method and task unloading device based on mobile edge computing scene
CN112888002B (en) Game theory-based mobile edge computing task unloading and resource allocation method
CN109194763B (en) Caching method based on small base station self-organizing cooperation in ultra-dense network
CN112105062B (en) Mobile edge computing network energy consumption minimization strategy method under time-sensitive condition
CN107613556B (en) Full-duplex D2D interference management method based on power control
CN111200831B (en) Cellular network computing unloading method fusing mobile edge computing
CN111915142B (en) Unmanned aerial vehicle auxiliary resource allocation method based on deep reinforcement learning
CN112512065B (en) Method for unloading and migrating under mobile awareness in small cell network supporting MEC
Zhou et al. Multi-server federated edge learning for low power consumption wireless resource allocation based on user QoE
Paymard et al. Task scheduling based on priority and resource allocation in multi-user multi-task mobile edge computing system
CN111954230B (en) Computing migration and resource allocation method based on integration of MEC and dense cloud access network
CN110061826B (en) Resource allocation method for maximizing energy efficiency of multi-carrier distributed antenna system
CN116916386A (en) Large model auxiliary edge task unloading method considering user competition and load
CN116405979A (en) Millimeter wave mobile edge computing networking resource allocation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant