CN114375058A - Task queue aware edge computing real-time channel allocation and task unloading method - Google Patents

Task queue aware edge computing real-time channel allocation and task unloading method

Info

Publication number
CN114375058A
CN114375058A (application CN202210058397.4A)
Authority
CN
China
Prior art keywords
task
user
combination
phi
channel
Prior art date
Legal status
Pending
Application number
CN202210058397.4A
Other languages
Chinese (zh)
Inventor
孙彦赞
谢新坤
张舜卿
吴雅婷
王涛
Current Assignee
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN202210058397.4A priority Critical patent/CN114375058A/en
Publication of CN114375058A publication Critical patent/CN114375058A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W72/00: Local resource management
    • H04W72/50: Allocation or scheduling criteria for wireless resources
    • H04W72/54: Allocation or scheduling criteria for wireless resources based on quality criteria
    • H04W72/542: Allocation or scheduling criteria for wireless resources based on quality criteria using measured or perceived quality
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00: Arrangements for monitoring or testing data switching networks
    • H04L43/08: Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0852: Delays
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/16: Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W72/00: Local resource management
    • H04W72/50: Allocation or scheduling criteria for wireless resources
    • H04W72/53: Allocation or scheduling criteria for wireless resources based on regulatory allocation policies

Abstract

A task-queue-aware edge computing method for real-time channel allocation and task unloading. The task-queue-aware base station channel allocation and user task unloading problem is converted into a single-slot optimization model according to the Lyapunov optimization framework; the users are then divided into combinations according to game theory and play a cooperative game until a stably converged combination set is formed, which finally yields the channel allocation and task unloading strategy of each time slot.

Description

Task queue aware edge computing real-time channel allocation and task unloading method
Technical Field
The invention relates to a technology in the field of distributed computing, and in particular to a real-time channel allocation and task unloading method that is based on Lyapunov optimization and game theory and is aware of the task queue backlog.
Background
Mobile Edge Computing (MEC) is widely recognized as a key technology of the next-generation Internet. Traditional cloud computing relies on a remote public cloud, and the required data exchange causes long delays, whereas MEC deploys a cloud computing platform at the edge of the radio access network to provide computing, storage, network and communication resources for application services, easing the tension between computation-intensive applications and resource-limited mobile devices. By offloading the computing tasks of a mobile device to an MEC server, the distance to the application service is shortened, which reduces energy consumption and execution delay and significantly improves the user's quality of experience.
In existing computation offloading techniques, a channel is usually allocated only once per wireless transmission, i.e. a task does not change channel during transmission. However, the channel is time-varying and a task may need several time slots to be transmitted, so a poor channel can cause a large transmission delay. In addition, existing techniques do not consider the task queue status of the different application services on the edge server, so one service may be overloaded while others are idle, which leads to a large computation delay.
Disclosure of Invention
The invention addresses the following problems of the prior art: during channel allocation, the wireless transmission delay and the task computation delay are large, and the state of the different task queues of the MEC server is not sensed. As a result, too many tasks of the same type may arrive at the same time, the corresponding task queue becomes backlogged and causes a large computation delay, and in user-dense areas the backlog of an individual application service in the edge server increases the average delay of all application tasks.
The invention is realized by the following technical scheme:
the invention relates to a task queue sensing edge computing real-time channel allocation and task unloading method, which comprises the following steps:
step A, generating a channel gain matrix of each time slot and a transmission rate which can be achieved by a user in each sub-channel based on the distance between the user and a base station, and calculating the corresponding waiting time delay, transmission time delay and edge calculation time delay of each task according to the task amount of an application service task queue of an edge server, wherein the method specifically comprises the following steps:
step A-1, channel gain of user on sub-channel
Figure BDA0003477285920000021
Wherein: dmIs the distance of user m to the base station, Ll(dm) At a distance d from the base station for a subchannel lmThe path loss of the optical fiber (A) is reduced,
Figure BDA0003477285920000022
is small-scale zero-mean gaussian distribution.
Step A-2: compute the received signal-to-interference-plus-noise ratio (SINR) between user m and the base station on sub-channel l from the transmit power p_{l,m} of user m on sub-channel l (which does not change over time), the channel gain, and the thermal noise power variance σ². Without loss of generality, the users on the l-th sub-channel are ordered by estimated channel gain as h_{l,1}(t) ≥ h_{l,2}(t) ≥ … ≥ h_{l,M}(t).
Step A-3: compute the transmission rate of user m on sub-channel l in the t-th time slot from the SINR, where L is the total number of sub-channels, W is the total channel bandwidth of the base station and is divided equally among the sub-channels, and a_{l,m}(t) is the channel allocation decision variable.
Step A-4: compute the size of each task that remains after the transmissions of each time slot; the remaining size decreases by the amount of data transmitted during the slot, where c_{m,k}(t) is the task offloading decision variable of each user and ΔT is the duration of each time slot.
Step A-5: compute the waiting delay of each task, i.e. the delay from the generation of the task to the start of its transmission, and its transmission delay, where t^{gen}_{m,k,j} is the generation time of the task and t^{start}_{m,k,j} and t^{end}_{m,k,j} are the start slot and end slot of the wireless transmission of each task.
Step A-6: with Q_k(t) denoting the task amount of the task queue of application service k in the edge server at time slot t, compute the queue length at the transmission completion time of each task, where t^{end}_n is the transmission completion time of the n-th task and (m, k) → n maps each task of the user, at its transmission completion time, to a task index n of the task queue. When n = 0, the task queue amount is Q_k(0) = 0.
Step A-7: compute the edge computing delay of each task from the queue length, where ν_k = μ_k/F_K is the average per-bit processing time of application service k in the edge server, μ_k is the computation amount required per bit of a k-type application, and F_K is the computing resource the edge server allocates to the k-type application service (a short illustrative sketch of these delay components follows).
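The three delay components of steps A-5 to A-7 can be illustrated with a short Python sketch (Python being the language of the embodiment below). The variable names, the slot-to-seconds bookkeeping and the exact form of the edge computing delay are assumptions made only for this illustration, not the notation of the claims.

def task_delays(t_gen, t_start, t_end, slot_len_s, queue_bits, mu_k, F_k):
    """Return (waiting, transmission, edge computing) delay of one task in seconds.

    t_gen, t_start, t_end : generation slot and first/last transmission slot
    slot_len_s            : duration of one time slot, in seconds (Delta T)
    queue_bits            : backlog Q_k of service k when the task enters the queue
    mu_k, F_k             : CPU cycles per bit and CPU cycles per second for service k
    """
    waiting = (t_start - t_gen) * slot_len_s            # generation -> start of transmission
    transmission = (t_end - t_start + 1) * slot_len_s   # slots spent on the radio link
    nu_k = mu_k / F_k                                   # seconds needed to process one bit
    edge = nu_k * queue_bits                            # bits queued ahead set the computing delay
    return waiting, transmission, edge

# Example: 2 slots of waiting, 3 slots of transmission, 1 Mbit already queued,
# 500 cycles per bit and 5 GHz allocated to the service.
print(task_delays(t_gen=0, t_start=2, t_end=4, slot_len_s=0.1,
                  queue_bits=1e6, mu_k=500, F_k=5e9))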
Step B: model the task-queue-aware base station channel allocation and user task unloading problem as an optimization model whose objective is to minimize the average task delay, and convert it into a single-slot optimization objective according to the Lyapunov optimization framework. Specifically:
and step B-1, establishing an optimization model taking the minimized task average time delay as a target.
Step B-2: convert the optimization objective into maximizing the cumulative sum, over all time slots, of the transmitted and edge-processed task sizes.
Step B-3: based on the Lyapunov optimization framework, convert the time-domain optimization objective into a single-slot model through the constructed delay-residual queue and task-backlog queue, where U_k(a_t, c_t) is the utility function of each application service.
Step C: construct a combination set based on a cooperative game, i.e. group the users that request the same type of application service into combinations, let the users play the game with the objective of maximizing the system utility, and let the final combination set converge to a stable one. Specifically:
step C-1, initializing a combined set: and the base station randomly allocates channels to users requesting the service, each user randomly selects a task to be transmitted, and an initial combination set pi is constructed according to the users with the same task transmission type.
Step C-2: calculate the utility u_m(φ) of user m under its combination, the utility U_k(φ) of each combination, and the total system utility.
Step C-3: user m selects a combination from the combination set; if user m is not yet associated with a sub-channel, a sub-channel associated with that combination is first allocated to user m, the own utility, combination utility and total system utility after joining are calculated, and it is judged whether user m satisfies the following transfer conditions:
step C-3-1, user m makes a current combination phiiTransfer to combination phijWhen the effect is not less than the effect before adding;
step C-3-2, user m makes a current combination phiiTransfer to combination phijThe system utility is greater than that of the original combined set before addition.
Step C-4: when the transfer conditions are satisfied, add the combination φ_i to a candidate set; when they are not satisfied, reselect another combination to join.
Step C-5: when the candidate set is not empty, select from user m's candidate combinations the combination φ_opt that maximizes the system utility, and update the new and old combinations after the user joins.
Step C-6: when no user changes its combination any more, the game ends and a stable combination set is obtained.
Step C-7: generate the channel allocation and task unloading strategy from the final stable combination set (a sketch of this game loop follows).
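A minimal Python skeleton of the game in steps C-1 to C-7 is given below; the user-to-combination mapping, the utilities callable and the iteration cap are assumptions made for illustration, and the utility values themselves would come from the single-slot model of step B.

def coalition_game(initial_assignment, utilities, max_rounds=100):
    """Run the coalition-formation game of steps C-1 to C-7.

    initial_assignment : dict mapping each user to its initial combination (C-1)
    utilities(assign)  : callable returning (per-user utility dict, total system utility)
    Returns the stable assignment from which the channel allocation and task
    unloading strategy of the slot is derived (C-7).
    """
    assignment = dict(initial_assignment)
    combinations = set(assignment.values())
    for _ in range(max_rounds):
        changed = False
        for m in assignment:
            u, total = utilities(assignment)                  # C-2: current utilities
            candidates = []
            for target in combinations - {assignment[m]}:     # C-3: try another combination
                trial = dict(assignment)
                trial[m] = target
                u_new, total_new = utilities(trial)
                if u_new[m] >= u[m] and total_new > total:    # C-3-1 and C-3-2
                    candidates.append((total_new, target))    # C-4: candidate set
            if candidates:                                    # C-5: pick the best candidate
                assignment[m] = max(candidates, key=lambda c: c[0])[1]
                changed = True
        if not changed:                                       # C-6: no user moves, stable set reached
            break
    return assignment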
Step D: in each time slot, transmit tasks according to the obtained channel allocation and task unloading strategy, update the delay-residual queue and the task-backlog queue according to the queue update rule, and keep executing step C until all tasks have been transmitted.
The invention also relates to a system for implementing the method, comprising a data processing unit, a channel allocation and task unloading decision unit, and a data updating unit. The data processing unit computes the transmission rate each user can achieve on each sub-channel from the channel gain information and the user task information, and computes the waiting delay, transmission delay and edge computing delay of each task from the task amount of each application-service task queue of the edge server. The channel allocation and task unloading decision unit runs the combination game among the users, based on the delay information and task queue state of the user tasks, to obtain a stable channel allocation and task unloading strategy. The data updating unit updates the delay-residual queue and the task-backlog queue according to the current channel allocation and task unloading strategy.
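The three units can be sketched as a Python skeleton; the class and method names are illustrative and not taken from the patent.

class DataProcessingUnit:
    def rates_and_delays(self, channel_gains, user_tasks, queue_state):
        """Compute the per-sub-channel achievable rates and the waiting,
        transmission and edge computing delay of each task (steps A-1 to A-7)."""
        raise NotImplementedError

class AllocationAndOffloadingUnit:
    def decide(self, delay_info, queue_state):
        """Play the combination game and return a stable channel allocation
        and task unloading strategy for the current slot (step C)."""
        raise NotImplementedError

class DataUpdateUnit:
    def update(self, strategy, residual_delay_queue, backlog_queue):
        """Update the delay-residual queue and the task-backlog queue after the
        slot's transmissions (queue update rule of step D)."""
        raise NotImplementedError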
Technical effects
Compared with the prior art, the invention takes into account the influence of the data backlog of the task queues in the MEC server on the channel allocation and task unloading decisions. In a user-dense scenario with multiple task types, channel allocation and task unloading jointly consider the data amount of the task queues and the residual delay of the user tasks, so that application tasks whose queue holds less data obtain a higher priority; this reduces the server computation delay and achieves a lower average delay.
During task transmission, the time domain is divided into multiple time slots; in each slot the task queue backlog of the application services and the residual delay of the tasks are jointly considered and a game is played among the user combinations, yielding a stable channel allocation and task unloading strategy. The method couples game theory with channel allocation and task unloading: the user combinations formed by task type play a cooperative game and converge to a stable combination set, from which a better channel allocation and task unloading strategy is obtained.
Drawings
FIG. 1 is a schematic diagram of an embodiment;
FIG. 2 is a graph of average task size versus average delay;
FIG. 3 is a graph of average task size versus task timeout percentage;
FIG. 4 is a graph of average task size versus task queue backlog variance;
FIG. 5 is a graph of the distance between a user and a base station and the average delay;
FIG. 6 is a diagram of the relationship between the distance between the user and the base station and the backlog variance of the task queue;
FIG. 7 is a flowchart illustrating exemplary steps of the present invention.
Detailed Description
In this embodiment, the experiments run on a Windows 10 64-bit operating system with an Intel i7-7600U CPU and 16 GB of memory, and the development language is Python. The MEC network scenario contains an edge server and a base station. The base station has L sub-channels, i.e. l ∈ L = {1, 2, …, L}; there are M service-requesting users in its coverage, i.e. m ∈ M = {1, 2, …, M}; and each user has K types of application requests, i.e. k ∈ K = {1, 2, …, K}. The average task generation size of a k-type application is B_k, and the size b_{m,k,j} of tasks of the same application type follows a uniform distribution with mean B_k; the computation amount required per bit of a k-type application task is μ_k.
The time domain is divided into T time slots, i.e. t ∈ T = {1, 2, …, T}, each of duration ΔT. The tasks generated by the k-type application of user m within the T slots are indexed in order of their arrival time.
As shown in fig. 7, the present embodiment relates to a method for allocating a task queue-aware edge computing real-time channel and offloading a task, which includes the specific steps of:
Step one: generate the channel gain matrix H of each time slot and the transmission rate each user can achieve on each sub-channel from the distance between the user and the base station, and compute the waiting delay, transmission delay and edge computing delay of each task from the task amount of each application-service task queue of the edge server.
the communication adopts Non-Orthogonal Multiple Access (NOMA) technology, which allows Multiple users to use the same resource block at the same time, and further applies Successive Interference Cancellation (SIC) technology to alleviate the co-channel Interference of the users, so as to effectively improve the resource utilization rate. According to the rules of the NOMA protocol, the base station employs SIC for multi-user detection, specifically, the base station sequentially decodes signals from devices with higher channel gain and treats all other signals as interference.
The channel gain h_{l,m}(t) of user m on sub-channel l is obtained from the path loss L_l(d_m) of sub-channel l at distance d_m from the base station and a small-scale zero-mean Gaussian fading term g_{l,m}(t), where d_m is the distance from user m to the base station. Without loss of generality, the users on the l-th sub-channel are ordered by estimated channel gain as h_{l,1}(t) ≥ h_{l,2}(t) ≥ … ≥ h_{l,M}(t).
The received signal-to-interference-plus-noise ratio (SINR) of user m on sub-channel l is then the received power p_{l,m}·h_{l,m}(t) divided by the interference of the users decoded after user m plus the thermal noise power variance σ², where p_{l,m} is the transmit power of user m on sub-channel l and does not change over time. From the SINR of each user on all channels in each time slot, the transmission rate and the transmission delay of the data transfer are calculated.
The transmission rate of user m on sub-channel l in the t-th time slot is r_{l,m}(t) = (W/L)·log2(1 + SINR_{l,m}(t)), where W is the total channel bandwidth of the base station and is divided equally among the L sub-channels; a_{l,m}(t) ∈ {0, 1} is the channel allocation decision variable, and a_{l,m}(t) = 1 means that the base station allocates sub-channel l to user m in the t-th slot. To limit inter-channel interference, each user can be associated with at most one sub-channel, i.e. Σ_l a_{l,m}(t) ≤ 1.
From the wireless transmission rate, the size of each task that remains after each time slot, for each user and each associated sub-channel, is obtained: the remaining size is reduced by the amount of data transmitted in the slot, r_{l,m}(t)·ΔT, where c_{m,k}(t) is the task offloading decision variable of each user and c_{m,k}(t) = 1 means that user m selects a task of the k-type application for transmission in slot t. That is, the remaining size of a task changes only when the base station allocates a sub-channel to the user and the user selects the corresponding application, and the remaining size in the initial slot equals the task generation size b_{m,k,j}. From the evolution of the remaining size, the wireless transmission start slot t^{start}_{m,k,j} and end slot t^{end}_{m,k,j} of every task of the user are obtained.
For the same application of the same user, the tasks are transmitted in FIFO order: the (j+1)-th task of an application can only start transmission after the j-th task has finished, i.e. t^{start}_{m,k,j+1} ≥ t^{end}_{m,k,j}. From this information the waiting delay and the transmission delay of every task of every user are obtained: the waiting delay is the time from the task generation time t^{gen}_{m,k,j} to the start of transmission, and the transmission delay is the time from the start to the end of transmission. The task arrivals of each application obey an arrival rate λ_k, and the spatially non-uniform traffic is modeled by a log-normal distribution.
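A small generator for the task arrival process described above; the Poisson arrival model and the uniform support [0.5·B_k, 1.5·B_k] are assumptions made only for this illustration (the text states an arrival rate λ_k and a uniform size distribution with mean B_k).

import math
import random

def generate_tasks(num_slots, lam_k, mean_size_bits):
    """Generate (generation_slot, size_in_bits) pairs for one application of one user."""
    tasks = []
    for t in range(num_slots):
        for _ in range(_poisson(lam_k)):
            size = random.uniform(0.5 * mean_size_bits, 1.5 * mean_size_bits)  # mean B_k
            tasks.append((t, size))
    return tasks

def _poisson(lam):
    """Knuth's inversion method for a Poisson sample (avoids external dependencies)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= threshold:
            return k - 1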
For the edge server, the task amount of the task queue of application service k in time slot t is Q_k(t). Each task of a user is mapped, at the completion of its transmission, to a task index of the queue: when the j-th task of application k of user m finishes transmission it becomes the n-th task in the task queue of application k, written (m, k) → n. The transmission completion time t^{end}_n of the n-th task is obtained from the transmission delay, and the task amount of the queue at each task's transmission completion time is computed accordingly; for the case n = 0, the task queue amount is Q_k(0) = 0. The task amount of the queue of an application service cannot exceed its maximum length, i.e. Q_k(t) ≤ Q_k^max, where Q_k^max is the maximum length of the task queue serving the k-type application in the edge server. From the queue length, the edge computing delay of each task is obtained, where ν_k = μ_k/F_K is the average per-bit processing time of application service k in the edge server, μ_k is the computation amount required per bit of a k-type application, and F_K is the computing resource the edge server allocates to the k-type application service.
Step two: model the task-queue-aware base station channel allocation and user task unloading problem as an optimization model whose objective is to minimize the average task delay, and convert it into a single-slot optimization objective according to the Lyapunov optimization framework.
The task-queue-aware base station channel allocation and user task unloading problem is the following: when the task queue of one application service in the edge server is heavily backlogged, the application-service load of the edge server can be balanced by allocating channels to users holding tasks of other application types and transmitting the tasks of the idle services first. This shortens the waiting time of tasks in the queue, reduces the computation delay, and therefore reduces the average task delay and improves user satisfaction.
The optimization model whose objective is to minimize the average task delay minimizes the average of the total delays D_{m,k,j} over all tasks, subject to the channel allocation, task offloading and task-queue constraints introduced above, where D_{m,k,j}, the total delay of the j-th task of application k of user m, is the sum of its waiting delay, transmission delay and edge computing delay. The model must also satisfy that the total delay of every task lies within the requirement of its application, i.e. D_{m,k,j} ≤ D_k^max, where D_k^max is the delay requirement of k-type applications.
This model for minimizing the average user delay is then converted into a time-domain optimization objective, which is equivalent to maximizing the cumulative sum, over all time slots, of the task amounts transmitted and processed at the edge, under the same constraints.
Based on Lyapunov optimization theory, the task delay constraint and the task queue constraint are converted into a delay-residual queue and a task-backlog queue: the delay-residual queue tracks, for the j-th task of the k-type application of user m, its remaining execution time, i.e. its delay requirement minus the time already elapsed, and the task-backlog queue tracks the queued task amount Q_k(t) of each application service.
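One plausible reading of the delay-residual quantity is sketched below; the exact update rule defined in the patent is not reproduced here, so this is an illustration only.

def residual_time_s(deadline_s, t_gen_slot, t_now_slot, slot_len_s):
    """Remaining execution time of a task: its delay requirement minus the time
    already elapsed since its generation (illustrative form)."""
    return deadline_s - (t_now_slot - t_gen_slot) * slot_len_s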
The single-slot optimization objective follows from the Lyapunov optimization framework: through the constructed delay-residual queue and task-backlog queue, the time-domain objective is converted slot by slot, i.e. the strategy for the whole time horizon is obtained by solving a channel allocation and task transmission strategy in each individual slot. The per-slot objective maximizes the sum of the utilities U_k(a_t, c_t) of the application services, subject to the per-slot channel allocation and task offloading constraints, where a_t and c_t denote the channel allocation and task offloading decisions of slot t.
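To make the role of the two virtual queues concrete, the following sketch evaluates an illustrative per-slot utility in the drift-plus-penalty spirit: the bits moved in the slot are rewarded, with services holding a small backlog and tasks with little residual time favoured. This functional form is an assumption for illustration only and is not the U_k(a_t, c_t) defined by the patent.

def slot_utility(bits_sent_per_service, backlog_per_service, residual_per_task, V=1.0):
    """Illustrative per-slot utility; V plays the usual Lyapunov trade-off role."""
    utility = 0.0
    for k, sent in bits_sent_per_service.items():
        weight = 1.0 / (1.0 + backlog_per_service.get(k, 0.0))   # idle services get priority
        utility += V * weight * sent
    # Tasks with little residual time add an extra incentive to schedule them now.
    utility += sum(1.0 / (1.0 + max(r, 0.0)) for r in residual_per_task)
    return utility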
Step three: construct a combination set based on a cooperative game, i.e. group the users requesting the same type of application service into combinations, let the users play the game with the objective of maximizing the system utility, and let the combination set finally converge to a stable one.
The combination set is the set Π = {φ_1, φ_2, …, φ_K} of user combinations formed by grouping the users in M that request the same type of application service, with φ_k ∩ φ_{k'} = ∅ for k ≠ k'. When no user in any combination φ_k would join another combination φ_{k'} to change the current partition, the combination set Π is stable.
The cooperative-game-based approach is the following: each user selects one type of task for transmission and, together with the other users in its combination, feeds the task queue of the corresponding application service in the edge server. For user m, ≻_m denotes a complete and transitive preference relation over all combinations user m may join; φ_i ≻_m φ_j means that user m prefers joining combination φ_i to joining combination φ_j. This preference relation governs the formation of the final combination set: the users play the game with each other to form combinations, each user decides whether to join a new combination according to the preference relation, i.e. the combination rule, until all combinations are stable. In a coalition formation game of this kind, the preference order guarantees the existence of a stable combination set.
The combination rule is: user m joins a new combination only when the combined utility after joining is higher than before and its own utility improves, i.e. when user m chooses to join combination φ_i, its own utility increases and the total system utility increases. Here u_m(φ) is the utility of user m under combination φ, U(φ_i ∪ {m}) + U(φ_j \ {m}) is the combined utility of the new combination φ_i and the original combination φ_j after user m joins φ_i, and U(φ_i) + U(φ_j) is their combined utility before user m joins. Since the move of user m only affects the new and the old combination and leaves all other combinations unchanged, it is sufficient to consider the combined utility of these two combinations to determine the effect on the total system utility.
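The combination (transfer) rule reduces to a simple predicate; a sketch with illustrative argument names follows.

def should_transfer(u_m_new, u_m_old, pair_utility_new, pair_utility_old):
    """User m joins the new combination only if its own utility does not decrease
    and the summed utility of the two affected combinations, and hence the total
    system utility (no other combination changes), strictly increases."""
    return u_m_new >= u_m_old and pair_utility_new > pair_utility_old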
The stable convergence of the combination set means that all users, by playing repeatedly according to the combination rule, finally converge to a stable combination set. Specifically:
3.1) Initialize the combination set: the base station randomly allocates channels to the users requesting service, each user randomly selects a task to transmit, and the initial combination set Π is constructed by grouping the users that transmit the same task type.
3.2) Calculate the utility u_m(φ) of user m under its combination, the utility U_k(φ) of each combination, and the total system utility.
3.3) User m selects a combination from the combination set; if user m is not yet associated with a sub-channel, a sub-channel associated with that combination is first allocated to user m, and the own utility, combination utility and total system utility after joining are calculated.
It is then judged whether user m satisfies the following transfer conditions:
a) after user m transfers from its current combination φ_i to combination φ_j, its own utility is not less than its utility before joining;
b) after user m transfers from its current combination φ_i to combination φ_j, the total system utility is greater than that under the original combination set before joining.
When the transfer conditions are satisfied, the combination φ_i is added to a candidate set; when they are not satisfied, another combination is selected to join.
3.4) When the candidate set is not empty, select from user m's candidate combinations the combination φ_opt that maximizes the system utility, and update the new and old combinations after the user joins.
3.5) when the combination of all the users is not changed any more, the game is ended, and a stable combination set is obtained.
And generating a channel allocation and task unloading strategy according to the final stable combination set.
Step four: in each time slot, transmit tasks according to the obtained channel allocation and task unloading strategy, update the delay-residual queue and the task-backlog queue according to the queue update rule, and keep executing step three until all tasks have been transmitted.
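The per-slot control flow of step four can be sketched as follows; the callables are placeholders standing in for steps one to three and are not part of the patent's notation.

def run_slots(num_slots, estimate, play_game, transmit, update_queues, all_done):
    """Per-slot loop: estimate rates/delays, play the combination game, apply the
    resulting allocation and offloading, then update the virtual queues."""
    for t in range(num_slots):
        rates, delays = estimate(t)             # step one
        strategy = play_game(rates, delays)     # step three: stable combination set
        transmit(strategy, t)                   # apply the slot's strategy
        update_queues(strategy, t)              # delay-residual and backlog queues
        if all_done():                          # stop once every task is transmitted
            break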
As shown in Fig. 1, the application scenario of this embodiment contains a base station, an MEC server connected to the base station, 5 sub-channels, 8 users and 3 different types of applications. The users are placed uniformly at random in a simulation area with a radius of 200 m, the total bandwidth is W = 5 MHz, and the average task generation size is B_k = 1 MB. The simulation results obtained with the above method are shown in Figs. 2-6.
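For reference, the stated simulation parameters can be collected in a small Python configuration; the conversion of 1 MB to bits (10^6 bytes assumed) is the only value not taken verbatim from the text.

SIMULATION = {
    "num_subchannels": 5,        # L
    "num_users": 8,              # M
    "num_app_types": 3,          # K
    "cell_radius_m": 200,        # users placed uniformly at random in this area
    "total_bandwidth_hz": 5e6,   # W, shared equally by the sub-channels
    "mean_task_size_bits": 8e6,  # B_k = 1 MB
}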
As shown in Fig. 2, the average task delay of the proposed task-queue-aware real-time channel allocation and task unloading method is compared with a greedy algorithm and a random algorithm under different average task sizes; the proposed method achieves the lowest delay.
As shown in Fig. 3, the percentage of tasks that exceed their maximum delay is compared, for different average task sizes, between the proposed task-queue-aware real-time channel allocation and task unloading method, the greedy algorithm and the random algorithm.
As shown in Fig. 4, the variance of the backlogs of the different application-service task queues in the edge server is compared, for different average task sizes, between the proposed method, the greedy algorithm and the random algorithm.
As shown in Fig. 5, the average task delay as a function of the distance between the users and the base station is compared between the proposed method, the greedy algorithm and the random algorithm.
As shown in Fig. 6, the variance of the backlogs of the different application-service task queues in the edge server, as a function of the distance between the users and the base station, is compared between the proposed method, the greedy algorithm and the random algorithm.
In summary, the method models the base station channel allocation and user task unloading problem as an optimization model that minimizes the average task delay, jointly considering the task backlog of the application-service queues and the residual delay of the tasks. Based on the Lyapunov optimization framework, the time-domain model is converted into a single-slot model; in each slot the users are divided into combinations that play a cooperative game according to game theory, channel allocation and task transmission are performed according to the stably converged combination set, and the delay-residual queue and the task-backlog queue are updated until all tasks are transmitted. The average task delay is thereby significantly reduced.
The foregoing embodiments may be modified in many different ways by those skilled in the art without departing from the spirit and scope of the invention, which is defined by the appended claims and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (10)

1. A task-queue-aware edge computing real-time channel allocation and task unloading method, characterized in that: a channel gain matrix of each time slot and the transmission rate each user can achieve on each sub-channel are generated based on the distance between the user and the base station, and the waiting delay, transmission delay and edge computing delay of each task are calculated according to the task amount of the application-service task queues of the edge server; the task-queue-aware base station channel allocation and user task unloading problem is modeled as an optimization model whose objective is to minimize the average task delay and is converted into a single-slot optimization objective according to the Lyapunov optimization framework; in each time slot, tasks are transmitted according to the obtained channel allocation and task unloading strategy, the delay-residual queue and the task-backlog queue are updated according to the queue update rule, and the combination-set game is carried out until all tasks are transmitted;
the combination-set game is as follows: a combination set is constructed based on a cooperative game, and the users play the game with the objective of maximizing the system utility until the combination set converges to a stable one.
2. The task-queue-aware edge computing real-time channel allocation and task unloading method according to claim 1, characterized in that the waiting delay, transmission delay and edge computing delay of each task are computed by:
Step A-1: obtaining the channel gain h_{l,m}(t) of user m on sub-channel l from the path loss L_l(d_m) of sub-channel l at distance d_m from the base station and a small-scale zero-mean Gaussian fading term g_{l,m}(t), where d_m is the distance from user m to the base station;
Step A-2: computing the received signal-to-interference-plus-noise ratio (SINR) between user m and the base station on sub-channel l from the transmit power p_{l,m} of user m on sub-channel l, which does not change over time, the channel gain, and the thermal noise power variance σ², where, without loss of generality, the users on the l-th sub-channel are ordered by estimated channel gain as h_{l,1}(t) ≥ h_{l,2}(t) ≥ … ≥ h_{l,M}(t);
Step A-3: computing the transmission rate of user m on sub-channel l in the t-th time slot from the SINR, where L is the total number of sub-channels, W is the total channel bandwidth of the base station and is divided equally among the sub-channels, and a_{l,m}(t) is the channel allocation decision variable;
Step A-4: computing the size of each task remaining after the transmissions of each time slot, where c_{m,k}(t) is the task offloading decision variable of each user and ΔT is the duration of each time slot;
Step A-5: computing the waiting delay of each task, i.e. the delay from the generation of the task to the start of its transmission, and its transmission delay, where t^{gen}_{m,k,j} is the generation time of the task and t^{start}_{m,k,j} and t^{end}_{m,k,j} are the start and end slots of the wireless transmission of each task;
Step A-6: with Q_k(t) denoting the task amount of the task queue of application service k in the edge server at time slot t, computing the queue length at the transmission completion time of each task, where t^{end}_n is the transmission completion time of the n-th task and (m, k) → n maps each task of the user, at its transmission completion time, to a task index of the task queue; when n = 0, the task queue amount is Q_k(0) = 0;
Step A-7: computing the edge computing delay of each task, where ν_k = μ_k/F_K is the average per-bit processing time of application service k in the edge server, μ_k is the computation amount required per bit of a k-type application, and F_K is the computing resource the edge server allocates to the k-type application service.
3. The method according to claim 1, characterized in that the single-slot optimization objective is obtained by:
Step B-1: establishing an optimization model whose objective is to minimize the average task delay;
Step B-2: converting the optimization objective into maximizing the cumulative sum, over all time slots, of the transmitted and edge-processed task sizes;
Step B-3: based on the Lyapunov optimization framework, converting the time-domain optimization objective into a single-slot model through the constructed delay-residual queue and task-backlog queue, where U_k(a_t, c_t) is the utility function of each application service.
4. The task-queue-aware edge computing real-time channel allocation and task unloading method according to claim 1, characterized in that the game is played with the objective of maximizing the system utility, specifically:
Step C-1: initializing the combination set: the base station randomly allocates channels to the users requesting service, each user randomly selects a task to transmit, and the initial combination set Π is constructed by grouping the users that transmit the same task type;
Step C-2: calculating the utility u_m(φ) of user m under its combination, the utility U_k(φ) of each combination, and the total system utility;
Step C-3: user m selects a combination from the combination set; if user m is not yet associated with a sub-channel, a sub-channel associated with that combination is first allocated to user m, the own utility, combination utility and total system utility after joining are calculated, and it is judged whether user m satisfies the transfer conditions;
Step C-4: when the transfer conditions are satisfied, the combination φ_i is added to a candidate set; when they are not satisfied, another combination is selected to join;
Step C-5: when the candidate set is not empty, the combination φ_opt that maximizes the system utility is selected from user m's candidate combinations, and the new and old combinations are updated after the user joins;
Step C-6: when no user changes its combination any more, the game ends and a stable combination set is obtained;
Step C-7: the channel allocation and task unloading strategy is generated from the final stable combination set.
5. The method according to claim 4, characterized in that the transfer conditions include:
Step C-3-1: after user m transfers from its current combination φ_i to combination φ_j, its own utility is not less than its utility before joining;
Step C-3-2: after user m transfers from its current combination φ_i to combination φ_j, the total system utility is greater than that under the original combination set before joining.
6. The task-queue-aware edge computing real-time channel allocation and task unloading method according to claim 1 or 3, characterized in that the optimization model for minimizing the average task delay minimizes the average of the total task delays over all tasks, subject to the channel allocation, task offloading, task-queue and delay-requirement constraints, where D_{m,k,j}, the total delay of the j-th task of application k of user m, is the sum of its waiting delay, transmission delay and edge computing delay.
7. The task-queue-aware edge computing real-time channel allocation and task unloading method according to claim 1 or 3, characterized in that the single-slot optimization objective maximizes, in each slot, the sum of the utilities U_k(a_t, c_t) of the application services, subject to the per-slot channel allocation and task offloading constraints, where U_k(a_t, c_t) is the utility function of each application service and a_t and c_t are the channel allocation and task offloading decisions of slot t.
8. The task-queue-aware edge computing real-time channel allocation and task unloading method according to claim 1, characterized in that the cooperative-game-based approach is: each user selects one type of task for transmission and, together with the other users in its combination, feeds the task queue of the corresponding application service in the edge server; specifically, for user m, ≻_m denotes a complete and transitive preference relation over all combinations user m may join, and φ_i ≻_m φ_j means that user m prefers joining combination φ_i to joining combination φ_j; this preference relation governs the formation of the final combination set: the users play the game with each other to form combinations, each user decides whether to join a new combination according to the preference relation, i.e. the combination rule, until all combinations are stable, and in the coalition formation game the preference order guarantees the existence of a stable combination set;
the combination rule is: user m joins a new combination only when the combined utility after joining is higher than before and its own utility improves, i.e. when user m chooses to join combination φ_i, its own utility increases and the total system utility increases, where u_m(φ) is the utility of user m under combination φ, U(φ_i ∪ {m}) + U(φ_j \ {m}) is the combined utility of the new combination φ_i and the original combination φ_j after user m joins, and U(φ_i) + U(φ_j) is their combined utility before user m joins; since the move of user m only affects the new and the old combination and leaves the other combinations unchanged, it is sufficient to consider the combined utility of these two combinations to determine the effect on the total system utility.
9. The task-queue-aware edge computing real-time channel allocation and task unloading method according to claim 8, characterized in that the stable convergence of the combination set is achieved by all users playing repeatedly according to the combination rule until they converge to a stable combination set, specifically:
3.1) initializing the combination set: the base station randomly allocates channels to the users requesting service, each user randomly selects a task to transmit, and the initial combination set Π is constructed by grouping the users that transmit the same task type;
3.2) calculating the utility u_m(φ) of user m under its combination, the utility U_k(φ) of each combination, and the total system utility;
3.3) user m selects a combination from the combination set; if user m is not yet associated with a sub-channel, a sub-channel associated with that combination is first allocated to user m, and the own utility, combination utility and total system utility after joining are calculated;
it is then judged whether user m satisfies the following transfer conditions:
a) after user m transfers from its current combination φ_i to combination φ_j, its own utility is not less than its utility before joining;
b) after user m transfers from its current combination φ_i to combination φ_j, the total system utility is greater than that under the original combination set before joining;
when the transfer conditions are satisfied, the combination φ_i is added to a candidate set; when they are not satisfied, another combination is selected to join;
3.4) when the candidate set is not empty, the combination φ_opt that maximizes the system utility is selected from user m's candidate combinations, and the new and old combinations are updated after the user joins;
3.5) when no user changes its combination any more, the game ends and a stable combination set is obtained.
10. A system for implementing the task-queue-aware edge computing real-time channel allocation and task unloading method of any one of claims 1 to 9, comprising a data processing unit, a channel allocation and task unloading decision unit, and a data updating unit, wherein: the data processing unit computes the transmission rate each user can achieve on each sub-channel from the channel gain information and the user task information, and computes the waiting delay, transmission delay and edge computing delay of each task from the task amount of each application-service task queue of the edge server; the channel allocation and task unloading decision unit runs the combination game among the users, based on the delay information and the task queue state of the user tasks, to obtain a stable channel allocation and task unloading strategy; and the data updating unit updates the delay-residual queue and the task-backlog queue according to the current channel allocation and task unloading strategy.
CN202210058397.4A 2022-01-19 2022-01-19 Task queue aware edge computing real-time channel allocation and task unloading method Pending CN114375058A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210058397.4A CN114375058A (en) 2022-01-19 2022-01-19 Task queue aware edge computing real-time channel allocation and task unloading method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210058397.4A CN114375058A (en) 2022-01-19 2022-01-19 Task queue aware edge computing real-time channel allocation and task unloading method

Publications (1)

Publication Number Publication Date
CN114375058A true CN114375058A (en) 2022-04-19

Family

ID=81143204

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210058397.4A Pending CN114375058A (en) 2022-01-19 2022-01-19 Task queue aware edge computing real-time channel allocation and task unloading method

Country Status (1)

Country Link
CN (1) CN114375058A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115002409A (en) * 2022-05-20 2022-09-02 天津大学 Dynamic task scheduling method for video detection and tracking
CN115002409B (en) * 2022-05-20 2023-07-28 天津大学 Dynamic task scheduling method for video detection and tracking

Similar Documents

Publication Publication Date Title
CN111586720B (en) Task unloading and resource allocation combined optimization method in multi-cell scene
CN108541027B (en) Communication computing resource replacement method based on edge cloud network
CN111182495B (en) 5G internet of vehicles partial calculation unloading method
CN109194763B (en) Caching method based on small base station self-organizing cooperation in ultra-dense network
CN110505644B (en) User task unloading and resource allocation joint optimization method
CN111800812B (en) Design method of user access scheme applied to mobile edge computing network of non-orthogonal multiple access
CN107396448B (en) Resource allocation method in heterogeneous network
CN110233755B (en) Computing resource and frequency spectrum resource allocation method for fog computing in Internet of things
CN111182570A (en) User association and edge computing unloading method for improving utility of operator
Zhao et al. Task proactive caching based computation offloading and resource allocation in mobile-edge computing systems
CN114363984B (en) Cloud edge collaborative optical carrier network spectrum resource allocation method and system
CN111953547B (en) Heterogeneous base station overlapping grouping and resource allocation method and device based on service
CN113407249B (en) Task unloading method facing to position privacy protection
CN112969163B (en) Cellular network computing resource allocation method based on self-adaptive task unloading
CN111711666A (en) Internet of vehicles cloud computing resource optimization method based on reinforcement learning
CN116744311B (en) User group spectrum access method based on PER-DDQN
CN112860429A (en) Cost-efficiency optimization system and method for task unloading in mobile edge computing system
CN108449149B (en) Energy acquisition small base station resource allocation method based on matching game
Zhang et al. Game-theory based power and spectrum virtualization for optimizing spectrum efficiency in mobile cloud-computing wireless networks
CN114375058A (en) Task queue aware edge computing real-time channel allocation and task unloading method
CN115103326A (en) Internet of vehicles task unloading and resource management method and device based on alliance game
Wang et al. Joint service caching, resource allocation and computation offloading in three-tier cooperative mobile edge computing system
CN111526526B (en) Task unloading method in mobile edge calculation based on service mashup
CN111954230B (en) Computing migration and resource allocation method based on integration of MEC and dense cloud access network
Li Optimization of task offloading problem based on simulated annealing algorithm in MEC

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination