CN115858048B - Hybrid critical task oriented dynamic arrival edge unloading method - Google Patents

Hybrid critical task oriented dynamic arrival edge unloading method Download PDF

Info

Publication number
CN115858048B
CN115858048B CN202310194760.XA CN202310194760A CN115858048B CN 115858048 B CN115858048 B CN 115858048B CN 202310194760 A CN202310194760 A CN 202310194760A CN 115858048 B CN115858048 B CN 115858048B
Authority
CN
China
Prior art keywords
task
tasks
representing
resource scheduling
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310194760.XA
Other languages
Chinese (zh)
Other versions
CN115858048A (en
Inventor
沈艳
杨骞云
邓添
胡辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu University of Information Technology
Original Assignee
Chengdu University of Information Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu University of Information Technology filed Critical Chengdu University of Information Technology
Priority to CN202310194760.XA priority Critical patent/CN115858048B/en
Publication of CN115858048A publication Critical patent/CN115858048A/en
Application granted granted Critical
Publication of CN115858048B publication Critical patent/CN115858048B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a dynamic arrival edge unloading method for a mixed critical-level task, which relates to the field of mobile edge calculation and comprises the following steps: acquiring tasks generated by a system, and classifying the key levels of the tasks; respectively calculating system resources required by the task completion in the local and the completion in the server; establishing a resource scheduling model according to the system resources required by the local completion of the task and the key level of the task completed by the server, and acquiring an optimal allocation mechanism; and obtaining unloading and task unloading schemes according to the optimal allocation mechanism. In order to increase the safety of system operation, reduce the system execution risk, effectively avoid damaging accidents, comprehensively consider the time delay of tasks and the system energy consumption and the criticality of the tasks, and effectively manage the tasks with different criticalities; task execution with high key level can be preferentially selected, interruption of task unloading service caused by mobility of a user is avoided, and service quality of a mobile system is improved.

Description

Hybrid critical task oriented dynamic arrival edge unloading method
Technical Field
The invention relates to the field of mobile edge calculation; in particular to a dynamic arrival edge unloading method for a mixed critical-level task.
Background
The short-distance, ultra-low delay, high bandwidth and other characteristics of mobile edge calculation, the research of mobile edge calculation has received high attention in recent years, and in particular, in the aspect of task unloading, different solutions are proposed according to different requirements and application scenes. The current common offloading strategies are divided into 3 main types according to the performance requirements of computational offloading, namely minimizing time delay, minimizing energy consumption and maximizing benefit.
The minimum delay scheme aims at different services and provides a distributed task scheduling strategy estimation delay time with obvious delay by minimizing end-to-end service delay and task completion time, so that service quality is improved. The prior art aims at realizing the purpose of preferentially reducing the execution time of the computing task with high real-time requirement, and sets the real-time priority of the task. And determining that the task is executed at the edge or the cloud according to the priority level. Meanwhile, in order to ensure that all tasks can be completed within required time, the length of the queue can be correspondingly changed according to the arrival condition of the tasks, so that the situation that the queue is too long and the tasks are trapped in long-time waiting is avoided. An excitation function is designed, and an online learning method is provided based on a deep reinforcement learning mechanism. The method effectively reduces the average delay time in the task queue. Taking mean delay and random delay jitter into account, an improved heterogeneous earliest completion time strategy is proposed that utilizes kernel density estimation to solve the problem of maximizing tolerant delay minimization. An edge server deployment scheme using Mixed-Integer linear programming (Mixed-inter LinearProgramming, MILP) technology is proposed that can optimize both the workload of the edge server and the response delay of the user. The scheme with the minimum time delay can reduce the execution time of the task and reduce the time delay, but the too fast energy consumption of the mobile terminal can cause that the corresponding unloading strategy cannot be used normally.
The method is suitable for an efficient energy-saving computing and unloading algorithm for performing all computing and unloading on the multiple mobile devices. Considering that the weighted sum of the energy consumption and the time delay is minimum, assuming that the computing capacity of the server is a fixed constant, taking into account the allocation of wireless resources for the purpose of energy conservation, classifying the tasks according to the time delay of the tasks, the wireless resource requirement and the weight of the energy consumption, and giving priority to the tasks to finish unloading. The computational offload strategy that minimizes energy consumption is an algorithm that seeks to minimize energy consumption while meeting the latency constraints of the mobile terminal. But it may not necessarily be necessary to minimize latency or minimize energy consumption during the actual unloading process.
Maximum profitability proposes a gaming strategy for computing offload for multiple mobile terminals computing all offload. The strategy sets weighing parameters as indexes of calculation and unloading, and designs a distributed calculation and unloading algorithm capable of realizing Nash equilibrium so as to balance between equipment energy consumption and calculation time delay, thereby realizing maximization of user benefits. Maximum profitability considers the energy consumption of the mobile terminal and the dead limit of a calculation task, and two approximation algorithms are respectively provided for a single user and multiple users in a dead limit sensitive MEC system to minimize the energy consumption and the time delay. The literature proposes a new energy-efficient deep learning-based offloading scheme to train a deep learning-based intelligent decision algorithm. The algorithm selects an optimal set of application components based on the remaining energy of the user, the energy consumption of the application components, network conditions, computational load, data transfer volume, and communication delay. The prior art includes utilizing deep reinforcement learning to provide a task offloading decision to solve fine granularity, thereby reducing delay, cost, energy consumption and network utilization of an MEC platform; and according to the data volume of the calculation task of the mobile user and the performance characteristics of the edge calculation node, combining with an artificial intelligence technology, a calculation unloading and task migration algorithm based on task prediction is provided so as to obtain the maximum benefit. The calculation unloading strategy for maximizing the benefits essentially analyzes the influence of two indexes, namely time delay and energy consumption, on the total consumption of calculation unloading to find a balance point so that the limit setting of the time delay or the energy consumption is more suitable for the actual scene, thereby achieving the purpose of minimizing the total cost, namely maximizing the benefits. However, in the prior art, though the priority of the executed task is considered, the executed task is regarded as the task with the same criticality, and the influence of the execution of the tasks with different criticalities on the system is not considered, so that the task with high criticality cannot be executed in time in the switching of the security level of the system and the dead limit of the execution of the task is missed. On the other hand, the mobility of the user makes the algorithm neglect the interruption of the task unloading service caused by the task unloading process, so that the failure of the task unloading is caused, and the service quality of the mobile system is reduced.
Disclosure of Invention
Aiming at the defects in the prior art, the dynamic arrival edge unloading method for the mixed critical-level task solves the problem that the prior art does not consider the influence of execution of tasks with different criticality on a system and the problem that the task unloading service is interrupted due to the mobility of a user.
In order to achieve the aim of the invention, the invention adopts the following technical scheme: a dynamic arrival edge unloading method for a mixed critical-level task comprises the following steps:
s1, acquiring tasks generated by a system, and classifying the tasks in a key level;
s2, respectively calculating system resources required by the local completion of tasks and the completion of the tasks on a server;
s3, establishing a resource scheduling model according to the system resources and the key levels of the tasks required by the local completion of the tasks and the completion of the tasks at the server, and acquiring an optimal allocation mechanism;
s4, obtaining the dynamic arrival edge unloading method of the mixed critical-level task according to the optimal allocation mechanism.
Further, the specific implementation manner of step S1 is as follows:
s1-1, the system is in the current time slottWith probability
Figure SMS_1
Creating a task->
Figure SMS_2
Figure SMS_3
wherein ,
Figure SMS_4
data quantity representing a computing task +.>
Figure SMS_5
Representing the number of CPU cycles required per bit of data for the calculation task to calculate;
Figure SMS_6
Representing the maximum tolerated time delay of the task;
Figure SMS_7
Representing a key level of a task;
Figure SMS_8
Representation definition;
s1-2, dividing the key level of the task according to the importance of the task by
Figure SMS_9
And (3) representing.
Further, the specific implementation manner of step S2 is as follows:
s2-1, according to the formula:
Figure SMS_10
obtaining the rate of task transmission to server uplink communication
Figure SMS_11
; wherein ,
Figure SMS_12
Is the bandwidth of the channel, ">
Figure SMS_13
Is the channel gain between the terminal device and the base station, < >>
Figure SMS_14
Is the power of Gaussian white noise, +.>
Figure SMS_15
Is the transmit power allocated by the local device;
s2-2, according to the formula:
Figure SMS_16
Figure SMS_17
obtaining energy consumption generated in the process of executing tasks on local equipment
Figure SMS_18
And the time delay generated by the local calculation of the task +.>
Figure SMS_19
; wherein ,
Figure SMS_20
Representing the computing power provided by the local device to the task;
Figure SMS_21
Is the energy consumption coefficient determined by the CPU architecture;
Figure SMS_22
Conditions for successful task execution;
Figure SMS_23
The waiting time for the task to be locally queued;
s2-3, according to the formula:
Figure SMS_24
Figure SMS_25
Figure SMS_26
Figure SMS_27
Figure SMS_28
obtaining time delay required by task unloading to MEC to complete calculation
Figure SMS_30
and
Figure SMS_34
The energy consumption of the moment of arrival at the MEC server for offloading tasks is +.>
Figure SMS_38
; wherein ,
Figure SMS_32
For offloading tasks from local equipment to MEC serversThe generated transmission delay;
Figure SMS_36
Uploading the task to an MEC server to calculate the energy consumption generated in the process;
Figure SMS_39
Energy consumption generated by communication transmission in the process of unloading the task to the MEC;
Figure SMS_41
Queuing waiting time for uploading MEC server;
Figure SMS_29
A queuing time period for execution on the MEC server;
Figure SMS_33
Conditions for successful task execution;
Figure SMS_37
Is computing resource->
Figure SMS_40
The corresponding energy consumption coefficient;
Figure SMS_31
Representing the task as being computed locally;
Figure SMS_35
representing the task being calculated on the MEC server.
Further, the specific implementation manner of step S3 is as follows:
s3-1, according to the formula:
Figure SMS_42
Figure SMS_43
Figure SMS_44
Figure SMS_45
Figure SMS_46
Figure SMS_47
Figure SMS_48
obtaining a resource scheduling model; wherein,
Figure SMS_54
representing a maximum number of time slots;
Figure SMS_52
Is a Boolean function when->
Figure SMS_57
True time
Figure SMS_53
On the contrary->
Figure SMS_58
Figure SMS_62
Indicating that the task is completed locally and successfully executed;
Figure SMS_65
Indicating that the task is successfully unloaded and executed at the MEC server;
Figure SMS_61
Representing energy consumption generated in the task completion process;
Figure SMS_64
Is the energy consumption coefficient>
Figure SMS_49
Is the energy consumption generated by completing the task;
Figure SMS_55
Coefficients determined for the key level;C1 denotes that the whole model is a discrete time model consisting of time slots;C2 and C3 represents a decision, by 0 and 1, whether the decision type is selected;C4 is the condition that the task is successfully executed, i.e. the time length of the completion of the task is less than the maximum tolerance time delay +.>
Figure SMS_51
C5 indicates that the computing power requirement of the MEC server assigned to the task is less than +.>
Figure SMS_56
C6 indicates that the transmission power allocated to the task is smaller than the maximum transmission power
Figure SMS_59
FRepresenting the frequency;
Figure SMS_63
Representing the latency of the MEC server;
Figure SMS_50
Representing the execution time of the MEC server;
Figure SMS_60
representing local latency;lodrespectively representing the local execution of the task, the execution of the task at the server and the discarding of the task;
s3-2, according to the formula:
Figure SMS_66
obtaining the state of a resource scheduling model
Figure SMS_67
; wherein ,
Figure SMS_68
Representing the time length of local queuing of tasks;
Figure SMS_69
representing the queuing time of uploading the MEC server;
Figure SMS_70
Representing a queuing time period executing on the MEC server;
s3-3, according to the formula:
Figure SMS_71
actions to get resource scheduling model
Figure SMS_72
; wherein ,
Figure SMS_73
Representing an execution strategy of the task;
Figure SMS_74
Virtual deadline factor, ">
Figure SMS_75
The value of (2) is +.>
Figure SMS_76
Figure SMS_77
The task is uploaded to the computing resources allocated on the MEC server;
Figure SMS_78
Is the transmit power;
s3-4, according to the formula:
Figure SMS_79
rewarding function for obtaining resource scheduling model
Figure SMS_80
; wherein ,
Figure SMS_81
Applying different levels of reward coefficients to tasks of different key levels according to +.>
Figure SMS_82
Determining a prize value for the prize;successindicating that the task is successfully executed;failureindicating that the task fails to execute;
s3-5, training the resource scheduling model through randomly generating tasks according to the obtained reward function of the resource scheduling model, the action of the resource scheduling model and the state of the resource scheduling model to obtain a trained resource scheduling model;
s3-6, obtaining an optimal allocation mechanism according to the trained resource scheduling model.
Further, the specific implementation manner of the step S3-5 is as follows:
s3-5-1, initializing a playback memory unit D, a value neural network Q and a target neural network in a depth Q network DQN
Figure SMS_83
The method comprises the steps of carrying out a first treatment on the surface of the Network weights of a randomly selected value network>
Figure SMS_84
And target network->
Figure SMS_85
The network weight is +.>
Figure SMS_86
; wherein ,
Figure SMS_87
S3-5-2, obtaining the initial state of the environment
Figure SMS_88
Generating a random number of 0-1 during the time slot interval, and judging whether the random number is smaller than a preset threshold value or not; if yes, randomly generating task with probability +.>
Figure SMS_89
Select random action->
Figure SMS_90
The method comprises the steps of carrying out a first treatment on the surface of the Otherwise, let
Figure SMS_91
S3-5-3, calculating a reward function
Figure SMS_92
The state of the resource scheduling model is converted into s #t+1) and stores the state transition procedure +.>
Figure SMS_93
To the playback memory unit D;
s3-5-4, according to the formula:
Figure SMS_94
obtaining a random sample of the original data from the playback memory unit D
Figure SMS_96
Training results->
Figure SMS_99
; wherein ,
Figure SMS_101
Is a new action;krepresenting the sample number;Krepresenting the training times; first, thekStatus of resource scheduling model under sample +.>
Figure SMS_97
The method comprises the steps of carrying out a first treatment on the surface of the First, thekAction of resource scheduling model under individual samples +.>
Figure SMS_98
The method comprises the steps of carrying out a first treatment on the surface of the First, thekRewarding function of resource scheduling model under individual samples +.>
Figure SMS_100
The method comprises the steps of carrying out a first treatment on the surface of the First, thekState of resource scheduling model under +1 samples +.>
Figure SMS_102
Figure SMS_95
Is a discount factor;
s3-5-5, according to the formula:
Figure SMS_103
obtaining a loss functionLThe method comprises the steps of carrying out a first treatment on the surface of the Wherein, the value neural network
Figure SMS_104
The weight of (2) is +.>
Figure SMS_105
The method comprises the steps of carrying out a first treatment on the surface of the Target neural network->
Figure SMS_106
The weight of (2) is +.>
Figure SMS_107
S3-5-6, learning rate and loss function are usedLGradient descent update value neural network
Figure SMS_108
Weights of (2);
s3-5-7, step length is setCAnd spacing step lengthCWhen the time is over, the parameters of the value neural network are assigned to the target neural network, and the current time slot state update is completed;
s3-5-8, repeating steps S3-5-1 to S3-5-7 until a given training number and time interval is reached.
The beneficial effects of the invention are as follows: the invention establishes a system model according to the success rate of the task and the total energy consumption, and simultaneously provides a system resource dynamic adjustment mechanism and a virtual deadline scheduling mechanism for the resource scheduling problem after unloading in order to ensure the success rate of the task and ensure the execution of the high-key-level task; task execution with high key level can be preferentially selected, and interruption of task unloading service caused by mobility of a user is avoided, so that the service quality of a mobile system is improved; the safety of system operation is improved, the system execution risk is reduced, and damage accidents are effectively avoided.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of the structure of the present invention;
FIG. 3 is a graph showing the comparison of the success rate of different tasks.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding of the present invention by those skilled in the art, but it should be understood that the present invention is not limited to the scope of the embodiments, and all the inventions which make use of the inventive concept are protected by the spirit and scope of the present invention as defined and defined in the appended claims to those skilled in the art.
As shown in fig. 1, a dynamic arrival edge unloading method for a hybrid critical-level-oriented task includes the following steps:
s1, acquiring tasks generated by a system, and classifying the tasks in a key level;
s2, respectively calculating system resources required by the local completion of tasks and the completion of the tasks on a server;
s3, establishing a resource scheduling model according to the system resources and the key levels of the tasks required by the local completion of the tasks and the completion of the tasks at the server, and acquiring an optimal allocation mechanism;
s4, obtaining the dynamic arrival edge unloading method of the mixed critical-level task according to the optimal allocation mechanism.
The specific implementation manner of the step S1 is as follows:
s1-1, the system is in the current time slottWith probability
Figure SMS_109
Creating a task->
Figure SMS_110
:/>
Figure SMS_111
wherein ,
Figure SMS_112
data quantity representing a computing task +.>
Figure SMS_113
Representing the number of CPU cycles required per bit of data for the calculation task to calculate;
Figure SMS_114
Representing the maximum tolerated time delay of the task;
Figure SMS_115
Representing a key level of a task;
Figure SMS_116
Representation definition;
s1-2, dividing the key level of the task according to the importance of the task by
Figure SMS_117
And (3) representing.
The specific implementation manner of the step S2 is as follows:
s2-1, according to the formula:
Figure SMS_118
get task transmission to serverRoad communication rate
Figure SMS_119
; wherein ,
Figure SMS_120
Is the bandwidth of the channel, ">
Figure SMS_121
Is the channel gain between the terminal device and the base station, < >>
Figure SMS_122
Is the power of Gaussian white noise, +.>
Figure SMS_123
Is the transmit power allocated by the local device;
s2-2, according to the formula:
Figure SMS_124
Figure SMS_125
obtaining energy consumption generated in the process of executing tasks on local equipment
Figure SMS_126
And the time delay generated by the local calculation of the task +.>
Figure SMS_127
; wherein ,
Figure SMS_128
Representing the computing power provided by the local device to the task;
Figure SMS_129
Is the energy consumption coefficient determined by the CPU architecture;
Figure SMS_130
Conditions for successful task execution;
Figure SMS_131
The waiting time for the task to be locally queued;
s2-3, according to the formula:
Figure SMS_132
Figure SMS_133
Figure SMS_134
Figure SMS_135
Figure SMS_136
obtaining time delay required by task unloading to MEC to complete calculation
Figure SMS_138
and
Figure SMS_143
The energy consumption of the moment of arrival at the MEC server for offloading tasks is +.>
Figure SMS_147
; wherein ,
Figure SMS_140
Transmission delays created for offloading tasks from the local device to the MEC server;
Figure SMS_142
Uploading the task to an MEC server to calculate the energy consumption generated in the process;
Figure SMS_146
Energy consumption generated by communication transmission in the process of unloading the task to the MEC;
Figure SMS_149
Queuing waiting time for uploading MEC server;
Figure SMS_137
A queuing time period for execution on the MEC server;
Figure SMS_141
Conditions for successful task execution;
Figure SMS_145
Is computing resource->
Figure SMS_148
The corresponding energy consumption coefficient;
Figure SMS_139
Representing the task as being computed locally;
Figure SMS_144
representing the task being calculated on the MEC server.
The specific implementation manner of the step S3 is as follows:
s3-1, according to the formula:
Figure SMS_150
Figure SMS_151
Figure SMS_152
Figure SMS_153
Figure SMS_154
Figure SMS_155
Figure SMS_156
obtaining a resource scheduling model; wherein,
Figure SMS_169
representing a maximum number of time slots;
Figure SMS_159
Is a Boolean function when->
Figure SMS_163
True time
Figure SMS_162
On the contrary->
Figure SMS_165
Figure SMS_171
Indicating that the task is completed locally and successfully executed;
Figure SMS_173
Indicating that the task is successfully unloaded and executed at the MEC server;
Figure SMS_160
Representing energy consumption generated in the task completion process;
Figure SMS_167
Is the energy consumption coefficient>
Figure SMS_161
Is the energy generated by completing the taskConsumption;
Figure SMS_166
Coefficients determined for the key level;C1 denotes that the whole model is a discrete time model consisting of time slots;C2 and C3 represents a decision, by 0 and 1, whether the decision type is selected;C4 is the condition that the task is successfully executed, i.e. the time length of the completion of the task is less than the maximum tolerance time delay +.>
Figure SMS_157
C5 indicates that the computing power requirement of the MEC server assigned to the task is less than +.>
Figure SMS_164
C6 indicates that the transmission power allocated to the task is smaller than the maximum transmission power
Figure SMS_170
FRepresenting the frequency;
Figure SMS_172
Representing the latency of the MEC server;
Figure SMS_158
Representing the execution time of the MEC server;
Figure SMS_168
representing local latency;lodrespectively representing the local execution of the task, the execution of the task at the server and the discarding of the task;
s3-2, according to the formula:
Figure SMS_174
obtaining the state of a resource scheduling model
Figure SMS_175
; wherein ,
Figure SMS_176
Representing the time length of local queuing of tasks;
Figure SMS_177
representing the queuing time of uploading the MEC server;
Figure SMS_178
Representing a queuing time period executing on the MEC server;
s3-3, according to the formula:
Figure SMS_179
actions to get resource scheduling model
Figure SMS_180
; wherein ,
Figure SMS_181
Representing an execution strategy of the task;
Figure SMS_182
Virtual deadline factor, ">
Figure SMS_183
The value of (2) is +.>
Figure SMS_184
Figure SMS_185
The task is uploaded to the computing resources allocated on the MEC server;
Figure SMS_186
Is the transmit power;
s3-4, according to the formula:
Figure SMS_187
rewarding function for obtaining resource scheduling model
Figure SMS_188
; wherein ,
Figure SMS_189
Applying different levels of reward coefficients to tasks of different key levels according to +.>
Figure SMS_190
Determining a prize value for the prize;successindicating that the task is successfully executed;failureindicating that the task fails to execute;
s3-5, training the resource scheduling model through randomly generating tasks according to the obtained reward function of the resource scheduling model, the action of the resource scheduling model and the state of the resource scheduling model to obtain a trained resource scheduling model;
s3-6, obtaining an optimal allocation mechanism according to the trained resource scheduling model.
The specific implementation mode of the step S3-5 is as follows:
s3-5-1, initializing a playback memory unit D, a value neural network Q and a target neural network in a depth Q network DQN
Figure SMS_191
The method comprises the steps of carrying out a first treatment on the surface of the Network weights of a randomly selected value network>
Figure SMS_192
And target network->
Figure SMS_193
The network weight is +.>
Figure SMS_194
; wherein ,
Figure SMS_195
S3-5-2, obtaining the initial state of the environment
Figure SMS_196
Generating a random number of 0-1 during the time slot interval, and determining the random number asWhether it is less than a predetermined threshold; if yes, randomly generating task with probability +.>
Figure SMS_197
Select random action->
Figure SMS_198
The method comprises the steps of carrying out a first treatment on the surface of the Otherwise, let
Figure SMS_199
S3-5-3, calculating a reward function
Figure SMS_200
The state of the resource scheduling model is converted into s #t+1) and stores the state transition procedure +.>
Figure SMS_201
To the playback memory unit D;
s3-5-4, according to the formula:
Figure SMS_202
obtaining a random sample of the original data from the playback memory unit D
Figure SMS_204
Training results->
Figure SMS_207
; wherein ,
Figure SMS_209
Is a new action;krepresenting the sample number;Krepresenting the training times; first, thekStatus of resource scheduling model under sample +.>
Figure SMS_205
The method comprises the steps of carrying out a first treatment on the surface of the First, thekAction of resource scheduling model under individual samples +.>
Figure SMS_206
The method comprises the steps of carrying out a first treatment on the surface of the First, thekRewarding function of resource scheduling model under individual samples +.>
Figure SMS_208
The method comprises the steps of carrying out a first treatment on the surface of the First, thekState of resource scheduling model under +1 samples +.>
Figure SMS_210
Figure SMS_203
Is a discount factor;
s3-5-5, according to the formula:
Figure SMS_211
obtaining a loss functionLThe method comprises the steps of carrying out a first treatment on the surface of the Wherein, the value neural network
Figure SMS_212
The weight of (2) is +.>
Figure SMS_213
The method comprises the steps of carrying out a first treatment on the surface of the Target neural network->
Figure SMS_214
The weight of (2) is +.>
Figure SMS_215
S3-5-6, learning rate and loss function are usedLGradient descent update value neural network
Figure SMS_216
Weights of (2);
s3-5-7, step length is setCAnd spacing step lengthCWhen the time is over, the parameters of the value neural network are assigned to the target neural network, and the current time slot state update is completed;
s3-5-8, repeating steps S3-5-1 to S3-5-7 until a given training number and time interval is reached.
As shown in fig. 2, there is a single user equipment and a single MEC in the system, the system uses a discrete time model, and the time slot length is
Figure SMS_217
The current time slot is +.>
Figure SMS_218
. Each time slot starts with probability +.>
Figure SMS_219
A computing task is generated that can be executed either at the local device or off-loaded onto the MEC server via a wireless channel. After the task arrives at the system, the system needs to make an unloading decision for the task and allocate system resources for the task. The resources provided by the system can only execute a single task at the same time, and if the system resources allocated by the newly released task are occupied, the task can be completed after the system resources are released.
As shown in fig. 3, a comparison experiment is performed on different algorithms, and task arrival probabilities are started at each time slot
Figure SMS_220
In smaller cases, the success rate of the task can reach 100%. Along with->
Figure SMS_221
All tasks have a decreasing success rate because the increased density of tasks results in insufficient system resources and some tasks waste a lot of time in the queue. Because the invention combines the dynamic resource lifting and the virtual deadline scheduling, the success rate of task execution can be effectively ensured to be always higher than other methods, even if at +.>
Figure SMS_222
When the success rate is increased to 1, the success rate is 33% higher than that of DQN-FCFS. DQN-FCFS represents a first come first serve offload strategy based on DQN; all-Local represents All Local policies; random represents a Random offload policy; greedy represents Greedy algorithm; all-Edge represents the overall offload policy; edge-First represents an offload priority policy; local-First represents the Local priority policy.
In one embodiment of the present invention, when the system performs scheduling in a first come first serve mode, the queuing time is as follows:
waiting time of task on local device:
Figure SMS_223
wherein ,
Figure SMS_224
is time->
Figure SMS_225
The time length of the arrival task calculated locally;
waiting time of task uploading on channel:
Figure SMS_226
wherein ,
Figure SMS_227
is time->
Figure SMS_228
Reaching the duration of the task calculated on the MEC server;
calculation waiting time of tasks on MEC server:
Figure SMS_229
wherein ,
Figure SMS_230
is time->
Figure SMS_231
The time duration of the task calculated on the MEC server is reached.
Transmit power adjustment mechanism: transmitting power of user equipment
Figure SMS_232
The dynamic adjustment can be carried out, and when the key level of the task is low and the data volume is small, smaller transmitting power is allocated to the task so as to reduce the energy consumption of the system; when the importance degree of the task is higher and the data volume is large, larger transmitting power is distributed so as to improve the communication rate and ensure that the task can be completed in time;
computing resource mechanism allocated by MEC server: for tasks with high key level or large data volume, the MEC system can allocate more MEC computing power to reduce computing time, so that the problem that the follow-up tasks are influenced by overtime of execution of the tasks with high key level or excessively long occupation of computing resources by the tasks with large data volume is avoided.
Virtual deadline mechanism: the virtual deadline is calculated according to the current system resource utilization rate, and the task with higher mixed key level has the closer virtual deadline, so that the task with higher key level is ensured to be executed preferentially;
Figure SMS_233
as virtual deadline factor, virtual deadline factor of low-critical task +.>
Figure SMS_234
Always 1, the system calculates the ++f of the high-key-level task according to the current system resource utilization rate>
Figure SMS_235
In order to verify the influence of success rates of different key tasks of the proposed algorithm, the following comparison tests are set, and the proposed algorithm is compared with the following heuristic strategies:
1) All local (all): the tasks are all calculated locally. And adding the task into a local computing queue in the FCFS mode under the condition that the task can meet the deadline. If the deadline of the task cannot be met, the task is directly discarded when arriving.
2) Full unload (AllO): the task completely offloads the edge server computation. And adding the task into a local computing queue in the FCFS mode under the condition that the task can meet the deadline. If the deadline of the task cannot be met, the task is directly discarded when arriving.
3) Random unload (Random): when the task arrives at the system, if it can be performed at the local and MEC ends, it is randomly selected whether to compute the offload locally or at the MEC. Tasks may also be discarded if they are low critical tasks.
4) Greedy algorithm (Greedy): and selecting a scheme with minimum task completion cost in local calculation and unloading calculation in a greedy manner.
5) Local priority (FirstL): and (3) selecting local calculation preferentially, and unloading the task to the MEC for calculation if the local calculation cannot meet the time delay requirement of the task.
6) Unloading priority (FirstO): task is preferably unloaded to MEC computing, and if the computing unloading cannot meet the task requirements, the task is tried to be locally computed.
7) The DQN-based first come first serve offload strategy (DQN-FCFS) that performs offload decisions and resource allocation based on the DQN, but the queue queuing approach uses a first come first serve scheduling algorithm.
Taking the average value of task success rates after 1000 experiments on strategies 1) to 6), training the strategy 7) and the invention for a period of time to obtain a model, and learning the invention
Figure SMS_236
Set to 0.001, rewarding coefficient of high-key task
Figure SMS_237
Set to 4, energy consumption coefficient +.>
Figure SMS_238
Setting the success rate to 0.001, carrying out 1000 times of data testing, taking the average value of the success rates, and obtaining the success rates of all tasks, wherein the success rates of the low-key-level tasks are shown in a table 1, and the success rates of the high-key tasks are shown in a table 2.
TABLE 1
Figure SMS_239
TABLE 2
Figure SMS_240
In order to increase the safety of system operation, reduce the risk of system execution and effectively avoid damaging accidents, in the task unloading research of MEC, the time delay of the tasks, the energy consumption of the system and the criticality of the tasks are comprehensively considered, and the tasks with different criticalities are effectively managed, so that each task can be reasonably and effectively unloaded and executed according to the criticality of the task, and the safety of system operation is improved; task execution with high key level can be preferentially selected, interruption of task unloading service caused by mobility of a user is avoided, and service quality of a mobile system is improved.

Claims (1)

1. The dynamic arrival edge unloading method for the mixed critical-level task is characterized by comprising the following steps of:
s1, acquiring tasks generated by a system, and classifying the tasks in a key level;
s2, respectively calculating system resources required by the local completion of tasks and the completion of the tasks on a server;
s3, establishing a resource scheduling model according to the system resources and the key levels of the tasks required by the local completion of the tasks and the completion of the tasks at the server, and acquiring an optimal allocation mechanism;
s4, obtaining a dynamic arrival edge unloading method of the mixed critical-level task according to an optimal allocation mechanism;
the specific implementation manner of the step S1 is as follows:
s1-1, the system is in the current time slottWith probability
Figure QLYQS_1
Creating a task->
Figure QLYQS_2
Figure QLYQS_3
wherein ,
Figure QLYQS_4
data quantity representing a computing task +.>
Figure QLYQS_5
Representing the number of CPU cycles required per bit of data for the calculation task to calculate;
Figure QLYQS_6
Representing the maximum tolerated time delay of the task;
Figure QLYQS_7
Representing a key level of a task;
Figure QLYQS_8
Representation definition;
s1-2, dividing the key level of the task according to the importance of the task by
Figure QLYQS_9
A representation;
the specific implementation manner of the step S2 is as follows:
s2-1, according to the formula:
Figure QLYQS_10
obtaining the rate of task transmission to server uplink communication
Figure QLYQS_11
; wherein ,
Figure QLYQS_12
Is the bandwidth of the channel, ">
Figure QLYQS_13
Is the channel gain between the terminal device and the base station, < >>
Figure QLYQS_14
Is the power of Gaussian white noise, +.>
Figure QLYQS_15
Is the transmit power allocated by the local device;
s2-2, according to the formula:
Figure QLYQS_16
Figure QLYQS_17
obtaining energy consumption generated in the process of executing tasks on local equipment
Figure QLYQS_18
And the time delay generated by the local calculation of the task +.>
Figure QLYQS_19
; wherein ,
Figure QLYQS_20
Representing the computing power provided by the local device to the task;
Figure QLYQS_21
Is the energy consumption coefficient determined by the CPU architecture;
Figure QLYQS_22
Conditions for successful task execution;
Figure QLYQS_23
Is the job at the presentWaiting time of the ground queuing;
s2-3, according to the formula:
Figure QLYQS_24
Figure QLYQS_25
Figure QLYQS_26
Figure QLYQS_27
Figure QLYQS_28
obtaining time delay required by task unloading to MEC to complete calculation
Figure QLYQS_29
and
Figure QLYQS_35
The energy consumption of the moment of arrival at the MEC server for offloading tasks is +.>
Figure QLYQS_39
; wherein ,
Figure QLYQS_31
Transmission delays created for offloading tasks from the local device to the MEC server;
Figure QLYQS_34
Uploading the task to an MEC server to calculate the energy consumption generated in the process;
Figure QLYQS_38
Energy consumption generated by communication transmission in the process of unloading the task to the MEC;
Figure QLYQS_41
Queuing waiting time for uploading MEC server;
Figure QLYQS_32
A queuing time period for execution on the MEC server;
Figure QLYQS_33
Conditions for successful task execution;
Figure QLYQS_37
Is computing resource->
Figure QLYQS_40
The corresponding energy consumption coefficient;
Figure QLYQS_30
Representing the task as being computed locally;
Figure QLYQS_36
Representing the task computation on the MEC server;
the specific implementation manner of the step S3 is as follows:
s3-1, according to the formula:
Figure QLYQS_42
Figure QLYQS_43
Figure QLYQS_44
Figure QLYQS_45
Figure QLYQS_46
Figure QLYQS_47
Figure QLYQS_48
obtaining a resource scheduling model; wherein,
Figure QLYQS_57
representing a maximum number of time slots;
Figure QLYQS_51
Is a Boolean function when->
Figure QLYQS_56
True time
Figure QLYQS_52
On the contrary->
Figure QLYQS_55
Figure QLYQS_59
Indicating that the task is completed locally and successfully executed;
Figure QLYQS_63
Indicating that the task is successfully unloaded and executed at the MEC server;
Figure QLYQS_58
Representing energy consumption generated in the task completion process;
Figure QLYQS_62
Is the energy consumption coefficient>
Figure QLYQS_49
Is the energy consumption generated by completing the task;
Figure QLYQS_54
Coefficients determined for the key level;C1 denotes that the whole model is a discrete time model consisting of time slots;C2 and C3 represents a decision, by 0 and 1, whether or not the decision is selected;C4 is the condition that the task is successfully executed, i.e. the time length of the completion of the task is less than the maximum tolerance time delay +.>
Figure QLYQS_60
C5 indicates that the computing power requirement of the MEC server assigned to the task is less than +.>
Figure QLYQS_65
C6 indicates that the transmit power allocated to the task is less than the maximum transmit power + ->
Figure QLYQS_61
FRepresenting the frequency;
Figure QLYQS_64
Representing the latency of the MEC server;
Figure QLYQS_50
Representing the execution time of the MEC server;
Figure QLYQS_53
representing local latency;lodrespectively representing the local execution of the task, the execution of the task at the server and the discarding of the task;
s3-2, according to the formula:
Figure QLYQS_66
obtaining the state of a resource scheduling model
Figure QLYQS_67
; wherein ,
Figure QLYQS_68
Representing the time length of local queuing of tasks;
Figure QLYQS_69
representing the queuing time of uploading the MEC server;
Figure QLYQS_70
Representing a queuing time period executing on the MEC server;
s3-3, according to the formula:
Figure QLYQS_71
actions to get resource scheduling model
Figure QLYQS_72
; wherein ,
Figure QLYQS_73
Representing an execution strategy of the task;
Figure QLYQS_74
Virtual deadline factor, ">
Figure QLYQS_75
The value of (2) is +.>
Figure QLYQS_76
Figure QLYQS_77
The task is uploaded to the computing resources allocated on the MEC server;
Figure QLYQS_78
Is the transmit power;
s3-4, according to the formula:
Figure QLYQS_79
rewarding function for obtaining resource scheduling model
Figure QLYQS_80
; wherein ,
Figure QLYQS_81
Applying different levels of reward coefficients to tasks of different key levels according to +.>
Figure QLYQS_82
Determining a prize value for the prize;successindicating that the task is successfully executed;failureindicating that the task fails to execute;
s3-5, training the resource scheduling model through randomly generating tasks according to the obtained reward function of the resource scheduling model, the action of the resource scheduling model and the state of the resource scheduling model to obtain a trained resource scheduling model;
s3-6, obtaining an optimal allocation mechanism according to the trained resource scheduling model;
the specific implementation mode of the step S3-5 is as follows:
s3-5-1, initializing a playback memory unit D, a value neural network Q and a target neural network in a depth Q network DQN
Figure QLYQS_83
The method comprises the steps of carrying out a first treatment on the surface of the Network for randomly selecting value networksComplex weight->
Figure QLYQS_84
And target network->
Figure QLYQS_85
The network weight is +.>
Figure QLYQS_86
; wherein ,
Figure QLYQS_87
S3-5-2, obtaining the initial state of the environment
Figure QLYQS_88
Generating a random number of 0-1 during the time slot interval, and judging whether the random number is smaller than a preset threshold value or not; if yes, randomly generating task with probability +.>
Figure QLYQS_89
Select random action->
Figure QLYQS_90
The method comprises the steps of carrying out a first treatment on the surface of the Otherwise, let
Figure QLYQS_91
S3-5-3, calculating a reward function
Figure QLYQS_92
The state of the resource scheduling model is converted into s #t+1) and store state transition procedure
Figure QLYQS_93
To the playback memory unit D;
s3-5-4, according to the formula:
Figure QLYQS_94
obtaining a random sample of the original data from the playback memory unit D
Figure QLYQS_96
Training results->
Figure QLYQS_99
; wherein ,
Figure QLYQS_101
Is a new action;krepresenting the sample number;Krepresenting the training times; first, thekStatus of resource scheduling model under sample +.>
Figure QLYQS_97
The method comprises the steps of carrying out a first treatment on the surface of the First, thekAction of resource scheduling model under individual samples +.>
Figure QLYQS_98
The method comprises the steps of carrying out a first treatment on the surface of the First, thekRewarding function of resource scheduling model under individual samples +.>
Figure QLYQS_100
The method comprises the steps of carrying out a first treatment on the surface of the First, thekState of resource scheduling model under +1 samples +.>
Figure QLYQS_102
Figure QLYQS_95
Is a discount factor;
s3-5-5, according to the formula:
Figure QLYQS_103
obtaining a loss functionLThe method comprises the steps of carrying out a first treatment on the surface of the Wherein, the value neural network
Figure QLYQS_104
The weight of (2) is +.>
Figure QLYQS_105
The method comprises the steps of carrying out a first treatment on the surface of the Target neural network->
Figure QLYQS_106
The weight of (2) is +.>
Figure QLYQS_107
S3-5-6, learning rate and loss function are usedLGradient descent update value neural network
Figure QLYQS_108
Weights of (2);
s3-5-7, step length is setCAnd spacing step lengthCWhen the time is over, the parameters of the value neural network are assigned to the target neural network, and the current time slot state update is completed;
s3-5-8, repeating steps S3-5-1 to S3-5-7 until a given training number and time interval is reached.
CN202310194760.XA 2023-03-03 2023-03-03 Hybrid critical task oriented dynamic arrival edge unloading method Active CN115858048B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310194760.XA CN115858048B (en) 2023-03-03 2023-03-03 Hybrid critical task oriented dynamic arrival edge unloading method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310194760.XA CN115858048B (en) 2023-03-03 2023-03-03 Hybrid critical task oriented dynamic arrival edge unloading method

Publications (2)

Publication Number Publication Date
CN115858048A CN115858048A (en) 2023-03-28
CN115858048B true CN115858048B (en) 2023-04-25

Family

ID=85659868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310194760.XA Active CN115858048B (en) 2023-03-03 2023-03-03 Hybrid critical task oriented dynamic arrival edge unloading method

Country Status (1)

Country Link
CN (1) CN115858048B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117240631A (en) * 2023-11-15 2023-12-15 成都超算中心运营管理有限公司 Method and system for connecting heterogeneous industrial equipment with cloud platform based on message middleware
CN117648182B (en) * 2023-11-28 2024-07-05 南京审计大学 Method for processing safety key calculation task by mobile audit equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105677461A (en) * 2015-12-30 2016-06-15 西安工业大学 Mixed-criticality tasks scheduling method based on criticality
CN110099384A (en) * 2019-04-25 2019-08-06 南京邮电大学 Resource regulating method is unloaded based on side-end collaboration more MEC tasks of multi-user
CN111954236A (en) * 2020-07-27 2020-11-17 河海大学 Hierarchical edge calculation unloading method based on priority
CN113597013A (en) * 2021-08-05 2021-11-02 哈尔滨工业大学 Cooperative task scheduling method in mobile edge computing under user mobile scene
CN113612843A (en) * 2021-08-02 2021-11-05 吉林大学 MEC task unloading and resource allocation method based on deep reinforcement learning
CN113950066A (en) * 2021-09-10 2022-01-18 西安电子科技大学 Single server part calculation unloading method, system and equipment under mobile edge environment

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105677461A (en) * 2015-12-30 2016-06-15 西安工业大学 Mixed-criticality tasks scheduling method based on criticality
CN110099384A (en) * 2019-04-25 2019-08-06 南京邮电大学 Resource regulating method is unloaded based on side-end collaboration more MEC tasks of multi-user
WO2020216135A1 (en) * 2019-04-25 2020-10-29 南京邮电大学 Multi-user multi-mec task unloading resource scheduling method based on edge-end collaboration
CN111954236A (en) * 2020-07-27 2020-11-17 河海大学 Hierarchical edge calculation unloading method based on priority
CN113612843A (en) * 2021-08-02 2021-11-05 吉林大学 MEC task unloading and resource allocation method based on deep reinforcement learning
CN113597013A (en) * 2021-08-05 2021-11-02 哈尔滨工业大学 Cooperative task scheduling method in mobile edge computing under user mobile scene
CN113950066A (en) * 2021-09-10 2022-01-18 西安电子科技大学 Single server part calculation unloading method, system and equipment under mobile edge environment

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
Feifei Zhang,Jidong Ge,Chifong Wong.Online learning offloading framework for heterogeneous mobile edge computing system.《Journal of Parallel and Distributed Computing》.2019,第128卷(第2019期),第167-183页. *
Li Tiansen, Huang Shujuan, Xiao Feng.A Mixed-Criticality Task Scheduling Method Based on Comprehensive Impact Factor.《Computers and Electrical Engineering》.2022,第105卷(第2023期),第1-11页. *
Qi Wang ; Jing Shen ; Yujing Zhao ; Gongming Li ; Jinglong Zhao.Offloading and Delay Optimization Strategies for Power Services in Smart Grid for 5G Edge Computing.《2022 IEEE 6th Information Technology and Mechatronics Engineering Conference (ITOEC)》.2022,第1423-1427页. *
Xianfu Chen, Honggang Zhang, Celimuge Wu.Performance Optimization in Mobile-Edge Computing via Deep Reinforcement Learning.《2018 IEEE 88th Vehicular Technology Conference (VTC-Fall)》.2018,第1-7页. *
卢海峰 ; 顾春华 ; 罗飞 ; 丁炜超 ; 杨婷 ; 郑帅 ; .基于深度强化学习的移动边缘计算任务卸载研究.计算机研究与发展.2020,(第07期),第195-210页. *
周驰岷 ; 郭兵 ; 沈艳 ; 邓立国 ; .基于排队系统的最佳拥塞控制比例研究.现代电子技术.2016,第39卷(第12期),第14-17、21页. *
林峻良.移动边缘计算系统联合任务卸载及资源分配算法研究.《中国优秀硕士学位论文全文数据库 (信息科技辑)》.2020,(第2期),I136-869. *
董思岐, 李海龙, 屈毓锛.面向优先级用户的移动边缘计算任务调度策略.《计算机应用研究》.2019,第37卷(第9期),第2701-2705页. *
邓添,沈艳,史奎锐.基于遗传算法的移动边缘计算混合关键任务卸载.《信息与电脑(理论版)》.2021,第33卷(第11期),第26-29页. *

Also Published As

Publication number Publication date
CN115858048A (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN115858048B (en) Hybrid critical task oriented dynamic arrival edge unloading method
CN113612843B (en) MEC task unloading and resource allocation method based on deep reinforcement learning
CN113950066B (en) Single server part calculation unloading method, system and equipment under mobile edge environment
CN113242568B (en) Task unloading and resource allocation method in uncertain network environment
CN113032120B (en) Industrial field big data task cooperative scheduling method based on edge calculation
CN110971706A (en) Approximate optimization and reinforcement learning-based task unloading method in MEC
CN107708152B (en) Task unloading method of heterogeneous cellular network
Yuan et al. Online dispatching and fair scheduling of edge computing tasks: A learning-based approach
CN112799823B (en) Online dispatching and scheduling method and system for edge computing tasks
CN113626104B (en) Multi-objective optimization unloading strategy based on deep reinforcement learning under edge cloud architecture
CN114567895A (en) Method for realizing intelligent cooperation strategy of MEC server cluster
CN114172558B (en) Task unloading method based on edge calculation and unmanned aerial vehicle cluster cooperation in vehicle network
CN114706631B (en) Unloading decision method and system in mobile edge calculation based on deep Q learning
CN116886703A (en) Cloud edge end cooperative computing unloading method based on priority and reinforcement learning
CN107820278B (en) Task unloading method for cellular network delay and cost balance
CN116016519A (en) QoE-oriented edge computing resource allocation method
CN117407160A (en) Mixed deployment method for online task and offline task in edge computing scene
CN115408072A (en) Rapid adaptation model construction method based on deep reinforcement learning and related device
CN117749796A (en) Cloud edge computing power network system calculation unloading method and system
CN115022893B (en) Resource allocation method for minimizing total computation time in multi-task edge computing system
CN114615705B (en) Single-user resource allocation strategy method based on 5G network
CN115022188B (en) Container placement method and system in electric power edge cloud computing network
CN116204319A (en) Yun Bianduan collaborative unloading method and system based on SAC algorithm and task dependency relationship
CN117891532B (en) Terminal energy efficiency optimization unloading method based on attention multi-index sorting
CN118055160A (en) System and method for distributing tasks of edge computing server

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant