CN115858048B - Hybrid critical task oriented dynamic arrival edge unloading method - Google Patents
Abstract
The invention discloses a dynamic-arrival edge offloading method for mixed-criticality tasks, relating to the field of mobile edge computing and comprising the following steps: acquiring the tasks generated by the system and classifying them by criticality level; separately calculating the system resources required to complete a task locally and to complete it on the server; establishing a resource scheduling model according to those system resources and the criticality levels of the tasks, and obtaining an optimal allocation mechanism; and obtaining the offloading and task-scheduling scheme according to the optimal allocation mechanism. The method increases the safety of system operation, reduces the system's execution risk, and effectively avoids damaging accidents; it comprehensively considers task delay, system energy consumption and task criticality, and effectively manages tasks of different criticality levels. Tasks with a high criticality level can be executed preferentially, interruption of the task offloading service caused by user mobility is avoided, and the quality of service of the mobile system is improved.
Description
Technical Field
The invention relates to the field of mobile edge computing, and in particular to a dynamic-arrival edge offloading method for mixed-criticality tasks.
Background
Owing to the short-distance, ultra-low-latency and high-bandwidth characteristics of mobile edge computing, its research has received high attention in recent years; in particular, for task offloading, different solutions have been proposed for different requirements and application scenarios. Current common offloading strategies are divided into three main types according to the performance requirements of computation offloading: minimizing delay, minimizing energy consumption, and maximizing benefit.
Minimum-delay schemes aim, for different services, to improve quality of service by minimizing end-to-end service delay and task completion time, using a distributed task-scheduling strategy to estimate delay for delay-sensitive services. The prior art sets real-time priorities for tasks with the aim of preferentially reducing the execution time of computing tasks with high real-time requirements, and decides according to the priority level whether a task is executed at the edge or in the cloud. Meanwhile, to ensure that all tasks can be completed within the required time, the queue length is adjusted according to task arrivals, avoiding the situation where the queue grows too long and tasks are trapped in long waits. A reward function is designed and an online learning method based on a deep-reinforcement-learning mechanism is proposed; this method effectively reduces the average delay in the task queue. Taking mean delay and random delay jitter into account, an improved heterogeneous earliest-completion-time strategy has been proposed that uses kernel density estimation to solve the problem of minimizing the maximum tolerated delay. An edge-server deployment scheme using Mixed-Integer Linear Programming (MILP) has been proposed that can optimize both the workload of the edge server and the response delay of the user. Minimum-delay schemes can reduce task execution time and delay, but excessively fast energy drain at the mobile terminal can leave the corresponding offloading strategy unusable.
Minimum-energy schemes include an energy-efficient computation-offloading algorithm in which multiple mobile devices perform full offloading. Considering the minimization of the weighted sum of energy consumption and delay, and assuming the server's computing capacity is a fixed constant, wireless-resource allocation is taken into account for the purpose of energy saving; tasks are classified according to their delay, wireless-resource requirements and energy-consumption weights, and priorities are assigned to tasks to complete offloading. The minimum-energy computation-offloading strategy is an algorithm that seeks to minimize energy consumption while meeting the delay constraints of the mobile terminal. However, in the actual offloading process it is not necessarily desirable to minimize delay or energy consumption alone.
Maximum-benefit work proposes a game-theoretic computation-offloading strategy for multiple mobile terminals performing full offloading. The strategy sets trade-off parameters as the indexes of computation offloading and designs a distributed computation-offloading algorithm that can reach a Nash equilibrium, balancing device energy consumption against computation delay and thereby maximizing user benefit. Other maximum-benefit work considers the energy consumption of the mobile terminal and the deadline of a computing task, and proposes two approximation algorithms, for the single-user and multi-user cases respectively, to minimize energy consumption and delay in a deadline-sensitive MEC system. The literature also proposes a new energy-efficient deep-learning-based offloading scheme that trains an intelligent decision algorithm; the algorithm selects an optimal set of application components based on the user's remaining energy, the energy consumption of the application components, network conditions, computational load, data-transfer volume and communication delay. The prior art further includes using deep reinforcement learning to provide fine-grained task-offloading decisions, thereby reducing the delay, cost, energy consumption and network utilization of an MEC platform; and, based on the data volume of the mobile user's computing tasks and the performance characteristics of the edge computing nodes, combined with artificial-intelligence technology, a computation-offloading and task-migration algorithm based on task prediction to obtain the maximum benefit.
The maximum-benefit computation-offloading strategy essentially analyzes the influence of the two indexes, delay and energy consumption, on the total cost of computation offloading, in order to find a balance point at which the delay or energy limits better fit the actual scenario, thereby minimizing total cost, i.e. maximizing benefit. However, although the prior art considers the priority of executed tasks, it treats all tasks as having the same criticality and does not consider the influence on the system of executing tasks with different criticality levels; as a result, when the system's safety level switches, high-criticality tasks may not be executed in time and may miss their execution deadlines. On the other hand, user mobility causes these algorithms to neglect the interruption of the task offloading service during the offloading process, leading to offloading failures and reduced quality of service of the mobile system.
Disclosure of Invention
Aiming at the above defects in the prior art, the dynamic-arrival edge offloading method for mixed-criticality tasks solves the problems that the prior art neither considers the influence on the system of executing tasks of different criticality levels nor prevents the task offloading service from being interrupted by user mobility.
In order to achieve the aim of the invention, the invention adopts the following technical scheme. A dynamic-arrival edge offloading method for mixed-criticality tasks comprises the following steps:
S1, acquiring the tasks generated by the system and classifying them by criticality level;
S2, separately calculating the system resources required to complete a task locally and to complete it on the server;
S3, establishing a resource scheduling model according to those system resources and the criticality levels of the tasks, and obtaining an optimal allocation mechanism;
S4, obtaining the dynamic-arrival edge offloading scheme for mixed-criticality tasks according to the optimal allocation mechanism.
Further, the specific implementation manner of step S1 is as follows:
S1-1, defining each task generated by the system as the tuple
Λ ≜ (d, c, t_max, ζ)
wherein d represents the data quantity of the computing task; c represents the number of CPU cycles required to compute each bit of the task's data; t_max represents the maximum tolerated time delay of the task; ζ represents the criticality level of the task; ≜ represents definition;
S1-2, dividing the criticality level of the task according to the importance of the task, represented by ζ.
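The task tuple of step S1 can be sketched as a small data structure. This is an illustrative sketch: the field names `data_bits`, `cycles_per_bit`, `max_delay` and `criticality` are hypothetical stand-ins for the quantities d, c, t_max and ζ defined above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    """A computing task as the tuple (d, c, t_max, zeta)."""
    data_bits: float        # d: data quantity of the task (bits)
    cycles_per_bit: float   # c: CPU cycles needed to compute each bit
    max_delay: float        # t_max: maximum tolerated delay (s)
    criticality: int        # zeta: criticality level (higher = more critical)

    @property
    def total_cycles(self) -> float:
        # Total CPU work of the task: c * d
        return self.cycles_per_bit * self.data_bits

# Example: a high-criticality task of 1 Mbit needing 500 cycles/bit
task = Task(data_bits=1e6, cycles_per_bit=500, max_delay=0.5, criticality=2)
```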
Further, the specific implementation manner of step S2 is as follows:
S2-1, according to the formula:
r = B·log₂(1 + p·h/σ²)
obtaining the rate r at which the task is transmitted to the server over the uplink; wherein B is the bandwidth of the channel, h is the channel gain between the terminal device and the base station, σ² is the power of the Gaussian white noise, and p is the transmit power allocated by the local device;
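Assuming the uplink rate takes the standard Shannon-capacity form r = B·log₂(1 + p·h/σ²), consistent with the quantities defined above, it can be computed as follows (parameter values are illustrative):

```python
import math

def uplink_rate(bandwidth_hz: float, tx_power_w: float,
                channel_gain: float, noise_power_w: float) -> float:
    """Uplink rate (bit/s) of the task to the server:
    r = B * log2(1 + p*h / sigma^2)."""
    snr = tx_power_w * channel_gain / noise_power_w
    return bandwidth_hz * math.log2(1.0 + snr)

# 1 MHz channel; p*h/sigma^2 = 3, so the spectral efficiency is log2(4) = 2
r = uplink_rate(bandwidth_hz=1e6, tx_power_w=0.3,
                channel_gain=1e-5, noise_power_w=1e-6)
```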
S2-2, according to the formulas:
E_l = κ·f_l²·c·d
t_l = t_wait,l + (c·d)/f_l
obtaining the energy consumption E_l generated in the process of executing the task on the local device and the time delay t_l generated by computing the task locally; wherein f_l represents the computing power provided by the local device to the task; κ is the energy-consumption coefficient determined by the CPU architecture; t_l ≤ t_max is the condition for successful task execution; t_wait,l is the waiting time of the task in the local queue;
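A sketch of the local-execution cost of step S2-2, assuming the common MEC energy model in which CPU energy is κ·f²·(c·d) and the local delay is the queueing wait plus (c·d)/f (names and numbers are illustrative):

```python
def local_cost(total_cycles: float, cpu_freq_hz: float,
               kappa: float, queue_wait_s: float):
    """Return (energy_J, delay_s) for executing the task locally.

    energy = kappa * f^2 * (c*d)      (CPU-architecture energy model)
    delay  = queue_wait + (c*d) / f   (queueing plus computation time)
    """
    energy = kappa * cpu_freq_hz ** 2 * total_cycles
    delay = queue_wait_s + total_cycles / cpu_freq_hz
    return energy, delay

# 5e8 cycles on a 1 GHz local CPU with kappa = 1e-27
e_l, t_l = local_cost(total_cycles=5e8, cpu_freq_hz=1e9,
                      kappa=1e-27, queue_wait_s=0.1)
# Success condition of step S2-2: the task succeeds only if t_l <= t_max
```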
S2-3, according to the formulas:
t_o = t_tr + t_wait,up + t_wait,mec + (c·d)/f_mec
E_o = p·t_tr + κ_mec·f_mec²·c·d
obtaining the time delay t_o required for offloading the task to the MEC to complete the computation and the energy consumption E_o from the moment the offloaded task arrives at the MEC server; wherein t_tr = d/r is the transmission delay generated by offloading the task from the local device to the MEC server; κ_mec·f_mec²·c·d is the energy consumption generated in the process of uploading the task to the MEC server and computing it; p·t_tr is the energy consumption generated by the communication transmission in the process of offloading the task to the MEC; t_wait,up is the queuing time for uploading to the MEC server; t_wait,mec is the queuing time for execution on the MEC server; t_o ≤ t_max is the condition for successful task execution; κ_mec is the energy-consumption coefficient corresponding to the computing resource f_mec; x_l = 1 represents that the task is computed locally; x_o = 1 represents that the task is computed on the MEC server.
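The offloading branch of step S2-3 can be sketched the same way: transmission delay d/r, the two queueing waits, and the MEC computation time (c·d)/f_mec, with transmission energy p·t_tr plus MEC computation energy. The exact energy split is an assumption consistent with the description above; all numbers are illustrative.

```python
def offload_cost(data_bits, total_cycles, rate_bps, tx_power_w,
                 mec_freq_hz, kappa_mec, wait_up_s, wait_mec_s):
    """Return (energy_J, delay_s) for offloading the task to the MEC server."""
    t_tr = data_bits / rate_bps                          # transmission delay d / r
    t_exec = total_cycles / mec_freq_hz                  # MEC computation time
    delay = wait_up_s + t_tr + wait_mec_s + t_exec
    e_tr = tx_power_w * t_tr                             # communication energy p * t_tr
    e_mec = kappa_mec * mec_freq_hz ** 2 * total_cycles  # MEC computation energy
    return e_tr + e_mec, delay

e_o, t_o = offload_cost(data_bits=1e6, total_cycles=5e8, rate_bps=2e6,
                        tx_power_w=0.3, mec_freq_hz=5e9, kappa_mec=1e-28,
                        wait_up_s=0.05, wait_mec_s=0.02)
# As in step S2-2, the offloaded task succeeds only if t_o <= t_max
```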
Further, the specific implementation manner of step S3 is as follows:
S3-1, according to the formula:
max Σ_{t=1}^{T} [ ω_ζ·( I(suc_l) + I(suc_o) ) − β·E(t) ]
s.t. C1: t ∈ {1, 2, …, T}
C2: x_l(t) ∈ {0, 1}, x_o(t) ∈ {0, 1}
C3: x_l(t) + x_o(t) ≤ 1
C4: t_l ≤ t_max or t_o ≤ t_max
C5: f_mec ≤ F
C6: p ≤ p_max
obtaining the resource scheduling model; wherein T represents the maximum number of time slots; I(·) is a Boolean function, equal to 1 when its argument is true and 0 otherwise; suc_l indicates that the task is completed locally and successfully executed; suc_o indicates that the task is successfully offloaded and executed at the MEC server; E(t) represents the energy consumption generated in the process of completing the task; β is the energy-consumption coefficient, and β·E(t) is the energy cost of completing the task; ω_ζ is the coefficient determined by the criticality level; C1 denotes that the whole model is a discrete-time model consisting of time slots; C2 and C3 represent, by 0 and 1, whether a decision type is selected; C4 is the condition for successful execution of the task, i.e. the completion time of the task is less than the maximum tolerated delay t_max; C5 indicates that the computing power of the MEC server assigned to the task is less than F; C6 indicates that the transmit power allocated to the task is smaller than the maximum transmit power p_max; F represents the frequency of the MEC server; t_wait,up represents the upload waiting time of the MEC server; t_wait,mec represents the execution waiting time on the MEC server; t_wait,l represents the local waiting time; l, o, d respectively represent executing the task locally, executing the task at the server, and discarding the task;
S3-2, according to the formula:
s(t) = ( t_wait,l(t), t_wait,up(t), t_wait,mec(t) )
obtaining the state s(t) of the resource scheduling model; wherein t_wait,l(t) represents the current local queuing time of tasks; t_wait,up(t) represents the current queuing time for uploading to the MEC server; t_wait,mec(t) represents the current queuing time for execution on the MEC server;
S3-3, according to the formula:
a(t) = ( x(t), η(t), f(t), p(t) )
obtaining the action a(t) of the resource scheduling model; wherein x(t) ∈ {l, o, d} represents the execution strategy of the task; η(t) is the virtual-deadline factor, whose value lies in (0, 1]; f(t) is the computing resource allocated on the MEC server to the uploaded task; p(t) is the transmit power;
S3-4, according to the formula:
r(t) = ρ_ζ if the task succeeds; r(t) = −ρ_ζ if the task fails
obtaining the reward function r(t) of the resource scheduling model; wherein ρ_ζ applies reward coefficients of different magnitudes to tasks of different criticality levels, the reward value being determined according to the criticality level ζ; success indicates that the task is successfully executed; failure indicates that the task fails to execute;
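The criticality-weighted reward of step S3-4 can be sketched as follows. The coefficient table is an invented illustration; the patent only states that tasks of different criticality levels receive reward coefficients of different magnitudes.

```python
# Illustrative reward coefficients rho_zeta, one per criticality level
REWARD_COEFF = {0: 1.0, 1: 2.0, 2: 4.0}

def reward(criticality: int, success: bool) -> float:
    """r(t) = +rho_zeta on success, -rho_zeta on failure."""
    rho = REWARD_COEFF[criticality]
    return rho if success else -rho
```

Weighting the penalty by the same coefficient means that dropping or missing a high-criticality task costs the agent far more than failing a low-criticality one, which is what drives the learned policy to execute high-criticality tasks preferentially.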
s3-5, training the resource scheduling model through randomly generating tasks according to the obtained reward function of the resource scheduling model, the action of the resource scheduling model and the state of the resource scheduling model to obtain a trained resource scheduling model;
s3-6, obtaining an optimal allocation mechanism according to the trained resource scheduling model.
Further, the specific implementation manner of the step S3-5 is as follows:
S3-5-1, initializing the replay memory unit D, the value neural network Q and the target neural network Q̂ in the deep Q-network (DQN); randomly initializing the network weights θ of the value network Q, and setting the network weights θ⁻ of the target network Q̂ to θ⁻ = θ;
S3-5-2, obtaining the initial state s(1) of the environment; in each time-slot interval a task is randomly generated with probability λ; generating a random number in [0, 1] and judging whether it is smaller than the preset exploration threshold; if yes, selecting a random action a(t); otherwise, letting a(t) = argmax_a Q(s(t), a; θ);
S3-5-3, calculating the reward function r(t), observing the transition of the state of the resource scheduling model to s(t+1), and storing the state-transition tuple (s(t), a(t), r(t), s(t+1)) into the replay memory unit D;
S3-5-4, according to the formula:
y_k = r_k + γ·max_{a′} Q̂(s_{k+1}, a′; θ⁻)
obtaining the training target y_k from random samples (s_k, a_k, r_k, s_{k+1}) drawn from the replay memory unit D; wherein a′ is a new action; k represents the sample number; K represents the number of training samples; s_k is the state of the resource scheduling model in the k-th sample; a_k is the action of the resource scheduling model in the k-th sample; r_k is the reward of the resource scheduling model in the k-th sample; s_{k+1} is the state of the resource scheduling model in the (k+1)-th sample; γ is the discount factor;
S3-5-5, according to the formula:
L = (1/K)·Σ_{k=1}^{K} ( y_k − Q(s_k, a_k; θ) )²
obtaining the loss function L; wherein the weights of the value neural network Q are θ, and the weights of the target neural network Q̂ are θ⁻;
S3-5-6, updating the weights θ of the value neural network Q by gradient descent on the loss function L with the learning rate;
S3-5-7, setting the copy interval C; every C steps, the parameters of the value neural network are assigned to the target neural network, completing the update for the current time slot;
S3-5-8, repeating steps S3-5-2 to S3-5-7 until the given number of training iterations and the time interval are reached.
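The training loop S3-5-1 to S3-5-8 can be sketched in miniature. This is a hedged illustration, not the patent's implementation: a tiny Q-table `theta` stands in for the value neural network, `toy_env_step` is a hypothetical random environment standing in for the MEC simulator that generates tasks with probability λ, and all sizes and hyperparameters are invented for demonstration.

```python
import random
from collections import deque

import numpy as np

N_STATES, N_ACTIONS = 4, 3                  # toy state/action space sizes
GAMMA, ALPHA, EPSILON, C, K = 0.9, 0.01, 0.1, 10, 8

rng = np.random.default_rng(0)
theta = rng.normal(size=(N_STATES, N_ACTIONS))   # value network Q (S3-5-1)
theta_target = theta.copy()                      # target network Q^, theta- = theta
replay = deque(maxlen=1000)                      # playback memory unit D

def q_values(s, w):
    return w[s]                                  # Q(s, .) for a discrete state

def toy_env_step(s, a):
    """Hypothetical environment: random next state and reward."""
    return int(rng.integers(N_STATES)), float(rng.normal())

s = 0
for step in range(1, 201):
    # S3-5-2: epsilon-greedy action selection
    if random.random() < EPSILON:
        a = random.randrange(N_ACTIONS)
    else:
        a = int(np.argmax(q_values(s, theta)))
    # S3-5-3: observe the transition and store it in D
    s_next, r = toy_env_step(s, a)
    replay.append((s, a, r, s_next))
    # S3-5-4/5/6: sample K transitions, form targets y_k, descend the loss L
    if len(replay) >= K:
        for sk, ak, rk, sk1 in random.sample(list(replay), K):
            y = rk + GAMMA * np.max(q_values(sk1, theta_target))
            theta[sk, ak] += ALPHA * (y - theta[sk, ak])
    # S3-5-7: copy value-network weights into the target network every C steps
    if step % C == 0:
        theta_target = theta.copy()
    s = s_next
```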
The beneficial effects of the invention are as follows: the invention establishes a system model based on the task success rate and the total energy consumption, and, to guarantee the task success rate and the execution of high-criticality tasks, proposes a dynamic system-resource adjustment mechanism and a virtual-deadline scheduling mechanism for the post-offloading resource-scheduling problem. Tasks with a high criticality level can be executed preferentially, and interruption of the task offloading service caused by user mobility is avoided, improving the quality of service of the mobile system; the safety of system operation is improved, the system's execution risk is reduced, and damaging accidents are effectively avoided.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of the structure of the present invention;
FIG. 3 is a graph showing the comparison of the success rate of different tasks.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding of the invention by those skilled in the art; however, the invention is not limited to the scope of these embodiments, and all inventions that make use of the inventive concept fall within the spirit and scope of the present invention as defined in the appended claims.
As shown in fig. 1, the dynamic-arrival edge offloading method for mixed-criticality tasks includes the following steps:
S1, acquiring the tasks generated by the system and classifying them by criticality level;
S2, separately calculating the system resources required to complete a task locally and to complete it on the server;
S3, establishing a resource scheduling model according to those system resources and the criticality levels of the tasks, and obtaining an optimal allocation mechanism;
S4, obtaining the dynamic-arrival edge offloading scheme for mixed-criticality tasks according to the optimal allocation mechanism.
As shown in fig. 2, there is a single user device and a single MEC server in the system. The system uses a discrete-time model with time-slot length τ, the current time slot being t ∈ {1, 2, …, T}. At the beginning of each time slot, a computing task is generated with probability λ; the task can be executed either on the local device or offloaded to the MEC server via a wireless channel. After a task arrives in the system, the system must make an offloading decision for it and allocate system resources to it. The resources provided by the system can only execute a single task at a time; if the system resources allocated to a newly released task are occupied, the task can only be completed after those resources are released.
As shown in fig. 3, comparison experiments were performed on different algorithms. When the probability λ of a task arriving at the start of each time slot is small, the task success rate can reach 100%. As λ increases, the success rate of all methods decreases, because the increased task density leads to insufficient system resources and some tasks waste a lot of time in the queue. Because the invention combines dynamic resource adjustment with virtual-deadline scheduling, it effectively keeps the task-execution success rate higher than the other methods at all times; even when λ increases to 1, the success rate is 33% higher than that of DQN-FCFS. DQN-FCFS denotes a first-come-first-served offloading strategy based on DQN; All-Local denotes the all-local policy; Random denotes the random offloading policy; Greedy denotes the greedy algorithm; All-Edge denotes the full-offloading policy; Edge-First denotes the offload-priority policy; Local-First denotes the local-priority policy.
In one embodiment of the present invention, when the system performs scheduling in first-come-first-served (FCFS) mode, the queuing times are, respectively: the waiting time of the task on the local device, i.e. the time from the task's arrival until the local processor has finished all earlier-queued tasks; the waiting time of the task's upload on the channel, i.e. the time until the channel has finished transmitting all earlier-queued uploads; and the computation waiting time of the task on the MEC server, i.e. the time until the MEC server has finished all earlier-queued computations.
Transmit-power adjustment mechanism: the transmit power p of the user device can be dynamically adjusted. When a task's criticality level is low and its data volume is small, a smaller transmit power is allocated to it to reduce the system's energy consumption; when a task's importance is higher and its data volume is large, a larger transmit power is allocated to increase the communication rate and ensure the task can be completed in time.
Computing-resource allocation mechanism of the MEC server: for tasks with a high criticality level or a large data volume, the MEC system allocates more MEC computing power to reduce the computation time, thereby avoiding the problem that a high-criticality task times out, or that a large-data-volume task occupies computing resources for too long and affects subsequent tasks.
Virtual deadline mechanism: the virtual deadline is calculated from the current system resource utilization, and a task with a higher mixed-criticality level receives an earlier virtual deadline, ensuring that higher-criticality tasks are executed preferentially. A virtual deadline factor is applied to each task: the virtual deadline factor of a low-criticality task is always 1, while for a high-criticality task the system computes the factor from the current system resource utilization.
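A sketch of the virtual deadline mechanism follows. The patent only states that the high-criticality factor is derived from current resource utilization; the linear mapping `1 - utilization` (clamped) used here is an illustrative assumption:

```python
def virtual_deadline(arrival_time, tolerated_delay, criticality, utilization):
    """Virtual deadline = arrival time + factor * tolerated delay.

    For a low-criticality task the factor is always 1 (real deadline).
    For a high-criticality task the factor shrinks as system utilization
    grows, pulling the virtual deadline earlier so the scheduler serves
    the task first when resources are scarce.
    """
    if criticality == "low":
        factor = 1.0
    else:
        factor = max(0.1, 1.0 - utilization)  # clamp: illustrative choice
    return arrival_time + factor * tolerated_delay
```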
In order to verify the influence of the proposed algorithm on the success rates of tasks of different criticality levels, the following comparison tests are set up, comparing the proposed algorithm with the following heuristic strategies:
1) All local (All): all tasks are computed locally. A task is added to the local computing queue in FCFS order provided it can meet its deadline; if the deadline cannot be met, the task is discarded on arrival.
2) Full offload (AllO): all tasks are offloaded to the edge server for computation. A task is added to the offloading queue in FCFS order provided it can meet its deadline; if the deadline cannot be met, the task is discarded on arrival.
3) Random offload (Random): when a task arrives at the system, if it can be executed both locally and at the MEC end, whether to compute locally or to offload to the MEC is chosen at random. A low-criticality task may also be discarded.
4) Greedy algorithm (Greedy): greedily select, between local computation and offloaded computation, the scheme with the minimum task completion cost.
5) Local priority (FirstL): local computation is selected preferentially; if local computation cannot meet the task's delay requirement, the task is offloaded to the MEC for computation.
6) Offload priority (FirstO): the task is preferentially offloaded to the MEC for computation; if the offloaded computation cannot meet the task's requirements, local computation is attempted.
7) DQN-based first-come-first-served offloading strategy (DQN-FCFS): offloading decisions and resource allocation are performed by the DQN, but the queues use a first-come-first-served scheduling algorithm.
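The greedy baseline 4), for instance, reduces to choosing the feasible option with the smaller completion cost. A minimal sketch (function name and cost inputs are illustrative; how costs are computed is defined elsewhere in the description):

```python
def greedy_choice(local_cost, offload_cost, local_ok, offload_ok):
    """Greedy baseline: pick the feasible option with the lowest
    completion cost; a task with no feasible option is dropped.

    `local_ok` / `offload_ok` flag whether the deadline can be met
    by local execution / offloaded execution respectively.
    """
    options = []
    if local_ok:
        options.append(("local", local_cost))
    if offload_ok:
        options.append(("offload", offload_cost))
    if not options:
        return "drop"
    return min(options, key=lambda kv: kv[1])[0]
```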
For strategies 1) to 6), the average task success rate over 1000 experiments is taken; strategy 7) and the invention are trained for a period of time to obtain models, with the learning rate set to 0.001, the reward coefficient of high-criticality tasks set to 4, and the energy consumption coefficient set to 0.001. 1000 test runs are then performed and the success rates averaged, yielding the success rates of all tasks: the success rates of the low-criticality tasks are shown in Table 1, and the success rates of the high-criticality tasks are shown in Table 2.
TABLE 1
TABLE 2
In order to increase the safety of system operation, reduce the risk of system execution, and effectively avoid damaging accidents, the MEC task offloading approach herein comprehensively considers task delay, system energy consumption, and task criticality, and manages tasks of different criticality effectively, so that each task can be offloaded and executed reasonably according to its criticality, improving the safety of system operation. Tasks of high criticality can be executed preferentially, interruption of the task offloading service caused by user mobility is avoided, and the quality of service of the mobile system is improved.
Claims (1)
1. A dynamic arrival edge offloading method oriented to mixed-criticality tasks, characterized by comprising the following steps:
S1, acquiring the tasks generated by the system and classifying the tasks by criticality level;
S2, respectively calculating the system resources required to complete a task locally and to complete it on the server;
S3, establishing a resource scheduling model according to the system resources required to complete a task locally and on the server and to the criticality levels of the tasks, and acquiring an optimal allocation mechanism;
S4, obtaining the dynamic arrival edge offloading method for mixed-criticality tasks according to the optimal allocation mechanism;
the specific implementation of step S1 is as follows:
S1-1, defining each task by its data quantity, the number of CPU cycles required to compute each bit of the task's data, the maximum tolerated time delay of the task, and the criticality level of the task;
S1-2, dividing the criticality level of each task according to the importance of the task;
the specific implementation of step S2 is as follows:
S2-1, according to the formula:
r = B·log2(1 + p·h/σ²)
obtaining the rate r of the uplink communication for transmitting the task to the server; wherein B is the bandwidth of the channel, h is the channel gain between the terminal device and the base station, σ² is the power of the Gaussian white noise, and p is the transmit power allocated by the local device;
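Assuming the standard Shannon-capacity model for the uplink rate (which matches the variables listed in S2-1: bandwidth, channel gain, noise power, transmit power), a minimal sketch:

```python
import math

def uplink_rate(bandwidth_hz, tx_power_w, channel_gain, noise_power_w):
    """Shannon uplink rate r = B * log2(1 + p*h / sigma^2), in bit/s."""
    return bandwidth_hz * math.log2(1.0 + tx_power_w * channel_gain
                                    / noise_power_w)

# Example: 1 MHz channel with SNR = 3 gives log2(4) = 2 bit/s/Hz,
# i.e. 2e6 bit/s.
r = uplink_rate(1e6, tx_power_w=3.0, channel_gain=1.0, noise_power_w=1.0)
```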
s2-2, according to the formulas:
t_l = w_l + d·c/f_l ,  E_l = κ·f_l²·d·c
obtaining the time delay t_l generated by computing the task locally and the energy consumption E_l generated in the process of executing the task on the local device; wherein d is the data quantity of the task, c is the number of CPU cycles required per bit, f_l denotes the computing power provided by the local device to the task, κ is the energy consumption coefficient determined by the CPU architecture, and w_l is the waiting time of the task in the local queue; the task is executed successfully on the condition that t_l does not exceed the maximum tolerated time delay of the task;
s2-3, according to the formulas:
t_o = w_up + d/r + w_mec + d·c/f_mec ,  E_o = p·d/r + κ_mec·f_mec²·d·c
obtaining the time delay t_o required for the task to be offloaded to the MEC and complete its computation, and the energy consumption E_o of the offloaded task; wherein d/r is the transmission delay created by offloading the task from the local device to the MEC server at uplink rate r, p·d/r is the energy consumption generated by the communication transmission in the process of offloading the task to the MEC, κ_mec·f_mec²·d·c is the energy consumption generated by computing the uploaded task on the MEC server, w_up is the queuing waiting time for uploading to the MEC server, w_mec is the queuing waiting time for execution on the MEC server, f_mec is the computing resource allocated by the MEC server, and κ_mec is the corresponding energy consumption coefficient; the task is executed successfully on the condition that t_o does not exceed the maximum tolerated time delay; decision indicators mark whether the task is computed locally or on the MEC server;
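The local and offloaded cost computations of S2-2 and S2-3 can be sketched as follows, assuming the usual MEC cost models (queueing wait plus cycles over frequency for delay, the CMOS model κ·f²·cycles for compute energy, and p·d/r for upload energy); all names are illustrative:

```python
def local_cost(data_bits, cycles_per_bit, f_local, kappa, wait_local):
    """Local execution: delay = local queueing wait + cycles/frequency;
    energy = kappa * f^2 * cycles, kappa set by the CPU architecture."""
    cycles = data_bits * cycles_per_bit
    delay = wait_local + cycles / f_local
    energy = kappa * f_local**2 * cycles
    return delay, energy

def offload_cost(data_bits, cycles_per_bit, rate, tx_power, f_mec,
                 kappa_mec, wait_up, wait_mec):
    """Offloaded execution: upload queue wait + transmission delay d/r
    + MEC queue wait + compute time; energy = upload energy p*d/r
    + MEC compute energy with the server's own coefficient."""
    t_tx = data_bits / rate
    cycles = data_bits * cycles_per_bit
    delay = wait_up + t_tx + wait_mec + cycles / f_mec
    energy = tx_power * t_tx + kappa_mec * f_mec**2 * cycles
    return delay, energy
```

A scheduler would compare the two delay/energy pairs against the task's maximum tolerated delay before deciding where to execute.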
the specific implementation of step S3 is as follows:
S3-1, establishing the resource scheduling model as an optimization problem that maximizes, over all T time slots, the criticality-weighted number of successfully executed tasks minus the weighted energy consumption, subject to constraints C1 to C6; wherein T represents the maximum number of time slots; 1{·} is a Boolean function that equals 1 when its argument is true and 0 otherwise; indicator variables mark that a task is completed locally and successfully executed, or successfully offloaded and executed at the MEC server; the energy term is the energy consumption generated in the task completion process, weighted by an energy consumption coefficient, and the success term is weighted by a coefficient determined by the criticality level; C1 denotes that the whole model is a discrete-time model consisting of time slots; C2 and C3 denote that each decision is represented by 0 or 1, indicating whether the corresponding option is selected; C4 is the condition for successful execution of a task, i.e. the task completion time is less than its maximum tolerated time delay; C5 denotes that the computing power of the MEC server assigned to a task is less than the server's maximum computing capacity; C6 denotes that the transmit power allocated to a task is less than the maximum transmit power; the total delay of an offloaded task comprises the waiting time and execution time on the MEC server, and that of a local task comprises the local waiting time; the decisions l, o, and d respectively denote executing the task locally, executing the task at the server, and discarding the task;
s3-2, defining the state of the resource scheduling model, which comprises the time the task has queued locally, the queuing time for uploading to the MEC server, and the queuing time for execution on the MEC server;
s3-3, defining the action of the resource scheduling model, which comprises the execution strategy of the task (local execution, offloading, or discarding), the virtual deadline factor, the computing resources allocated to the task once uploaded to the MEC server, and the transmit power;
s3-4, defining the reward function of the resource scheduling model, which applies reward coefficients of different magnitudes to tasks of different criticality levels and determines the reward value accordingly; success indicates that the task is executed successfully, and failure indicates that the task fails to execute;
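A sketch of such a criticality-weighted reward. The description elsewhere sets the high-criticality reward coefficient to 4 and the energy coefficient to 0.001; the low-criticality coefficient of 1 and the shape of the failure penalty are illustrative assumptions:

```python
def reward(success, criticality, energy, beta_high=4.0, beta_low=1.0,
           energy_coeff=0.001):
    """Reward for one task outcome: the criticality-level coefficient
    on success (its negative on failure), minus a weighted energy
    penalty, so the agent favors high-criticality tasks while keeping
    energy consumption low."""
    coeff = beta_high if criticality == "high" else beta_low
    base = coeff if success else -coeff
    return base - energy_coeff * energy
```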
s3-5, training the resource scheduling model on randomly generated tasks according to the obtained reward function, action, and state of the resource scheduling model, to obtain a trained resource scheduling model;
s3-6, obtaining an optimal allocation mechanism according to the trained resource scheduling model;
the specific implementation of step S3-5 is as follows:
S3-5-1, initializing the playback memory unit D, the value neural network Q, and the target neural network Q̂ in the deep Q-network DQN; randomly selecting the value network weights θ and setting the target network weights θ̂ = θ;
S3-5-2, obtaining the initial state s(1) of the environment, with tasks generated randomly at a given arrival probability in each time slot; generating a random number in [0,1] in each time-slot interval and judging whether it is smaller than a preset threshold; if yes, selecting a random action a(t); otherwise, selecting the action a(t) that maximizes the value network output Q(s(t), a; θ);
S3-5-3, calculating the reward r(t) from the reward function, letting the state of the resource scheduling model transition to s(t+1), and storing the state transition tuple (s(t), a(t), r(t), s(t+1)) into the playback memory unit D;
s3-5-4, according to the formula:
y_k = r_k + γ · max_{a'} Q̂(s_{k+1}, a'; θ̂)
obtaining the training target y_k for each random sample (s_k, a_k, r_k, s_{k+1}) drawn from the playback memory unit D; wherein a' is the new action; k denotes the sample number and K denotes the number of samples per training step; s_k, a_k, and r_k are the state, action, and reward of the resource scheduling model under the k-th sample; s_{k+1} is the successor state stored in the k-th sample; γ is the discount factor;
s3-5-5, according to the formula:
L = (1/K) · Σ_{k=1..K} ( y_k − Q(s_k, a_k; θ) )²
obtaining the loss function L; wherein the value neural network Q has weights θ and the target neural network Q̂ has weights θ̂;
S3-5-6, updating the weights θ of the value neural network by gradient descent using the learning rate and the loss function L;
S3-5-7, setting a step length C; every C steps, assigning the parameters of the value neural network to the target neural network, completing the update for the current time slot;
S3-5-8, repeating steps S3-5-2 to S3-5-7 until the given number of training iterations and time interval are reached.
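The training loop of steps S3-5-1 to S3-5-8 can be sketched as follows. To stay self-contained, a tabular value function stands in for the value and target neural networks (a real implementation would substitute networks with weights θ and θ̂); environment details, state encoding, and hyper-parameters are illustrative:

```python
import random

N_ACTIONS = 3                      # e.g. local / offload / drop
GAMMA, LR, EPSILON, SYNC_EVERY = 0.9, 0.1, 0.1, 20

Q = {}                             # value "network" (weights theta)
Q_target = {}                      # target "network" (theta_hat = theta)
replay = []                        # playback memory unit D

def q(table, s, a):
    return table.get((s, a), 0.0)

def choose_action(s):
    """Epsilon-greedy selection (S3-5-2): random action below the
    threshold, otherwise the argmax of the value function."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: q(Q, s, a))

def train_step(s, a, r, s_next, t, batch_size=8):
    """Store one transition, replay a minibatch, sync the target net."""
    replay.append((s, a, r, s_next))                       # S3-5-3
    for sk, ak, rk, sk1 in random.sample(replay,
                                         min(len(replay), batch_size)):
        # target y_k = r_k + gamma * max_a' Q_target(s_{k+1}, a')  (S3-5-4)
        y = rk + GAMMA * max(q(Q_target, sk1, a2)
                             for a2 in range(N_ACTIONS))
        # gradient step on the squared error (y - Q(s_k, a_k))^2 (S3-5-5/6)
        Q[(sk, ak)] = q(Q, sk, ak) + LR * (y - q(Q, sk, ak))
    if t % SYNC_EVERY == 0:                                # S3-5-7
        Q_target.update(Q)
```

Each simulated time slot would call `choose_action` on the current queue state, apply the offloading decision, and feed the resulting reward and next state to `train_step`, repeating until the given training budget is exhausted (S3-5-8).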
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310194760.XA CN115858048B (en) | 2023-03-03 | 2023-03-03 | Hybrid critical task oriented dynamic arrival edge unloading method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115858048A CN115858048A (en) | 2023-03-28 |
CN115858048B true CN115858048B (en) | 2023-04-25 |