CN116760722A - Storage-assisted MEC task offloading system and resource scheduling method - Google Patents
- Publication number
- CN116760722A (application number CN202310703258.7A)
- Authority
- CN
- China
- Prior art keywords
- task
- tasks
- resource
- mec
- offload
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/142—Network analysis or design using statistical or mathematical methods
- H04L41/0896—Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
- H04L41/147—Network analysis or design for predicting network behaviour
- H04L41/40—Arrangements using virtualisation of network functions or resources, e.g. SDN or NFV entities
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L47/00—Traffic control in data switching networks
- H04L47/70—Admission control; Resource allocation
- H04L47/80—Actions related to the user profile or the type of traffic
- H04L47/805—QOS or priority aware
Abstract
The invention relates to a storage-assisted MEC task offloading system and a resource scheduling method. Considering that delay-sensitive (DS) and delay-tolerant (DT) offload tasks usually coexist in practical MEC application scenarios, the system adopts three mechanisms, namely preemption, caching and resource partitioning, to cooperatively schedule offload tasks. In addition, to apply the three mechanisms reasonably to DS and DT tasks, the resource scheduling method uses a machine-learning traffic prediction model to perceive changes in offload-task traffic in advance, searches, with the aid of a machine-learning system performance evaluation model, for an optimal link bandwidth partitioning scheme and MEC server storage allocation scheme that satisfy the system's performance requirements, and pre-partitions bandwidth resources and allocates storage resources accordingly. While satisfying task quality of service, the method reduces the task blocking rate and preemption rate, improves resource utilization, and lowers the cost of system resources.
Description
Technical Field
The invention relates to network resource scheduling algorithms, and in particular to a storage-assisted MEC task offloading system and a resource scheduling method.
Background
The popularity of intelligent terminal devices and the widespread deployment of emerging mobile applications have created massive demand for ultra-low-latency, ultra-high-reliability communication.
In the cloud computing mode, end users often face bandwidth shortages, network congestion and excessive delay, which constrain the development of emerging mobile applications. MEC technology pushes the computing and storage resources of the cloud computing center to the network edge. Complex computing tasks can be offloaded from user equipment to nearby MEC servers, effectively reducing network communication delay and guaranteeing the quality of service of end users.
However, the computing and storage capacity of MEC servers is typically limited. As more and more computation-intensive tasks appear, MEC servers suffer from task overload. Collaborative processing of computing tasks by multiple MEC servers is therefore a major form of current MEC technology applications, which in turn creates a large demand for transferring offload data across MEC servers. How to schedule network resources efficiently to meet the transmission requirements of offload data between MEC servers thus becomes one of the key issues faced by MEC task offloading systems.
Efficient scheduling of network resources requires accurate prediction of task offloading traffic and accurate evaluation of task offloading system performance. Existing traffic prediction and performance evaluation methods, however, lack sufficient accuracy. As a result, MEC operators often can only provision link bandwidth capacity for peak task traffic. This leaves bandwidth idle during off-peak periods and drives up network transmission and system operating costs.
The invention considers the delay-sensitive (DS) and delay-tolerant (DT) offload tasks present in practical MEC application scenarios, introduces the three mechanisms of preemption, caching and resource partitioning into the MEC task offloading system, and cooperatively schedules offload tasks according to their different delay-tolerance characteristics.
In addition, the invention introduces machine-learning models into the resource scheduling method: a GRU traffic prediction model forecasts traffic in advance, an ANN evaluation model estimates system performance, and a scheduling method based on the two models minimizes the network blocking rate and preemption rate while maximizing resource utilization. The method uses the traffic prediction model to perceive changes in offload-task traffic in advance, adaptively searches for an optimal link bandwidth partitioning scheme and MEC server storage allocation scheme that satisfy the system performance requirements, and pre-partitions bandwidth resources and allocates storage resources.
Because the proposed resource scheduling method schedules resources in advance, it avoids large-scale resource waste during traffic valleys and prevents system performance degradation and user quality-of-service reduction during traffic peaks.
Disclosure of Invention
The invention aims to address the insufficient accuracy of the traffic prediction and performance evaluation methods in existing MEC task offloading systems, and provides a storage-assisted MEC task offloading system and a resource scheduling method. The method uses a traffic prediction model to perceive changes in offload-task traffic in advance, searches, with the aid of a system performance evaluation model, for an optimal link bandwidth partitioning scheme and MEC server storage allocation scheme that satisfy the system performance requirements, and pre-partitions bandwidth resources and allocates storage resources.
In order to achieve the above purpose, the technical scheme of the invention is as follows: a storage-assisted MEC task offloading system comprises an SDN controller and edge nodes that communicate with the SDN controller. Edge nodes deployed at a customer site or remote node provide computing power close to the client side; edge nodes deployed at a central office or sink node extend the service coverage of the computing power. When an edge node at a customer site/remote node is overloaded with too many tasks awaiting processing, it can offload tasks to the edge nodes of adjacent central office/sink nodes. Each edge node is provided with an OpenFlow switch, an MEC server and an optical switch, and edge nodes are connected by point-to-point optical fiber links. The SDN controller maintains the system and schedules the storage resources in the MEC servers and the bandwidth resources on the links.
In an embodiment of the invention, the task offloading system adopts the three mechanisms of preemption, caching and resource partitioning to cooperatively schedule DS and DT offload tasks in the MEC system. On the one hand, a DS offload task is allowed to preempt link bandwidth resources currently used by a DT offload task, so that the bandwidth is used preferentially to transmit the DS task's data. On the other hand, when link bandwidth is insufficient, a DT offload task is allowed to temporarily cache its data on the MEC server and continue transmission after link bandwidth becomes idle.
In one embodiment of the present invention, the task offloading system partitions the link bandwidth resource C into three regions c_ds, c_dt and c_s, where c_ds denotes the amount of bandwidth for DS offload task data transfer, c_dt the amount for DT offload task data transfer, and c_s the amount shared by both task types; preemption by DS offload tasks is only allowed to occur within c_s. s denotes the storage resource available on the MEC server for caching DT offload task data.
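The admission logic implied by this partition can be sketched as a small decision function (a minimal sketch; the name admit_ds and the three-way outcome are illustrative, not taken from the patent):

```python
def admit_ds(demand, free_ds, free_s, dt_in_shared):
    """Decide how a DS offload task of bandwidth `demand` is served.

    free_ds      -- idle bandwidth in the DS-only region c_ds
    free_s       -- idle bandwidth in the shared region c_s
    dt_in_shared -- bandwidth in c_s currently held by DT tasks; only
                    this portion is preemptible, since DS preemption
                    is restricted to the shared region c_s
    """
    if demand <= free_ds + free_s:
        return "admitted"
    if demand <= free_ds + free_s + dt_in_shared:
        # preempted DT data is cached in the storage region s
        return "admitted_with_preemption"
    return "blocked"   # counted toward the DS blocking rate B_ds

admit_ds(12, free_ds=10, free_s=1, dt_in_shared=3)
# 11 idle units do not suffice, but 3 preemptible DT units in c_s cover the rest
```

Preempted DT data would then wait in the storage region s until bandwidth frees up, which is why P_dt and B_dt depend on both c_s and s.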
In one embodiment of the present invention, the task offloading system can feed back the corresponding system performance indices B_ds, B_dt, P_dt, U and D, where B_ds denotes the blocking rate of DS offload tasks, B_dt the blocking rate of DT offload tasks, P_dt the preemption rate of DT offload tasks, U the bandwidth resource utilization, and D the average delay required to complete a task.
The invention also provides a resource scheduling method based on the storage-assisted MEC task offloading system, comprising the following steps:
Step S1: design a GRU traffic prediction model based on a sliding time window and the wavelet transform. The input of the GRU traffic prediction model is λ_{t-W}, …, λ_{t-1}, d¹_{t-1}, …, d^L_{t-1}, a^L_{t-1}, where λ_{t-W}, …, λ_{t-1} are W consecutive historical traffic values extracted with a sliding-time-window mechanism, d¹_{t-1}, …, d^L_{t-1} are the L detail components obtained by an L-level wavelet analysis of λ_{t-1}, and a^L_{t-1} is the approximation component obtained after the wavelet decomposition. The output of the model is the predicted traffic value λ̂_t for the next time t. Through offline training on a historical traffic database, a GRU traffic prediction model capable of predicting the traffic value of the next moment in advance is obtained.
Step S2: design a system performance evaluation model based on a logarithmic transformation and an ANN neural network. λ, δ, F, c_ds, c_dt, c_s and s constitute the input of the model, where λ is the total task traffic, δ the proportion of DS offload tasks in the total, F the mean task data volume, c_ds, c_dt and c_s the amounts of bandwidth for DS tasks, DT tasks and shared use respectively, and s the amount of storage used for caching DT offload tasks. Five evaluation models based on the logarithmic transformation and an ANN are constructed, whose outputs are the performance indices B_ds, B_dt, P_dt, U and D respectively, i.e. an ANN-based B_ds evaluation model, B_dt evaluation model, P_dt evaluation model, U evaluation model and D evaluation model. When a performance index value is smaller than 10^-2, a logarithmic transformation log(·) is introduced and training is performed after converting the index from its original numerical domain into the logarithmic domain; when the value is larger than 10^-2, training is performed directly in the original numerical domain. By learning from a system performance index database, five models are obtained that predict the system performance indices from the offload-task traffic and the resource amounts.
Step S3: design a resource scheduling method based on the dual machine-learning models.
Design the optimization objective and constraints: the objective function is designed to reduce B_ds, B_dt and P_dt and to improve U; the upper and lower bounds of the performance indices B_ds, B_dt, P_dt, U and D, the bound on the link bandwidth resource C, and the bound on the MEC server storage resource s serve as the constraint conditions.
In an embodiment of the present invention, the GRU traffic prediction model has three layers: an input layer, a hidden layer and an output layer. The hidden layer is a single GRU layer whose number of neurons is determined by training and parameter tuning; the activation functions of the GRU layer are sigmoid and tanh. The output layer contains one fully connected neuron with a linear activation function.
In an embodiment of the present invention, the system performance evaluation model based on the logarithmic transformation and an ANN likewise has three layers: an input layer, a hidden layer and an output layer. The hidden layer is a single fully connected layer whose number of neurons is determined by training and parameter tuning, with ReLU as the activation function. The output layer contains one fully connected neuron, also with a ReLU activation function.
In one embodiment of the present invention, the resource scheduling method based on the dual machine-learning models in step S3 is implemented as follows:
(1) Initialize the optimization objective value Obj and the resource decision set Optimal used in the method;
(2) Obtain the historical traffic data λ_{t-W}, …, λ_{t-1} from the historical traffic database using a sliding time window;
(3) Decompose the traffic value λ_{t-1} at time t-1 into L detail components d¹_{t-1}, …, d^L_{t-1} and an approximation component a^L_{t-1} using the Mallat algorithm of the wavelet transform;
(4) Feed λ_{t-W}, …, λ_{t-1}, d¹_{t-1}, …, d^L_{t-1}, a^L_{t-1} into the GRU traffic prediction model to obtain the predicted traffic value λ̂_t;
(5) Based on λ̂_t, use the ANN-based B_ds evaluation model to search for every feasible bandwidth amount C′ that satisfies the B_ds constraint;
(6) Search for all {c_ds, c_s} combinations satisfying C′ = c_ds + c_s and store them in the set cdssset;
(7) Use the ANN-based B_dt, P_dt, U and D evaluation models to search for the bandwidth amount c_dt and storage amount s that satisfy the B_dt, P_dt, U and D constraints;
(8) Enumerate all feasible combinations {c_ds, c_dt, c_s, s} that satisfy the B_ds, B_dt, P_dt, U and D performance constraints, and compute the objective function value Obj′ of each combination one by one;
(9) If the current value Obj′ is better than the best objective value Obj found so far, update Obj ← Obj′ and Optimal ← {c_ds, c_dt, c_s, s}; otherwise keep Obj and Optimal;
(10) After all feasible combinations have been searched, return the optimal resource scheduling decision Optimal = {c_ds, c_dt, c_s, s}.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention introduces the three mechanisms of preemption, caching and resource partitioning into the MEC task offloading system, and cooperatively schedules offload tasks according to their different delay-tolerance characteristics.
2. The invention introduces machine-learning models into the resource scheduling method, uses the traffic prediction model to perceive changes in offload-task traffic in advance, adaptively searches for an optimal link bandwidth partitioning scheme and MEC server storage allocation scheme that satisfy the system performance requirements, and pre-partitions bandwidth resources and allocates storage resources.
3. While satisfying task quality of service, the invention reduces the task blocking rate and preemption rate, improves resource utilization, and lowers the cost of system resources.
Drawings
FIG. 1 is a schematic diagram of a task offloading system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a resource scheduling method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the GRU traffic prediction model based on a sliding time window and the wavelet transform in an embodiment of the invention;
FIG. 4 is a schematic diagram of an ANN system performance evaluation model based on logarithmic transformation in an embodiment of the invention;
FIG. 5 is a flowchart of a resource scheduling method according to an embodiment of the present invention;
FIG. 6 is a flow chart of online model update according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is specifically described below with reference to the accompanying drawings.
The embodiment of the invention considers the task offloading system shown in FIG. 1. On the one hand, edge nodes may be deployed at a customer site or at a remote node, providing computing power near the client. On the other hand, edge nodes may be deployed at a central office or sink node, enlarging the service coverage of the computing capacity. When an edge node at a customer site/remote node is overloaded with too many tasks awaiting processing, it may offload tasks to the edge nodes of neighboring central office/sink nodes. The traffic of interest to the invention is this offload-task traffic between edge nodes. Edge nodes are connected by point-to-point optical fiber links, and each edge node has an OpenFlow switch, an MEC server and an optical switch. A centralized software-defined networking (SDN) controller maintains the system and schedules the storage resources in the MEC servers and the bandwidth resources on the links. The resources scheduled by the task offloading system therefore comprise the link bandwidth resources, divided into the three regions c_ds, c_dt and c_s, and the storage resource s on the MEC server used for caching DT tasks.
As shown in the schematic diagram of FIG. 2, the resource scheduling method comprises the following steps:
Step 1: offline training of the GRU traffic prediction model based on the sliding time window and wavelet transform.
Data preprocessing with a sliding time window is performed on the collected historical traffic database. An optimal sliding-window size W is determined for the given traffic data, and W consecutive historical traffic values λ_{t-W}, …, λ_{t-1} are extracted from the traffic database for learning the history of the traffic.
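The window extraction can be sketched as follows (a minimal sketch; the function name is illustrative):

```python
def sliding_windows(series, w):
    """Cut a traffic series into (input window, next value) training pairs.

    Each pair is (lambda_{t-W}..lambda_{t-1}, lambda_t), mirroring the
    sliding time window that feeds the GRU traffic prediction model.
    """
    pairs = []
    for t in range(w, len(series)):
        pairs.append((series[t - w:t], series[t]))
    return pairs

samples = sliding_windows([3, 5, 4, 6, 8, 7], w=3)
# first training pair: window [3, 5, 4] with label 6
```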
An appropriate wavelet function and decomposition level L are selected, and the traffic data λ_{t-1} at time t-1 is decomposed with the Mallat algorithm of the wavelet transform into L detail components d¹_{t-1}, …, d^L_{t-1}, which capture the volatility features of the traffic, and an approximation component a^L_{t-1}, which captures its periodic features.
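A minimal sketch of the L-level Mallat pyramid, using the Haar wavelet for brevity (the patent leaves the wavelet function a design choice; the unnormalized averaging form is an assumption here):

```python
def haar_mallat(signal, levels):
    """L-level Mallat pyramid with an (unnormalized) Haar wavelet.

    Returns (details, approximation): `details` holds the L detail
    component lists (volatility features), `approximation` the final
    coarse component (periodic trend). Assumes len(signal) is a
    multiple of 2**levels.
    """
    details = []
    approx = list(signal)
    for _ in range(levels):
        next_approx, detail = [], []
        for a, b in zip(approx[0::2], approx[1::2]):
            next_approx.append((a + b) / 2)  # low-pass: local average
            detail.append((a - b) / 2)       # high-pass: local fluctuation
        details.append(detail)
        approx = next_approx
    return details, approx

dets, app = haar_mallat([4, 2, 6, 8], levels=2)
# level-1 details [1.0, -1.0], level-2 detail [-2.0], approximation [5.0]
```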
The sliding-window and wavelet features λ_{t-W}, …, λ_{t-1}, d¹_{t-1}, …, d^L_{t-1}, a^L_{t-1} are combined as the input features of the GRU traffic prediction model, and the true traffic value λ_t at the next time t serves as the output label of the GRU model. Offline training and parameter tuning of the GRU model yield the optimal number of network layers, number of neurons, activation functions and loss function, producing a model that can predict the traffic value of the next moment in advance. The structure of the GRU traffic prediction model is shown in FIG. 3.
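The sigmoid/tanh gate structure of the hidden GRU layer can be illustrated with a scalar GRU cell; the weights below are arbitrary placeholders, not trained values:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    """One GRU cell update for scalar input x and hidden state h.

    w is a dict of scalar weights a trained model would learn. Uses the
    standard gate equations: update gate z, reset gate r and candidate
    state h_tilde, with the sigmoid and tanh activations the patent
    names for the hidden GRU layer.
    """
    z = sigmoid(w["wz"] * x + w["uz"] * h)                 # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h)                 # reset gate
    h_tilde = math.tanh(w["wh"] * x + w["uh"] * (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde                       # new hidden state

w = {"wz": 0.5, "uz": 0.1, "wr": 0.4, "ur": 0.2, "wh": 0.9, "uh": 0.3}
h = 0.0
for x in [0.2, 0.5, 0.3]:   # a short traffic window, scaled to [0, 1]
    h = gru_step(x, h, w)
# h is the hidden representation the linear output neuron would map to the prediction
```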
Step 2: offline training of the system performance evaluation model based on the logarithmic transformation and ANN neural network.
Training samples are obtained from the collected system performance index database; the traffic characteristics λ, δ, F and the resource amounts c_ds, c_dt, c_s, s form the input of the ANN performance evaluation model. When a performance index value is smaller than 10^-2, a log(·) transformation is introduced: converting the index from its original numerical domain into the logarithmic domain before training increases the ANN model's sensitivity to very small values and ensures good evaluation accuracy. When the index value is larger than 10^-2, training directly in the original numerical domain already achieves good accuracy. Offline training and parameter tuning yield the optimal network structure parameters of the ANN model. Five ANN performance evaluation models are obtained, whose outputs are B_ds, B_dt, P_dt, U and D respectively; their function is to predict the system performance indices from the offload-task traffic and the resource amounts. The model structure is shown in FIG. 4.
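The 10^-2 switching rule can be sketched directly (THRESHOLD and the helper names are illustrative):

```python
import math

THRESHOLD = 1e-2   # below this, indices such as blocking rates are log-transformed

def to_training_domain(value):
    """Map a performance index into the domain the ANN is trained in.

    Returns (transformed_value, used_log) so a prediction can later be
    mapped back to the original numerical domain.
    """
    if value < THRESHOLD:
        return math.log10(value), True    # logarithmic domain for tiny rates
    return value, False                   # original numerical domain otherwise

def from_training_domain(pred, used_log):
    """Invert the transform on a model output."""
    return 10.0 ** pred if used_log else pred
```

Training labels below 10^-2 then span a few well-separated units in log space instead of crowding near zero, which is what gives the ANN its sensitivity to very small rates.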
Step 3: design the constraint conditions and optimization objective for the MEC task offloading system.
Performance constraints: formulas (1), (2) and (3) ensure that B_ds, B_dt and P_dt each lie within their upper and lower bounds. Formula (4) ensures that U is greater than its lower bound, and formula (5) ensures that D does not exceed its upper bound.
B_ds^Min ≤ B_ds ≤ B_ds^Max (1)
B_dt^Min ≤ B_dt ≤ B_dt^Max (2)
P_dt^Min ≤ P_dt ≤ P_dt^Max (3)
U_Min ≤ U (4)
D ≤ D_Max (5)
Resource constraints: formula (6) ensures that the sum of the used link bandwidth resources does not exceed C. Formula (7) ensures that the storage resources allocated to DT tasks do not exceed S.
c_ds + c_s + c_dt ≤ C (6)
0 ≤ s ≤ S (7)
Optimization objective: formula (8) sets two goals: (1) minimizing B_ds, B_dt and P_dt within the upper- and lower-bound conditions on the required performance indices; (2) maximizing the resource utilization U. α, β, γ and ε are weight factors that the MEC operator can adjust according to actual requirements. B_ds, B_dt, P_dt, U and D are each functions of λ, δ, F, c_ds, c_dt, c_s and s.
max −α·lg(B_ds) − β·lg(B_dt) − γ·lg(P_dt) + ε·U (8)
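Formula (8) can be evaluated as a simple score: blocking and preemption rates in (0, 1) enter through −lg(rate), so smaller rates score higher, while utilization contributes linearly (positive weights assumed; the weight values here are illustrative):

```python
import math

def objective(b_ds, b_dt, p_dt, u, alpha=1.0, beta=1.0, gamma=1.0, eps=1.0):
    """Score of formula (8): higher is better.

    Driving a blocking or preemption rate from 1e-2 down to 1e-3 adds
    one weighted unit; utilization u in [0, 1] contributes linearly.
    """
    return (-alpha * math.log10(b_ds)
            - beta * math.log10(b_dt)
            - gamma * math.log10(p_dt)
            + eps * u)

good = objective(b_ds=1e-4, b_dt=1e-3, p_dt=1e-2, u=0.9)
bad = objective(b_ds=1e-1, b_dt=1e-1, p_dt=1e-1, u=0.9)
# the configuration with lower rates scores strictly higher
```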
Step 4: design the resource scheduling method based on the dual machine-learning models.
The flowchart of the resource scheduling method based on the dual machine-learning models is shown in FIG. 5; the specific steps of the scheduling method are as follows:
(1) Initialize the optimization objective value Obj and the resource decision set Optimal used in the method;
(2) Obtain the historical traffic data λ_{t-W}, …, λ_{t-1} from the historical traffic database using a sliding time window;
(3) Decompose the traffic value λ_{t-1} at time t-1 into L detail components d¹_{t-1}, …, d^L_{t-1} and an approximation component a^L_{t-1} using the Mallat algorithm of the wavelet transform;
(4) Feed λ_{t-W}, …, λ_{t-1}, d¹_{t-1}, …, d^L_{t-1}, a^L_{t-1} into the GRU traffic prediction model to obtain the predicted traffic value λ̂_t;
(5) Based on λ̂_t, use the ANN-based B_ds evaluation model to search for every feasible bandwidth amount C′ that satisfies the B_ds constraint;
(6) Search for all {c_ds, c_s} combinations satisfying C′ = c_ds + c_s and store them in the set cdssset;
(7) Use the ANN-based B_dt, P_dt, U and D evaluation models to search for the bandwidth amount c_dt and storage amount s that satisfy the B_dt, P_dt, U and D constraints;
(8) Enumerate all feasible combinations {c_ds, c_dt, c_s, s} that satisfy the B_ds, B_dt, P_dt, U and D performance constraints, and compute the objective function value Obj′ of each combination one by one;
(9) If the current value Obj′ is better than the best objective value Obj found so far, update Obj ← Obj′ and Optimal ← {c_ds, c_dt, c_s, s}; otherwise keep Obj and Optimal;
(10) After all feasible combinations have been searched, return the optimal resource scheduling decision Optimal = {c_ds, c_dt, c_s, s}.
Finally, the task admission control module pre-partitions the bandwidth resources and allocates the storage resources for offload tasks according to the optimal resource partitioning and allocation decision.
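Steps (5) to (10) amount to a constrained enumeration over candidate resource splits. A minimal sketch with stub callables standing in for the trained ANN models (the real models map traffic and resources to B_ds, B_dt, P_dt, U and D; all names here are illustrative):

```python
def schedule(total_c, total_s, feasible, score, step=1):
    """Enumerate {c_ds, c_dt, c_s, s} splits and keep the best feasible one.

    feasible(c_ds, c_dt, c_s, s) -- stands in for checking the ANN-predicted
        B_ds, B_dt, P_dt, U, D against their bounds
    score(c_ds, c_dt, c_s, s)    -- stands in for the formula-(8) objective
    """
    best, best_obj = None, float("-inf")
    for c_ds in range(0, total_c + 1, step):
        for c_s in range(0, total_c - c_ds + 1, step):
            for c_dt in range(0, total_c - c_ds - c_s + 1, step):
                for s in range(0, total_s + 1, step):
                    if not feasible(c_ds, c_dt, c_s, s):
                        continue
                    obj = score(c_ds, c_dt, c_s, s)
                    if obj > best_obj:
                        best_obj, best = obj, (c_ds, c_dt, c_s, s)
    return best

# toy stand-ins: require some DS bandwidth and some cache, cover total demand 6,
# and prefer using as little total resource as possible
ok = lambda c_ds, c_dt, c_s, s: c_ds >= 2 and s >= 1 and c_ds + c_dt + c_s >= 6
score = lambda c_ds, c_dt, c_s, s: -(c_ds + c_dt + c_s + s)
print(schedule(total_c=8, total_s=3, feasible=ok, score=score))
# prints (2, 4, 0, 1)
```

With trained models in place of the stubs, each feasibility check is one forward pass per index, which is what makes the exhaustive search over the discretized splits affordable.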
Step 5: update the models in real time.
In practical implementations, when the network environment changes substantially, a model trained offline in advance may no longer predict the offload-task traffic values and system performance indices accurately. The machine-learning models therefore need to be updated periodically from new data samples.
As shown in the flowchart of FIG. 6, the system continuously collects the latest real traffic values and system performance indices and stores them in the historical traffic database and the system performance index database. When a model update period is reached, the real values collected during the period are compared with the predicted values and the model loss is computed; when the loss is nonzero, the GRU traffic prediction model and the ANN performance evaluation models are updated.
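The period-end check can be sketched as follows (mean squared error is chosen here as the loss, and a small numerical tolerance replaces the strict nonzero test; both are assumptions, not fixed by the patent):

```python
def needs_update(real, predicted, tol=1e-6):
    """Compare real values collected over one update period with the
    model's predictions and report whether retraining is needed.
    """
    assert len(real) == len(predicted)
    loss = sum((r - p) ** 2 for r, p in zip(real, predicted)) / len(real)
    return loss > tol

assert not needs_update([3.0, 5.0], [3.0, 5.0])   # perfect predictions: keep models
assert needs_update([3.0, 5.0], [2.0, 5.5])       # drift detected: retrain models
```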
The above is a preferred embodiment of the present invention; all changes made according to the technical solution of the invention, whose resulting functional effects do not exceed the scope of the technical solution of the invention, belong to the protection scope of the invention.
Claims (8)
1. A storage-assisted MEC task offloading system, characterized by comprising an SDN controller and edge nodes that communicate with the SDN controller, wherein edge nodes deployed at a customer site or remote node provide computing power close to the client side, and edge nodes deployed at a central office or sink node extend the service coverage of the computing power; when an edge node at a customer site/remote node is overloaded with too many tasks awaiting processing, it can offload tasks to the edge nodes of adjacent central office/sink nodes; each edge node is provided with an OpenFlow switch, an MEC server and an optical switch, and edge nodes are connected by point-to-point optical fiber links; the SDN controller maintains the system and schedules the storage resources in the MEC servers and the bandwidth resources on the links.
2. The storage-assisted MEC task offloading system of claim 1, wherein the task offloading system jointly schedules DS offload tasks and DT offload tasks in the MEC system using three mechanisms: preemption, buffering, and resource partitioning; on the one hand, a DS offload task is allowed to preempt link bandwidth resources being used by a DT offload task, so that the bandwidth is used preferentially to transmit the DS offload task's data; on the other hand, when link bandwidth resources are insufficient, a DT offload task is allowed to temporarily buffer its data on the MEC server, and transmission resumes once link bandwidth becomes idle.
3. The storage-assisted MEC task offloading system of claim 1, wherein the task offloading system partitions the link bandwidth resource C into three regions c_ds, c_dt and c_s, wherein c_ds represents the amount of bandwidth resources for DS offload task data transfer, c_dt represents the amount of bandwidth resources for DT offload task data transfer, and c_s represents the amount of bandwidth resources shared by both kinds of tasks; preemption by DS offload tasks is only allowed to occur within c_s; s represents the storage resources available on the MEC server to buffer DT offload tasks.
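The three-region partition can be sketched as a simple admission check: each task class draws from its dedicated region first and then from the shared region. The function names and the all-or-nothing grant policy are illustrative assumptions; the claim's preemption of DT transfers inside c_s, and the fallback of buffering DT data in MEC storage s, are noted in comments but not modeled.

```python
def can_admit(task_type, demand, free):
    """free = {"ds": ..., "dt": ..., "s": ...}: free bandwidth units per region."""
    own = "ds" if task_type == "DS" else "dt"
    return demand <= free[own] + free["s"]

def admit(task_type, demand, free):
    """All-or-nothing grant of `demand` bandwidth units.
    The task draws from its dedicated region first, then from the shared
    region c_s (where, per claim 3, a DS task may additionally preempt DT
    transfers -- not modeled here). A rejected DT task would be buffered
    in MEC storage s rather than blocked; a rejected DS task is blocked."""
    if not can_admit(task_type, demand, free):
        return False
    own = "ds" if task_type == "DS" else "dt"
    from_own = min(demand, free[own])
    free[own] -= from_own
    free["s"] -= demand - from_own  # remainder comes from the shared region
    return True
```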
4. The storage-assisted MEC task offloading system of claim 1, wherein the task offloading system feeds back the respective system performance indexes B_ds, B_dt, P_dt, U and D, wherein B_ds represents the blocking rate of DS offload tasks, B_dt represents the blocking rate of DT offload tasks, P_dt represents the preemption rate of DT offload tasks, U represents the bandwidth resource utilization, and D represents the average delay required to complete a task.
5. A resource scheduling method based on the storage-assisted MEC task offloading system according to any of claims 1-4, comprising the steps of:
Step S1: designing a GRU flow prediction model based on a sliding time window and wavelet transformation; the inputs of the GRU flow prediction model are λ_{t-W}, ..., λ_{t-1}, d^1_{t-1}, ..., d^L_{t-1} and a^L_{t-1}, wherein λ_{t-W}, ..., λ_{t-1} are the W consecutive historical traffic values extracted with the sliding time window mechanism, d^1_{t-1}, ..., d^L_{t-1} are the L detail components obtained by applying an L-level wavelet analysis to λ_{t-1}, and a^L_{t-1} is the approximation component obtained after the wavelet decomposition; the output of the GRU flow prediction model is the predicted flow value λ̂_t for the next time t; a GRU flow prediction model capable of predicting the flow value at the next moment in advance is obtained through offline training on the historical flow database;
Step S2: designing a system performance evaluation model based on logarithmic transformation and an ANN neural network; λ, δ, F, c_ds, c_dt, c_s and s constitute the inputs of the model, wherein λ is the total task flow value, δ is the proportion of DS offload tasks among all tasks, F is the mean task data volume, c_ds, c_dt and c_s are respectively the amount of bandwidth resources used for DS offload tasks, the amount used for DT offload tasks and the amount shared by both kinds of tasks, and s is the amount of storage resources used to buffer DT offload tasks; five system performance evaluation models based on logarithmic transformation and the ANN neural network are constructed, whose outputs are the system performance indexes B_ds, B_dt, P_dt, U and D respectively, i.e. an ANN-based B_ds evaluation model, an ANN-based B_dt evaluation model, an ANN-based P_dt evaluation model, an ANN-based U evaluation model and an ANN-based D evaluation model; when a system performance index value is less than 10^-2, a logarithmic log(·) transformation is introduced and training is performed after converting the index from its original numerical domain to the logarithmic domain; when the value is greater than 10^-2, training is performed directly in the original numerical domain; by learning from the system performance index database, five evaluation models that predict the system performance indexes from the offload task flow and the resource amounts are obtained;
Step S3: designing a resource scheduling method based on a dual machine-learning model:
Designing the optimization target and constraint conditions: the objective function is designed with the optimization target of reducing B_ds, B_dt and P_dt and improving U; the upper and lower bounds of the B_ds, B_dt, P_dt, U and D system performance indexes, of the link bandwidth resource amount C, and of the MEC server storage resource amount s serve as constraint conditions.
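The input construction of step S1 (sliding window plus Mallat wavelet decomposition) can be sketched as follows. The patent does not name the mother wavelet, so the Haar filter pair is used here purely as a stand-in, and the functions decompose the windowed series rather than a single sample; all names are illustrative.

```python
def haar_step(signal):
    """One Mallat analysis step with the Haar filter pair:
    returns (approximation, detail), each at half resolution."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def mallat_decompose(signal, levels):
    """L-level decomposition: L detail components d^1..d^L plus the
    final approximation component a^L, as in step S1."""
    details, approx = [], list(signal)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return details, approx

def gru_input(history, window, levels):
    """Assemble the model input: the last `window` traffic samples
    (the sliding time window) plus their wavelet components."""
    recent = history[-window:]
    details, approx = mallat_decompose(recent, levels)
    return recent, details, approx
```

In an actual implementation the concatenated features would feed the GRU network; a library such as PyWavelets would replace the hand-rolled Haar steps.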
6. The resource scheduling method of the storage-assisted MEC task offloading system of claim 5, wherein the GRU flow prediction model comprises three layers: an input layer, a hidden layer and an output layer, wherein the hidden layer is a single GRU layer whose number of neurons is obtained by parameter tuning during training, and the activation functions of the GRU layer are the sigmoid and tanh functions; the output layer contains one fully connected neuron whose activation function is a linear activation function.
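For reference, a single GRU unit with the sigmoid/tanh activations named in the claim computes the following recurrence. This scalar, untrained sketch is only meant to show the gate structure; the weights in `w` are placeholders, and a real model would use a framework layer (e.g. a Keras or PyTorch GRU) with a tuned neuron count.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, w):
    """One step of a single-input, single-unit GRU.
    z: update gate, r: reset gate, n: candidate state (all scalar here).
    w maps weight names to scalar values -- illustrative, not trained."""
    z = sigmoid(w["wz"] * x + w["uz"] * h + w["bz"])
    r = sigmoid(w["wr"] * x + w["ur"] * h + w["br"])
    n = math.tanh(w["wn"] * x + w["un"] * (r * h) + w["bn"])
    return (1 - z) * h + z * n  # new hidden state, a convex mix of old and candidate
```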
7. The resource scheduling method of the storage-assisted MEC task offloading system of claim 5, wherein the system performance evaluation model based on logarithmic transformation and the ANN neural network comprises three layers: an input layer, a hidden layer and an output layer, wherein the hidden layer is a single fully connected layer whose number of neurons is obtained by parameter tuning during training, using ReLU as the activation function; the output layer contains one fully connected neuron, again using ReLU as the activation function.
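The logarithmic transformation of step S2 (switching very small index values into the log domain before training these ANN models) can be sketched as a pair of helpers. The base-10 logarithm and the handling of the exact threshold are assumptions, since the claim only specifies a log(·) transformation with a 10^-2 cutoff; index values of exactly zero would need a small floor before taking the logarithm.

```python
import math

THRESHOLD = 1e-2  # the patent's cutoff for switching to the log domain

def to_training_domain(value):
    """Map a performance-index sample into the domain it is trained in:
    log10 for very small indices (< 1e-2), identity otherwise.
    Returns (transformed_value, was_log) so the inverse is unambiguous."""
    if value < THRESHOLD:
        return math.log10(value), True
    return value, False

def from_training_domain(prediction, was_log):
    """Invert the transform after the ANN produces a prediction."""
    return 10 ** prediction if was_log else prediction
```

The motivation is numerical: blocking and preemption rates can span several orders of magnitude below 10^-2, and training in the log domain keeps their relative errors comparable.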
8. The resource scheduling method of the storage-assisted MEC task offloading system of claim 5, wherein in step S3 the resource scheduling method based on the dual machine-learning model is specifically implemented as follows:
(1) Initialize the optimization target value Obj and the optimal resource set Optimal used in the method;
(2) Obtain the historical flow data λ_{t-W}, ..., λ_{t-1} from the historical flow database using the sliding time window;
(3) Decompose the historical flow data λ_{t-1} at time t-1 into L detail components d^1_{t-1}, ..., d^L_{t-1} and an approximation component a^L_{t-1} using the Mallat algorithm of the wavelet transform;
(4) Take λ_{t-W}, ..., λ_{t-1}, d^1_{t-1}, ..., d^L_{t-1}, a^L_{t-1} as input of the GRU flow prediction model and obtain the predicted flow value λ̂_t;
(5) According to the predicted flow value λ̂_t, use the ANN-based B_ds evaluation model to search all feasible bandwidth resource amounts C' satisfying the B_ds constraint condition;
(6) Search all {c_ds, c_s} combinations satisfying C' = c_ds + c_s and store them in the set cdssset;
(7) Use the ANN-based B_dt, P_dt, U and D evaluation models to search the bandwidth resource amount c_dt and storage resource amount s satisfying the B_dt, P_dt, U and D constraints, and store them;
(8) Enumerate all feasible scheme combinations {c_ds, c_dt, c_s, s} satisfying the B_ds, B_dt, P_dt, U and D system performance index constraint conditions, and compute the optimization objective function value Obj' of each combination one by one;
(9) If the current target value Obj' is better than the existing optimal objective function value Obj, update Obj and Optimal, i.e. Obj ← Obj', Optimal ← {c_ds, c_dt, c_s, s}; otherwise keep Obj and Optimal;
(10) After searching all feasible scheme combinations, return the optimal resource scheduling decision Optimal = {c_ds, c_dt, c_s, s}.
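Steps (5)-(10) amount to an exhaustive search over feasible resource combinations, keeping the best objective value. In this sketch, `feasible` and `objective` are toy callables standing in for the five ANN evaluation models and the objective function of step S3, and integer bandwidth/storage units are an assumption; the enumeration and the keep-the-best update of steps (8)-(10) are what the code actually demonstrates.

```python
def schedule(C_max, s_max, feasible, objective):
    """Enumerate all partitions (c_ds, c_dt, c_s) with total <= C_max
    bandwidth units and storage s <= s_max, keep those accepted by
    `feasible` (stand-in for the B_ds/B_dt/P_dt/U/D constraint checks),
    and return (best_plan, best_objective)."""
    best, best_obj = None, float("-inf")
    for c_ds in range(C_max + 1):
        for c_dt in range(C_max + 1 - c_ds):
            for c_s in range(C_max + 1 - c_ds - c_dt):
                for s in range(s_max + 1):
                    plan = (c_ds, c_dt, c_s, s)
                    if not feasible(plan):
                        continue
                    obj = objective(plan)
                    if obj > best_obj:  # step (9): keep the better plan
                        best, best_obj = plan, obj
    return best, best_obj  # step (10): optimal decision after full search
```

In practice the search space would be pruned as in steps (5)-(7), first restricting C' via the B_ds model before enumerating the remaining variables, rather than looped over in full.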
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310703258.7A CN116760722A (en) | 2023-06-14 | 2023-06-14 | Storage auxiliary MEC task unloading system and resource scheduling method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116760722A true CN116760722A (en) | 2023-09-15 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117492856A (en) * | 2023-10-17 | 2024-02-02 | 南昌大学 | Low-delay edge computing and unloading method for credit assessment in financial Internet of things |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110035410B (en) | Method for joint resource allocation and computational offloading in software-defined vehicle-mounted edge network | |
Jiang | Cellular traffic prediction with machine learning: A survey | |
Manogaran et al. | Machine learning assisted information management scheme in service concentrated IoT | |
CN110839184B (en) | Method and device for adjusting bandwidth of mobile fronthaul optical network based on flow prediction | |
CN110941667A (en) | Method and system for calculating and unloading in mobile edge calculation network | |
CN111049903B (en) | Edge network load distribution algorithm based on application perception prediction | |
CN115175217A (en) | Resource allocation and task unloading optimization method based on multiple intelligent agents | |
CN111522657A (en) | Distributed equipment collaborative deep learning reasoning method | |
CN113825152A (en) | Capacity control method, network management device, management arrangement device, system and medium | |
CN116455768B (en) | Cloud edge end collaborative CNN reasoning method and system for global time delay optimization | |
CN114490057A (en) | MEC unloaded task resource allocation method based on deep reinforcement learning | |
CN114328291A (en) | Industrial Internet edge service cache decision method and system | |
CN114650228A (en) | Federal learning scheduling method based on computation unloading in heterogeneous network | |
Wang | Edge artificial intelligence-based affinity task offloading under resource adjustment in a 5G network | |
CN111580943B (en) | Task scheduling method for multi-hop unloading in low-delay edge calculation | |
CN116643844B (en) | Intelligent management system and method for automatic expansion of power super-computing cloud resources | |
Dong et al. | Collaborative video analytics on distributed edges with multiagent deep reinforcement learning | |
CN116109058A (en) | Substation inspection management method and device based on deep reinforcement learning | |
CN114693141B (en) | Transformer substation inspection method based on end edge cooperation | |
CN115914230A (en) | Adaptive mobile edge computing unloading and resource allocation method | |
Lu et al. | Enhancing vehicular edge computing system through cooperative computation offloading | |
KR20240016572A (en) | Cloud-Multiple Edge Server Collaboration System and Method Based on Service Classification in Intelligent Video Security Environments | |
CN117062025B (en) | Energy-saving combined computing unloading and resource allocation method for Internet of vehicles | |
CN114077491B (en) | Industrial intelligent manufacturing edge computing task scheduling method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||