CN112860350B - Task cache-based computation unloading method in edge computation - Google Patents


Info

Publication number: CN112860350B
Authority: CN (China)
Prior art keywords: task, MEC, model, calculation, cache
Legal status: Active
Application number: CN202110275573.5A
Other languages: Chinese (zh)
Other versions: CN112860350A
Inventors: 卞圣强, 覃少华, 谢志斌, 王海燕, 张家豪, 崔硕
Current assignees: Dragon Totem Technology Hefei Co., Ltd.; Hubei Central China Technology Development of Electric Power Co., Ltd.
Original assignee: Guangxi Normal University
Application filed by Guangxi Normal University
Priority: CN202110275573.5A
Publication of application: CN112860350A
Publication of grant: CN112860350B

Classifications

    • G06F9/44594 — Unloading (program loading or initiating)
    • G06F9/4881 — Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/5038 — Allocation of resources to service a request, considering the execution order of a plurality of tasks, e.g. priority or time-dependency constraints
    • G06F9/5072 — Grid computing (partitioning or combining of resources)
    • G06F2209/5021 — Priority (indexing scheme relating to G06F9/50)
    • G06F2209/509 — Offload (indexing scheme relating to G06F9/50)
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a task cache-based computation offloading method in edge computing, comprising the following steps: 1) construct a system model; 2) construct a system communication model; 3) construct a system computation model; 4) construct a resource allocation model; 5) construct a task value model and a cache model; 6) construct a system overhead model; 7) solve the resulting optimization problem. The method effectively reduces algorithm running time, improves server response speed, cuts the system energy consumed by repeated computation through the task cache mechanism, lowers server computation cost, and reduces the weighted sum of completion delay and energy consumption over all tasks.

Description

Task cache-based computation unloading method in edge computation
Technical Field
The invention relates to the field of mobile edge computing systems, and in particular to a task cache-based computation offloading method in edge computing.
Background
With the rapid development of wireless communication, Internet-of-Things and 5G technologies, mobile devices have proliferated and data traffic has grown sharply. Emerging applications such as online gaming, artificial intelligence and virtual reality have stringent latency requirements and demand significant computing resources. Mobile terminals, however, have limited computing power and battery capacity, so running these applications locally incurs high computational delay and drains the terminal's battery. Meanwhile, the long transmission distances of the cloud-computing-center service model introduce transmission delays that often cannot meet the requirements of real-time applications. The Mobile Edge Computing (MEC) paradigm arose in response: by sinking services to the network edge, it reduces network transmission delay and satisfies low-latency service requirements.
Edge computing not only alleviates the shortage of computing and storage resources on mobile devices, but also addresses the excessive transmission delay and server load of the traditional cloud-computing-center model. However, as the number of network users grows, herd behavior leads many users to issue identical access requests, so large amounts of content are transmitted repeatedly over the backbone network within a service period. Caching hot content at the network edge relieves backbone pressure, reduces transmission delay and improves user experience. Tan et al. [Z. Tan, F. R. Yu, X. Li, H. Ji, and V. C. M. Leung, "Virtual resource allocation for heterogeneous services in full duplex-enabled SCNs with mobile edge computing and caching," IEEE Transactions on Vehicular Technology, vol. 67, no. 2, pp. 1794-1808, 2017] propose an MEC offloading and caching framework in full-duplex small cell networks (SCNs) that improves system revenue by combining computation offloading with content caching. Chien et al. [W.-C. Chien, H.-Y. Weng, and C.-F. Lai, "Q-learning based collaborative cache allocation in mobile edge computing," Future Generation Computer Systems, vol. 102, pp. 603-610, 2020] propose, to improve the content cache hit rate, an SDN-based MEC architecture that first models caching, maximizes the hit rate under MEC storage constraints, and solves the caching strategy with Q-learning. In these works, however, content caching and task offloading are treated independently.
To date, little research has addressed proactively caching task results. If tasks of high popularity are proactively cached at the edge server, then when a task is requested again the computation result can be returned to the user directly from the MEC, greatly reducing the task's computation delay and the user's energy consumption. Many real scenarios allow computation results to be reused: some game-rendering scenes are shared among players; in AR, certain services are requested repeatedly; in video tasks, hot videos are decoded repeatedly. Caching the results of hot computation tasks can therefore relieve the edge server's computational burden. Zhao et al. [H. Zhao, Y. Wang, and R. Sun, "Task proactive caching based computation offloading and resource allocation in mobile-edge computing systems," in 2018 14th International Wireless Communications & Mobile Computing Conference (IWCMC), 2018, pp. 232-237: IEEE] jointly optimize task-cache-based computation offloading and resource allocation in a single-cell scenario to reduce task completion delay. They first model the offloading, resource allocation and task caching, taking the minimized completion delay of all tasks as the optimization target; since the resulting delay-optimization problem is NP-hard and cannot be solved in polynomial time, they decompose the model into two parts and design an improved greedy algorithm to solve the offloading strategy.
Most existing research on task caching designs the caching strategy from the perspective of task popularity alone. For compute-intensive applications such as virtual reality, AR services and large games, however, the computation amount and the data size of a task also affect its caching value. For example, a task may be the most popular yet require little computation to complete, whereas caching a less popular but compute-heavy task may yield a greater benefit. How to allocate the weight of each factor reasonably is therefore a significant problem: unreasonable weight allocation distorts the tasks' caching value.
Disclosure of Invention
The invention aims to provide, in view of the deficiencies of the prior art, a task cache-based computation offloading method in edge computing. The method effectively reduces algorithm running time, improves server response speed, cuts the system energy consumed by repeated computation through the task cache mechanism, lowers server computation cost, and reduces the weighted sum of completion delay and energy consumption over all tasks.
The technical scheme for realizing the purpose of the invention is as follows:
The task cache-based computation offloading method in edge computing comprises, in a single-cell scenario, the following steps:
1) Construct the system model: an MEC server is deployed at the base station and connected to a remote central cloud by optical fiber. Assume several mobile devices are randomly distributed in the cell and each user generates only one task per service period (time slot). Mobile users are indexed m ∈ {1, 2, ..., M}, and the task T_i (i ∈ {1, 2, ..., N}) generated by a mobile device is represented as the quadruple

T_i = (d_i^{in}, C_i, d_i^{out}, τ_i),

where d_i^{in} is the task's input data volume in kbit; C_i is the number of CPU cycles required to complete the task; d_i^{out} is the task's output data volume, i.e. the size of the computation result, in kbit; and τ_i is the maximum tolerable delay for completing the task, in ms. This completes the system model. The present scheme focuses on the mobile terminal and the edge server: the mobile terminal generates and runs applications and, if an application's computing demand cannot be met locally, issues an offloading request to the edge server over the mobile network; the edge server receives the request, allocates appropriate computing resources to ensure the task completes within its tolerable delay, and decides whether to cache the task according to its caching value;
2) Construct the system communication model: the MEC server provides computing services to the mobile devices in the cell; each user generates at most one compute-intensive task per time slot and chooses whether to offload it to the MEC. Let A = {a_1, a_2, ..., a_N} denote the set of offloading decisions of all mobile devices (MDs), where a_i = 0 means M_i's task executes on the local device and a_i = 1 means the task is offloaded to the MEC. Assuming no intra-cell interference, by the Shannon formula the rate at which M_i transmits its task to the MEC server is

r_i = B log_2(1 + p_i g_i / N_0),

where B is the channel bandwidth, g_i is the channel gain between mobile device M_i and the MEC server, p_i is M_i's transmit power, and N_0 is the Gaussian noise power. This completes the system communication model;
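As an illustration, the uplink rate formula above can be evaluated numerically. The sketch below is a minimal Python rendering under the assumption that bandwidth, transmit power, channel gain and noise power are all given in consistent linear units (not dBm); the function name and the numbers are illustrative, not taken from the patent.

```python
import math

def transmission_rate(bandwidth, p_tx, gain, noise):
    """Shannon uplink rate r_i = B * log2(1 + p_i * g_i / N_0).

    All quantities are assumed to be in consistent linear units;
    dBm values must be converted to watts before calling this.
    """
    return bandwidth * math.log2(1 + p_tx * gain / noise)

# Illustrative numbers: 10 MHz bandwidth, p*g/N = 3, so log2(4) = 2
rate = transmission_rate(10e6, 1.5, 2.0, 1.0)  # 2e7 bit/s
```

In practice the same task upload delay d_i^{in} / r_i used later in the computation model follows directly from this rate.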
3) Construct the system computation model: when processing tasks, the user sends an offloading request to the edge server; a computation task is offloaded only if the application's tolerable delay is met and the system overhead is thereby reduced;
3.1) Construct the local computation model: when task T_i is executed locally, let f_i^l denote the CPU frequency of mobile device M_i, in GHz. The local execution delay of task T_i is

T_i^l = C_i / f_i^l,

and the corresponding energy consumption E_i^l is calculated as

E_i^l = κ C_i (f_i^l)^2,

where κ is the CPU power-consumption coefficient, a fixed constant determined by the chip process, set here to κ = 10^{-26}.
When task T_i is executed locally, the total system overhead comprises the local computation delay and the terminal energy consumption; in this case the overhead of a single device is

Z_i^l = α T_i^l + β E_i^l,

where α and β are the weight coefficients of delay and energy consumption, respectively, satisfying
α + β = 1, 0 ≤ α ≤ 1, 0 ≤ β ≤ 1;
3.2) Construct the MEC computation model: when task T_i is offloaded to the MEC, its delay comprises the transmission delay from mobile device M_i to the MEC and the execution delay at the MEC. Denote by f_i^c the computing power the MEC assigns to the user; the delay incurred by offloaded computation consists of the transmission delay T_i^{tran} = d_i^{in} / r_i and the execution delay T_i^{exec} = C_i / f_i^c, so the total delay of offloading the task to the MEC is

T_i^c = d_i^{in} / r_i + C_i / f_i^c.

The energy the mobile device consumes when offloading to the MEC arises from transmitting the task over the wireless link; this transmission energy is

E_i^c = p_i d_i^{in} / r_i.

The MEC's own energy consumption is independent of the user and is not counted as system overhead, so the total overhead of offloading the task to the MEC server is

Z_i^c = α T_i^c + β E_i^c,

completing the construction of the system computation model;
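To make the local-versus-offload tradeoff concrete, the following Python sketch evaluates the two overhead expressions Z_i^l and Z_i^c side by side. The numeric values and the decision rule (offload when Z_i^c < Z_i^l) are illustrative assumptions consistent with the model above, not values prescribed by the patent.

```python
def local_overhead(c_cycles, f_local, alpha, beta, kappa=1e-26):
    """Z_i^l = alpha*T_i^l + beta*E_i^l for local execution."""
    delay = c_cycles / f_local                 # T_i^l = C_i / f_i^l
    energy = kappa * c_cycles * f_local ** 2   # E_i^l = kappa*C_i*(f_i^l)^2
    return alpha * delay + beta * energy

def mec_overhead(d_in, c_cycles, rate, f_mec, p_tx, alpha, beta):
    """Z_i^c = alpha*T_i^c + beta*E_i^c for offloaded execution."""
    delay = d_in / rate + c_cycles / f_mec     # transmission + execution
    energy = p_tx * d_in / rate                # wireless transmission only
    return alpha * delay + beta * energy

# Illustrative task: 1 Gcycle on a 1 GHz local CPU vs a 4 GHz MEC share
z_l = local_overhead(1e9, 1e9, alpha=0.5, beta=0.5)
z_c = mec_overhead(d_in=100.0, c_cycles=1e9, rate=50.0,
                   f_mec=4e9, p_tx=0.5, alpha=0.5, beta=0.5)
offload = z_c < z_l                            # decision a_i = 1 if cheaper
```

With these illustrative numbers offloading wins because the MEC's faster CPU more than compensates for the transmission cost.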
4) Construct the resource allocation model: under each task's tolerable-delay constraint, the edge server allocates appropriate computing resources according to the task's attributes. Let F = [f_1, f_2, ..., f_N] denote the computing-resource allocation vector, where f_i is the computing resource allocated to M_i. This completes the resource allocation model;
5) Construct the task value model and cache model: denote the MEC's cache vector over the task content requested by users as H = {h_1, h_2, ..., h_N}, where h_i is a binary variable indicating whether the MEC has cached user M_i's task and related data content: h_i = 0 means the content is not cached and h_i = 1 means it is. If the content is already cached, no offloading is needed when the task is requested; the MEC directly returns the result to the mobile device after the task is requested. Under cost and storage-space limits, tasks with higher caching value are stored in the MEC and tasks with low caching value are replaced. Following the Zipf distribution of file popularity, the popularity of the i-th task is

θ_i = i^{-Z} / Σ_{j=1}^{N} j^{-Z},

where θ_i denotes the popularity of the i-th task and Z is the Zipf exponent. The task set cached by the MEC is denoted H_c, with maximum storage capacity C, initialized empty. Because the MEC server's storage capacity is limited, it caches only tasks with higher caching value and then outputs a task caching strategy H*. The caching value is defined as

V_i = w_1 θ_i + w_2 d_i^{in} + w_3 C_i,

where w_1, w_2 and w_3 are the weight coefficients of the task's popularity, the task's input-data size and the computation amount the task requires, respectively, satisfying
w_1 + w_2 + w_3 = 1, 0 ≤ w_1 ≤ 1, 0 ≤ w_2 ≤ 1, 0 ≤ w_3 ≤ 1,
completing the construction of the system task value model and cache model;
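The popularity and caching-value definitions can be sketched as follows. This is a minimal illustration under two assumptions the patent does not fix: a concrete Zipf exponent value, and that the three attributes are normalized to comparable scales before weighting (otherwise raw kbit and cycle counts would swamp the popularity term).

```python
def zipf_popularity(rank, n_tasks, z=0.8):
    """theta_i = i^(-z) / sum_j j^(-z): Zipf popularity of the rank-th task."""
    denom = sum(j ** -z for j in range(1, n_tasks + 1))
    return rank ** -z / denom

def cache_value(popularity, d_in_norm, c_norm, w=(0.5, 0.2, 0.3)):
    """V_i = w1*theta_i + w2*d_in + w3*C_i over normalized attributes.

    The weights here are placeholders; the patent derives them via AHP.
    """
    return w[0] * popularity + w[1] * d_in_norm + w[2] * c_norm

# Popularities over N tasks sum to 1, and lower ranks are more popular
thetas = [zipf_popularity(i, 5) for i in range(1, 6)]
```

Ranking candidate tasks by `cache_value` and keeping the top entries within the storage budget gives the caching strategy H*.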
6) Construct the system overhead model: task completion delay and mobile-terminal energy consumption are the key indicators of an offloading strategy's quality. Under the constraints of MEC computing capacity, storage resources and task tolerable delay, the system overhead to be minimized is modeled as

C(A, F, H) = Σ_{i=1}^{N} (1 − h_i) [(1 − a_i) Z_i^l + a_i Z_i^c].

Taking system benefit as the optimization target, the scheme seeks the optimal offloading strategy A*, computing-resource allocation F* and task caching strategy H* that minimize the system overhead. The optimization problem is therefore expressed as:

min_{A,F,H} C(A, F, H),
A = {a_1, a_2, ..., a_N}
F = {f_1, f_2, ..., f_N}
H = {h_1, h_2, ..., h_N},

subject to the constraints:

C1: a_i ∈ {0, 1}, ∀i
C2: f_i ≥ 0, ∀i
C3: Σ_{i=1}^{N} a_i f_i ≤ F_c
C4: Σ_{i=1}^{N} h_i d_i^{out} ≤ C
C5: T_i ≤ τ_i, ∀i,

where A is the offloading strategy set, F the computing-resource allocation strategy set and H the task caching strategy set. Constraint C1 states that a computation task executes either locally or on the MEC server; C2 and C3 state that the sum of computing resources allocated by the MEC server cannot exceed its computing capacity F_c; C4 states that the total cached volume cannot exceed the MEC server's total storage space; C5 is the delay constraint: a task's completion time cannot exceed the maximum delay the task can tolerate. This completes the system overhead model;
7) Solve the problem, using the Q-Learning algorithm and the Analytic Hierarchy Process:
The task-cache-based computation offloading studied here is a typical multi-objective combinatorial optimization problem and is NP-hard. Many heuristic algorithms can solve such problems, e.g. game theory, genetic algorithms (GA) and particle swarm optimization (PSO). Although these heuristics perform well on their respective targets, such as reducing the user's delay cost or energy cost, they respond to tasks and make decisions periodically, whereas in real scenarios tasks arrive randomly and require real-time response; moreover, conventional algorithms need many iterations to reach an optimal solution, incurring high running-time cost.
To address this, the scheme adopts reinforcement learning to solve the task offloading and resource allocation problems. Reinforcement learning obtains a near-optimal solution through continual trial and error and comprises four basic elements: state, action, reward and agent; its aim is to maximize long-term return. Reinforcement learning divides into model-based and model-free algorithms; since a real network scenario is too complex to model, the scheme uses the model-free Q-Learning algorithm.
The Analytic Hierarchy Process (AHP) is a combined qualitative and quantitative decision model. It divides the elements of a decision into a goal layer, a criterion layer and an alternative layer, quantifies the relative importance of elements through pairwise comparison at each level, and derives each element's weight coefficient by computation, making it well suited to the weight-allocation needs of task scheduling.
The scheme assesses task value from several angles, giving tasks of higher value higher processing priority. Three aspects are considered: the task's maximum tolerable delay, the computing resources the task requires, and the task's data volume. Since the scheme's main goal is to maximize the task completion rate, the tolerable delay carries the largest weight; the required computing capacity and data volume also influence the completion rate, but less so than the tolerable delay.
Therefore, the maximum tolerable delay has the highest importance and the task data size the lowest. The scheme first builds the hierarchical model, then compares the criterion-layer factors pairwise, constructs the criterion-layer judgment matrix from objective judgments, computes the eigenvector, principal eigenvalue and weights from the judgment matrix, and finally validates the judgment matrix through a consistency check;
Q-Learning algorithm:
System state: the system state s comprises the offloading decision vector A, the computing-resource allocation vector F and the remaining-computing-resource vector G, i.e.

S = {A, F, G}.

System action: the agent decides which tasks are offloaded and which are not, and how much computing resource is allocated to each task, so a system action is expressed as

a = {a_i, f_i},

where a_i represents task T_i's offloading decision and f_i the computing resource allocated to task T_i.
System reward: in time slot t, after executing each possible action in a given state, the agent obtains a reward R(s, a). The reward function must be tied to the objective function; since the optimization problem here minimizes system overhead, the reward is defined as

R(s, a) = c_local − c(s, a),

where c_local is the total system cost of executing the tasks locally at time t and c(s, a) is the total system cost of the JORC method in the current state.
In the Q-Learning algorithm, the agent observes the current environment state s_t at time t, selects action a_t according to the Q table, executes a_t, enters state s_{t+1} and obtains reward r. The Q table is updated by

Q(s_t, a_t) ← Q(s_t, a_t) + δ [r + γ max_a Q(s_{t+1}, a) − Q(s_t, a_t)],

iterating continually until the Q values converge, yielding the optimal policy π*, where δ is the learning rate and 0 < γ < 1 is the discount factor;
the Q-Learning algorithm is as follows:
inputting: training round number T, learning rate mu, discount factor gamma, greedy coefficient epsilon, task set M1 and MEC residual computing resources;
and (3) outputting: offload policy A*Allocation of computing resources F*
1. Initializing a Q matrix
2. Enter the following circulation
3. At an initial state s, an action a is selected according to a greedy strategy
4. Enter the following circulation
5. Obtaining a return r at s selection action a according to a greedy strategy and entering the next state st+1
6.
Figure GDA0003506970100000073
7.s=st+1
8. Exiting the current layer loop until s is the termination state
9. Exiting the total cycle until the Q value is converged;
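The pseudocode above is standard tabular Q-learning. The following self-contained Python sketch shows the update rule on a deliberately tiny toy environment (one decision state, with two actions standing in for "local" and "offload"); the environment, reward values and hyperparameters are illustrative assumptions, not the patent's actual state/action encoding.

```python
import random
from collections import defaultdict

def q_learning(step, actions, episodes=300, delta=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning with an epsilon-greedy policy.

    `step(state, action)` must return (next_state, reward, done).
    """
    q = defaultdict(float)
    for _ in range(episodes):
        state, done = "start", False
        while not done:
            if random.random() < eps:                      # explore
                action = random.choice(actions)
            else:                                          # exploit
                action = max(actions, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in actions)
            q[(state, action)] += delta * (reward + gamma * best_next
                                           - q[(state, action)])
            state = nxt
    return q

# Toy one-step environment: offloading is cheaper, so it earns more reward
def toy_step(state, action):
    return "done", (1.0 if action == "offload" else 0.2), True

random.seed(0)
q = q_learning(toy_step, ["local", "offload"])
```

After training, the greedy policy derived from the Q table prefers the higher-reward action, mirroring how the scheme's agent learns to prefer offloading decisions that lower system overhead.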
Analytic Hierarchy Process:
1. Build the hierarchical model. The model has 3 layers: the first layer is the task priority P; the second layer comprises the task's maximum tolerable delay τ, the task's computation amount C and the task's data volume D; the third layer is the task T.
2. Construct the judgment matrix. Within a level, a_ij denotes the importance of the i-th element relative to the j-th element with respect to a factor of the layer above. The criterion layer has 3 elements in total, giving a judgment matrix A = (a_ij)_{3×3}, where

a_ij > 0, a_ji = 1 / a_ij, a_ii = 1.

The three elements τ, C, D are all governed by P. From the judgment matrix, solve for the matrix's maximum eigenvalue λ_max and the corresponding weight vector W = (w_1, w_2, w_3)^T.
3. Consistency check: compute the inconsistency index CI of judgment matrix A and, from the random index RI, the consistency ratio CR:

CI = (λ_max − n) / (n − 1), CR = CI / RI.

When CR < 0.1, the judgment matrix has satisfactory consistency; otherwise revise A until satisfactory consistency is reached. This finally yields the weight vector W of the computation task, and the task's caching value is expressed as

V_i = w_1 θ_i + w_2 d_i^{in} + w_3 C_i,

where w_1, w_2 and w_3 are the weight coefficients of the task's popularity, the task's input-data size and the computation amount the task requires, respectively, satisfying
w_1 + w_2 + w_3 = 1, 0 ≤ w_1 ≤ 1, 0 ≤ w_2 ≤ 1, 0 ≤ w_3 ≤ 1.
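The AHP computation in steps 2-3 can be carried out numerically. Below is a sketch using plain power iteration for the principal eigenvector (the patent does not prescribe a particular method); the example judgment matrix is an illustrative, perfectly consistent one rather than the patent's actual comparison values, and RI = 0.58 is the standard random index for n = 3.

```python
def ahp_weights(judgment, ri=0.58, iters=100):
    """Weights from an AHP judgment matrix via power iteration,
    plus the consistency check CI = (lmax - n)/(n - 1), CR = CI/RI."""
    n = len(judgment)
    w = [1.0 / n] * n
    for _ in range(iters):                     # power iteration
        v = [sum(judgment[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(v)
        w = [x / total for x in v]             # normalized eigenvector
    lmax = sum(sum(judgment[i][j] * w[j] for j in range(n)) / w[i]
               for i in range(n)) / n
    ci = (lmax - n) / (n - 1)
    return w, ci / ri                          # (weight vector, CR)

# Illustrative consistent matrix: tau is 2x as important as C, 4x as D
A = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
w, cr = ahp_weights(A)                         # w ~ (4/7, 2/7, 1/7), CR ~ 0
```

A perfectly consistent matrix yields λ_max = n and hence CR = 0; real pairwise judgments are only accepted when CR < 0.1, as step 3 requires.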
the time delay and energy consumption cost of the system are minimized by a combined optimization calculation unloading, resource allocation and task caching method, the JORC comprehensively considers the scheme of task unloading, bandwidth and calculation resource allocation, firstly, an unloading, resource allocation and task caching model is modeled, for task caching, the caching value is defined according to various attributes of tasks, secondly, the calculation unloading and resource allocation problem is formalized and modeled into a Markov model, the weighted sum of the time delay and the energy consumption of task completion is minimized to be an evaluation index, an MEC server detects whether the tasks are cached or not, if the MEC caches the task calculation result, the MEC returns directly and determines whether to replace an MEC caching task set or not by a JORC method, if the calculation task result is not cached, an unloading strategy and a resource allocation strategy are determined by a Q-learning algorithm to minimize the system cost, and the specific JORC method is described as follows, the JORC method is a method for calculating unloading, resource allocation and task caching through joint optimization:
Input: user request set N1, server information G, cache status H;
Output: H*, A*, F*.
1. For i = 1 : N1:
2.   Mobile device i generates task T_i and issues an offloading request
3.   If T_i is in the cache set:
4.     Return the cached computation result
5.   Else:
6.     Add the task to task set M1
7.   End if
8. Input M1 into the Q-learning algorithm to obtain A*, F*
9. If some task in H has a caching value less than V_i:
10.   Replace that entry with T_i's result
11. End if
12. Output the task caching strategy H*
13. End for
This completes the problem solution.
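The JORC request-handling loop above can be sketched in Python as follows. This is a simplified illustration: `solve_offload` stands in for the Q-learning step, `value` for the AHP-weighted caching value, and the replacement rule keeps the highest-value results within the storage capacity; all names are assumptions introduced for exposition.

```python
def jorc_dispatch(requests, cache, capacity, value, solve_offload):
    """One service period of JORC: cache hits return directly, misses are
    offloaded/allocated, then the cache keeps the highest-value results."""
    hits = [t for t in requests if t in cache]
    pending = [t for t in requests if t not in cache]      # task set M1
    plan = solve_offload(pending) if pending else {}       # A*, F* stand-in
    # Replacement: cache the `capacity` highest-value completed tasks
    ranked = sorted(set(cache) | set(pending), key=value, reverse=True)
    new_cache = set(ranked[:capacity])
    return hits, plan, new_cache

values = {"a": 3.0, "b": 1.0, "c": 2.0}
hits, plan, new_cache = jorc_dispatch(
    ["a", "b", "c"], cache={"a"}, capacity=2,
    value=values.get, solve_offload=lambda ts: {t: "mec" for t in ts})
```

With these illustrative values, task "a" is served from cache, "b" and "c" are offloaded, and the replacement step caches the two highest-value results ("a" and "c").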
Compared with existing research, the scheme has the following features:
1. For the task caching problem, the scheme jointly considers task offloading, bandwidth and computing-resource allocation, and task caching, and models the minimization of task completion delay and energy consumption under computing, wireless and MEC-storage resource constraints. The offloading process is modeled as an MDP, and a Q-Learning-based algorithm is designed to solve it.
2. The task caching value is determined from multiple task attributes: the task's popularity, the task's input-data size and the computation amount the task requires are combined with different weights. Prioritizing tasks of higher caching value better reduces the server's computation cost and lowers the weighted sum of completion delay and energy consumption over all tasks.
3. Taking the tasks' multiple attributes into account, a weight allocation algorithm based on the Analytic Hierarchy Process is proposed to assign the weights.
The method effectively reduces algorithm running time, improves server response speed, cuts the system energy consumed by repeated computation through the task cache mechanism, lowers server computation cost, and reduces the weighted sum of completion delay and energy consumption over all tasks.
Drawings
FIG. 1 is a framework diagram of the embodiment;
FIG. 2 is a schematic view of the AHP hierarchical model in the embodiment.
Detailed Description
The invention is described in further detail below with reference to the following figures and specific examples, but the invention is not limited thereto.
Example (b):
this embodiment studies different types of virtual reality tasks in a single-cell scenario. A joint offloading and task caching strategy is proposed that considers task offloading together with bandwidth and computing-resource allocation. First, the offloading, resource allocation, and task caching models are built; for task caching, a cache value is defined from multiple task attributes. Then the computation offloading and resource allocation problem is formalized as a Markov model, with the weighted sum of task completion delay and energy consumption to be minimized as the evaluation index.
Referring to FIG. 1, a task cache-based computation offloading method in edge computing comprises, in a single-cell scenario, the following steps:
1) Constructing the system model: an MEC server is deployed at the base station and connected to a remote central cloud by optical fiber. Several mobile devices are assumed to be randomly distributed in the cell, and each user generates only one task per service period (time slot). Mobile users are indexed m ∈ {1, 2, ..., M}, and the task Ti (i ∈ {1, 2, ..., N}) generated by a mobile device is represented as a quadruple

Ti = (Di^in, Ci, Di^out, τi),

where Di^in is the input data volume of the task, in kbit; Ci is the number of CPU cycles required to complete the task, in cycles; Di^out is the output data volume of the task, i.e. the size of the task's computation result, in kbit; and τi is the maximum tolerable delay for completing the task, in ms. This completes the system model;
2) Constructing the system communication model: the MEC server provides computing services to the mobile devices in the cell; each user generates only one compute-intensive task per time slot and chooses whether to offload it to the MEC. Let A = {a1, a2, ..., aN} denote the set of decision actions of all mobile devices, where ai = 0 means that Mi's task is executed on the local device and ai = 1 means the task is offloaded to the MEC. Assuming there is no intra-cell interference in the cell, by Shannon's formula the rate at which Mi transmits its task to the MEC server is:

ri = B log2(1 + pi gi / N),

where B is the wireless bandwidth allocated to Mi, gi is the channel gain between Mi and the MEC server, pi is Mi's transmit power, and N is the Gaussian noise power, in dbm. This completes the system communication model;
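The uplink rate above can be evaluated numerically. The minimal Python sketch below assumes SI units (Hz, W) and a hypothetical bandwidth allocation B, since the filing does not fix concrete values:

```python
import math

def transmission_rate(bandwidth_hz, p_i, g_i, noise_w):
    """Shannon uplink rate r_i = B * log2(1 + p_i * g_i / N).

    bandwidth_hz: bandwidth B allocated to device M_i (Hz, assumed)
    p_i:          transmit power of M_i (W)
    g_i:          channel gain between M_i and the MEC server
    noise_w:      Gaussian noise power N (W)
    """
    return bandwidth_hz * math.log2(1 + p_i * g_i / noise_w)

# Illustrative numbers: 1 MHz channel, 0.1 W transmit power, gain 1e-6, noise 1e-9 W
r = transmission_rate(1e6, 0.1, 1e-6, 1e-9)  # about 6.66 Mbit/s
```

Note that the rate is linear in the allocated bandwidth, which is why bandwidth allocation enters the joint optimization alongside computing resources.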
3) Constructing the system computation model: the method takes both task-processing delay and energy consumption into account as the system overhead. During task processing, a user sends an offloading request to the edge server, and a computation task is offloaded only when the application's delay tolerance is met and the system overhead can be reduced;
3.1) Constructing the local computation model: when task Ti is executed locally, the CPU frequency of mobile device Mi is fi^l, in GHz, and the delay of executing Ti locally is

Ti^l = Ci / fi^l.

The energy consumption Ei^l is calculated as:

Ei^l = κ Ci (fi^l)^2,

where κ is the CPU power-consumption coefficient, a fixed constant depending on the chip process, here set to κ = 10^-26.
When task Ti is executed locally, the total system overhead comprises the local computation delay and the terminal energy consumption; in this case the overhead of a single device, Zi^l, is:

Zi^l = α Ti^l + β Ei^l,
in the formula, α and β are weight coefficients of time delay and energy consumption, respectively, and satisfy the following conditions:
α+β=1,0≤α≤1,0≤β≤1;
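As a worked illustration of the local model, the sketch below evaluates Ti^l = Ci / fi^l and the dynamic-power energy Ei^l = κ Ci (fi^l)^2 with the κ = 10^-26 given above; the task parameters in the example are hypothetical:

```python
def local_cost(c_cycles, f_local_hz, alpha=0.5, beta=0.5, kappa=1e-26):
    """Weighted local overhead alpha*T + beta*E, with T = C_i / f_i^l
    and E = kappa * C_i * (f_i^l)**2 (dynamic CPU power model)."""
    t_local = c_cycles / f_local_hz               # execution delay, s
    e_local = kappa * c_cycles * f_local_hz ** 2  # energy, J
    return alpha * t_local + beta * e_local

# Hypothetical task: 1e9 CPU cycles on a 1 GHz device CPU
z_local = local_cost(1e9, 1e9)  # 0.5 * (1 s) + 0.5 * (10 J)
```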
3.2) Constructing the MEC computation model: when task Ti is offloaded to the MEC, the delay comprises the transmission delay from mobile device Mi to the MEC and the execution delay at the MEC. The computing capacity the MEC assigns to the user is denoted fi^c, so the delay incurred by offloading consists of the transmission delay Ti^tran and the execution delay Ti^exec, and the total delay of offloading the task to the MEC, denoted Ti^c, is:

Ti^c = Ti^tran + Ti^exec = Di^in / ri + Ci / fi^c.

The energy the mobile device consumes when offloading to the MEC is the energy of transmitting the task over the wireless link, denoted Ei^c:

Ei^c = pi Ti^tran = pi Di^in / ri.

The MEC's own energy consumption is independent of the user and is not counted as system overhead, so the total overhead of offloading the task to the MEC server is:

Zi^c = α Ti^c + β Ei^c,

completing the construction of the system computation model;
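The offloading overhead can be computed the same way. This sketch combines the transmission and execution delays with the device-side transmission energy, under illustrative (not patent-specified) numbers:

```python
def mec_cost(d_in_bits, c_cycles, rate_bps, f_mec_hz, p_tx_w, alpha=0.5, beta=0.5):
    """Weighted offloading overhead alpha*(T_tran + T_exec) + beta*E_tran, with
    T_tran = D_in / r_i, T_exec = C_i / f_i^c, E_tran = p_i * T_tran."""
    t_tran = d_in_bits / rate_bps    # upload delay over the wireless link
    t_exec = c_cycles / f_mec_hz     # execution delay on the MEC server
    e_tran = p_tx_w * t_tran         # device-side transmission energy
    return alpha * (t_tran + t_exec) + beta * e_tran

# Hypothetical task: 1 Mbit input, 1e9 cycles, 1 Mbit/s uplink, 2 GHz MEC share
z_mec = mec_cost(1e6, 1e9, 1e6, 2e9, 0.1)
```

Comparing z_mec against the corresponding local overhead is exactly the trade-off the offloading decision ai resolves.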
4) Constructing the resource allocation model: under the task's delay-tolerance constraint, the edge server allocates appropriate computing resources according to the task attributes. Let F = [f1, f2, ..., fn] denote the computing-resource allocation vector, where fi is the computing resource allocated to Mi. This completes the resource allocation model;
5) Constructing the task value model and cache model: this embodiment jointly considers three factors (task popularity, the computation the task requires, and the task data volume) to determine the caching strategy. The MEC's cache vector over the task contents requested by users is expressed as H = {h1, h2, ..., hn}, where hi is a binary variable indicating whether the MEC has cached user Mi's task and related data content: hi = 0 means the MEC has not cached the content, hi = 1 means it has. If the MEC has cached the content, no task offloading is needed when the task is requested; after completing the task the MEC returns the result directly to the mobile device. Within the limits of cost and storage space, tasks with higher cache value are stored on the MEC and tasks with low cache value are replaced. Following the Zipf distribution of file popularity, the popularity of a task is computed as:

θi = (1 / i^Z) / Σ_{j=1..N} (1 / j^Z),

where θi is the popularity of the i-th task and Z is the Zipf constant. The task set cached by the MEC is denoted Hc, with maximum storage capacity C, initialized empty. Because the MEC server's memory capacity is limited, it caches only the tasks with higher cache value and then outputs the task caching strategy H*. The cache value is defined as:

Vi = w1 θi + w2 Di^in + w3 Ci,

where w1, w2 and w3 are the weight coefficients of the task's popularity, input data size, and required computation, respectively, and satisfy:

w1 + w2 + w3 = 1, 0 ≤ w1 ≤ 1, 0 ≤ w2 ≤ 1, 0 ≤ w3 ≤ 1,

completing the construction of the system task value model and cache model;
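A minimal sketch of the Zipf popularity and the weighted cache value follows. The Zipf exponent Z = 0.8 and the weight tuple are illustrative assumptions, and in practice the three inputs would be normalized to comparable scales before weighting:

```python
def zipf_popularity(rank, n_tasks, z=0.8):
    """theta_i = (1 / i**Z) / sum_j (1 / j**Z): Zipf popularity of the
    task with popularity rank `rank` among n_tasks tasks."""
    denom = sum(1.0 / j ** z for j in range(1, n_tasks + 1))
    return (1.0 / rank ** z) / denom

def cache_value(theta, d_in, c_cycles, w=(0.5, 0.25, 0.25)):
    """V_i = w1*theta_i + w2*D_i^in + w3*C_i; the weights sum to 1.
    Inputs are assumed pre-normalized to [0, 1]."""
    w1, w2, w3 = w
    return w1 * theta + w2 * d_in + w3 * c_cycles

# Popularities of 5 tasks: sum to 1 and decrease with rank
pops = [zipf_popularity(i, 5) for i in range(1, 6)]
```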
6) Constructing the system overhead model: task completion delay and mobile-terminal energy consumption are both key indicators of the quality of a computation offloading strategy. This embodiment jointly optimizes the computation offloading, resource allocation, and task caching strategies in a single-MEC scenario, minimizing the system overhead under the constraints of MEC computing capacity, storage resources, and task delay tolerance. The overhead-minimization problem is modeled as:

Z = Σ_{i=1..N} [(1 − ai) Zi^l + ai (1 − hi) Zi^c].

The optimization objective of this embodiment targets the system benefit: obtain the optimal offloading strategy A*, computing-resource allocation F*, and task caching strategy H* that minimize the system overhead. The optimization problem is therefore expressed as:

min_{A,F,H} Z
A = {a1, a2, ..., aN}
F = {f1, f2, ..., fN}
H = {h1, h2, ..., hN},

subject to the constraints:

C1: ai ∈ {0, 1}, ∀i
C2: fi^c > 0, ∀i
C3: Σ_{i=1..N} ai fi^c ≤ F_MEC
C4: Σ_{i=1..N} hi Di^in ≤ C
C5: τi ≤ τ,

where A is the offloading strategy set, F the computing-resource allocation strategy set, and H the task caching strategy set. Constraint C1 states that a computation task can only be executed locally or on the MEC server; C2 and C3 state that the computing resources allocated by the MEC server cannot exceed its computing capacity F_MEC; C4 states that the total cached volume cannot exceed the MEC server's storage space; and C5 is the delay constraint: a task's completion time cannot exceed its maximum tolerable delay. This completes the system overhead model;
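Constraints C1 through C5 can be checked mechanically for a candidate solution. The helper below is a sketch; the argument names (f_mec, storage_cap, tau_max) are illustrative, not taken from the filing:

```python
def feasible(a, f, h, d, t_done, f_mec, storage_cap, tau_max):
    """Check constraints C1-C5 for offload decisions a, resource shares f,
    cache flags h, cached data sizes d, and task completion times t_done."""
    if any(ai not in (0, 1) for ai in a):                   # C1: decision is binary
        return False
    if any(ai == 1 and fi <= 0 for ai, fi in zip(a, f)):    # C2: offloaded tasks need f_i^c > 0
        return False
    if sum(fi for ai, fi in zip(a, f) if ai == 1) > f_mec:  # C3: MEC computing capacity
        return False
    if sum(hi * di for hi, di in zip(h, d)) > storage_cap:  # C4: MEC storage capacity
        return False
    if any(ti > tau_max for ti in t_done):                  # C5: delay tolerance
        return False
    return True
```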
7) Solving the problem: the solution uses a Q-Learning algorithm and the analytic hierarchy process:
the task-cache-based computation migration studied in this embodiment is a typical multi-objective combinatorial optimization problem and is NP-hard. Many heuristic algorithms can currently solve such problems, for example game theory, genetic algorithms (GA), and particle swarm optimization (PSO). Although these heuristics perform well on their respective objectives, such as reducing the user's delay cost or energy cost, they respond to tasks and make decisions periodically, whereas in a real scenario tasks arrive randomly and require real-time responses. In addition, conventional algorithms need many iterations to reach an optimal solution, resulting in a high running-time cost;
to address this, the embodiment uses reinforcement learning to solve the task offloading and resource allocation problems. Reinforcement learning obtains a near-optimal solution by continual trial and error; it has four basic elements (state, action, reward, and agent) and aims to maximize long-term return. It divides into model-based and model-free algorithms, and since a real network scenario is too complex to model, this embodiment uses the model-free Q-Learning algorithm;
the analytic hierarchy process is a combined qualitative and quantitative decision model. It divides the decision-related elements into a goal layer, a criterion layer, and an alternative layer, quantitatively analyzes the pairwise importance of the elements within a layer, and then computes the weight coefficient of each element. It is well suited to weight assignment in task-scheduling scenarios;
the task value is determined from several angles, and tasks with higher processing value receive higher priority. The value is considered mainly from three aspects: the task's maximum tolerable delay, the computing resources the task requires, and the task's data volume. To maximize the task completion rate, the task's tolerable delay carries the largest weight; the required computing capacity and the data volume also affect the completion rate, but their influence on the overall value is small relative to the tolerable delay;
therefore the maximum tolerable delay has the highest importance and the task data size the lowest. The scheme first builds the hierarchical model, then compares the criterion-layer factors pairwise and constructs the criterion layer's judgment matrix from the objective comparison results, computes the eigenvector, principal eigenvalue, and weights from the judgment matrix, and finally checks the validity of the judgment matrix through a consistency test;
Q-Learning algorithm:
System state: the system state s consists of the offloading decision vector A, the computing-resource allocation vector F, and the remaining-computing-resource vector G, i.e.:
S = {A, F, G},
System action: the agent decides which tasks are offloaded and which are not, and how much computing resource is allocated to each task, so a system action is expressed as:
a = {ai, fi},
where ai is task Ti's offloading decision and fi is the computing resource allocated to task Ti;
System reward: in time slot t, after the agent executes a possible action in a given state it obtains a reward R(s, a). The reward function should be tied to the objective function; since the optimization problem here is to minimize the system overhead, it is defined as:

R(s, a) = c_local − c(s, a),

where c_local is the total system overhead of executing all tasks locally at time t, and c(s, a) is the total system overhead of the JORC method in the current state;
In the Q-Learning algorithm, the agent observes the current environment state st at time t, selects action at according to the Q table, executes at, then enters state st+1 and obtains the reward r. The Q table is updated by the following formula, iterating continually until the Q values converge, yielding the optimal strategy π*:

Q(st, at) ← Q(st, at) + δ [r + γ max_a Q(st+1, a) − Q(st, at)],

where δ is the learning rate and 0 < γ < 1 is the discount factor;
the Q-Learning algorithm is as follows:
Input: number of training rounds T, learning rate δ, discount factor γ, greedy coefficient ε, task set M1, and the MEC's remaining computing resources;
Output: offloading strategy A*, computing-resource allocation F*
1. Initialize the Q matrix
2. For each training round:
3. In the initial state s, select an action a according to the ε-greedy strategy
4. Repeat:
5. Execute a in s, obtain the reward r, select the next action a by the ε-greedy strategy, and enter the next state st+1
6. Q(s, a) ← Q(s, a) + δ [r + γ max_a' Q(st+1, a') − Q(s, a)]
7. s = st+1
8. Until s is a terminal state, exit the inner loop
9. Until the Q values converge, exit the outer loop;
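The listing above is standard tabular ε-greedy Q-Learning. A toy, self-contained Python version on a one-state problem (reward 1 for offloading, 0 for local execution; all values hypothetical) shows the update rule converging toward the better action:

```python
import random

def q_update(Q, s, a, r, s_next, actions, delta=0.1, gamma=0.9):
    """One Q-Learning step:
    Q(s,a) <- Q(s,a) + delta * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += delta * (r + gamma * best_next - Q[(s, a)])

# Toy one-state problem: "offload" always earns reward 1, "local" earns 0.
actions = ["local", "offload"]
Q = {("s0", a): 0.0 for a in actions}
random.seed(0)
for _ in range(500):
    # epsilon-greedy action selection (epsilon = 0.2)
    if random.random() < 0.2:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: Q[("s0", x)])
    r = 1.0 if a == "offload" else 0.0
    q_update(Q, "s0", a, r, "s0", actions)

best_action = max(actions, key=lambda x: Q[("s0", x)])  # -> "offload"
```

The offload action's Q value approaches r / (1 − γ) = 10, the discounted return of always offloading.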
analytic hierarchy process:
1. Build the hierarchical model
The hierarchical model has 3 layers: the first layer is the task priority P; the second layer comprises the task's maximum tolerable delay τ, the task computation amount C, and the task data volume D; and the third layer is the task T, as shown in FIG. 2;
2. Construct the judgment matrix: within the hierarchy, aij denotes the relative importance of the i-th element compared with the j-th element with respect to a factor of the layer above. The criterion layer has 3 elements in total, so the judgment matrix A = (aij)3×3 is constructed, where

aij > 0, aji = 1 / aij, aii = 1.

The three elements τ, C and D are all governed by P, and the entries of the judgment matrix are set from the pairwise importance comparisons above, with τ the most important and D the least important.
From the judgment matrix, solve for the matrix's maximum eigenvalue λmax and the corresponding weight vector W = (w1, w2, w3)^T;
3. Consistency check: compute the inconsistency index CI of judgment matrix A and, from the RI value, the random consistency ratio CR:

CI = (λmax − n) / (n − 1), CR = CI / RI.

When CR < 0.1, the judgment matrix has satisfactory consistency; otherwise revise A until satisfactory consistency is reached. This finally yields the weight vector W of the computation task, and the task's cache value is expressed as:

Vi = w1 θi + w2 Di^in + w3 Ci,

where w1, w2 and w3 are the weight coefficients of the task's popularity, input data size, and required computation, respectively, and satisfy:

w1 + w2 + w3 = 1, 0 ≤ w1 ≤ 1, 0 ≤ w2 ≤ 1, 0 ≤ w3 ≤ 1,
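The AHP steps (principal eigenvector, CI, CR) can be sketched with power iteration. The 3×3 judgment matrix below is a hypothetical example consistent with τ > C > D in importance, not the matrix from the filing; RI = 0.58 is the standard random consistency index for n = 3:

```python
def ahp_weights(A, iters=200):
    """Power iteration for the principal eigenvector (weights) of a pairwise
    comparison matrix, plus consistency ratio CR = CI / RI with
    CI = (lambda_max - n) / (n - 1). RI = 0.58 is the standard value for n = 3."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(v)
        w = [x / total for x in v]        # renormalize each iteration
    aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam_max = sum(aw[i] / w[i] for i in range(n)) / n  # Rayleigh-style estimate
    ci = (lam_max - n) / (n - 1)
    return w, ci / 0.58

# Hypothetical judgment matrix: tau moderately dominates C, C dominates D
A = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 3.0],
     [1.0 / 5.0, 1.0 / 3.0, 1.0]]
w, cr = ahp_weights(A)  # w ordered as (tau, C, D); cr below the 0.1 threshold
```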
the method includes the steps that time delay and energy consumption cost of a system are minimized through a joint optimization calculation unloading, resource allocation and task caching method, an MEC server detects whether tasks are cached, if the MEC caches task calculation results, the MEC directly returns, whether an MEC cache task set is replaced is determined through a JORC method, if the calculation task results are not cached, an unloading strategy and a resource allocation strategy are determined through a Q-learning algorithm, and system cost is minimized, and the specific JORC method is described as follows, wherein the JORC method calculates unloading, resource allocation and task caching through joint optimization:
Input: user request set N1, server information G, cache state H,
Output: H*, A*, F*
1. For i = 1 : N1:
2. Mobile device i generates task Ti and issues an offloading request
3. If Ti is in the cache set
4. Return the cached computation result
5. Else
6. Add the task to task set M1
7. End if
8. Feed M1 to the Q-Learning algorithm to obtain A*, F*
9. If some task in H has a cache value less than Vi, the cache value of Ti
10. Replace that entry with Ti and its result
11. End if
12. Output the task caching strategy H*
13. End for
This completes the solution of the problem.
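The cache check and value-based replacement in steps 3 through 11 can be sketched as follows; the Q-Learning offloading decision is abstracted away, and all names are illustrative:

```python
def serve_request(task_id, task_value, cache, cache_cap):
    """Cache check plus value-based replacement (sketch of steps 3-11).
    `cache` maps task_id -> cache value; offloading/Q-Learning is abstracted."""
    if task_id in cache:
        return "cached result"            # MEC returns the stored result directly
    # ...task is executed (locally or offloaded); then consider caching its result:
    if len(cache) < cache_cap:
        cache[task_id] = task_value
    else:
        victim = min(cache, key=cache.get)    # lowest-value cached task
        if cache[victim] < task_value:        # replace only if the new task is worth more
            del cache[victim]
            cache[task_id] = task_value
    return "computed result"

cache = {}
serve_request("t1", 5.0, cache, cache_cap=1)  # first request: computed and cached
```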

Claims (1)

1. A task cache-based computation offloading method in edge computing, characterized by comprising, in a single-cell scenario, the following steps:
1) Constructing the system model: an MEC server is deployed at the base station and connected to a remote central cloud by optical fiber. Several mobile devices are assumed to be randomly distributed in the cell, and each user generates only one task per service period (time slot). Mobile users are indexed m ∈ {1, 2, ..., M}, and the task Ti (i ∈ {1, 2, ..., N}) generated by a mobile device is represented as a quadruple

Ti = (Di^in, Ci, Di^out, τi),

where Di^in is the input data volume of the task, in kbit; Ci is the number of CPU cycles required to complete the task, in cycles; Di^out is the output data volume of the task, i.e. the size of the task's computation result, in kbit; and τi is the maximum tolerable delay for completing the task, in ms. This completes the system model;
2) Constructing the system communication model: the MEC server provides computing services to the mobile devices in the cell; each user generates only one compute-intensive task per time slot and chooses whether to offload it to the MEC. Let A = {a1, a2, ..., aN} denote the set of decision actions of all mobile devices, where ai = 0 means that Mi's task is executed on the local device and ai = 1 means the task is offloaded to the MEC. Assuming there is no intra-cell interference in the cell, by Shannon's formula the rate at which Mi transmits its task to the MEC server is:

ri = B log2(1 + pi gi / N),

where B is the wireless bandwidth allocated to Mi, gi is the channel gain between Mi and the MEC server, pi is Mi's transmit power, and N is the Gaussian noise power, in dbm. This completes the system communication model;
3) Constructing the system computation model: during task processing, a user sends an offloading request to the edge server, and a computation task is offloaded only when the application's delay tolerance is met and the system overhead can be reduced;
3.1) Constructing the local computation model: when task Ti is executed locally, the CPU frequency of mobile device Mi is fi^l, in GHz, and the delay of executing Ti locally is Ti^l:

Ti^l = Ci / fi^l.

The energy consumption Ei^l is calculated as:

Ei^l = κ Ci (fi^l)^2,

where κ is the CPU power-consumption coefficient, a fixed constant depending on the chip process, here set to κ = 10^-26.
When task Ti is executed locally, the total system overhead comprises the local computation delay and the terminal energy consumption; in this case the overhead of a single device, Zi^l, is:

Zi^l = α Ti^l + β Ei^l,
in the formula, α and β are weight coefficients of time delay and energy consumption, respectively, and satisfy the following conditions:
α+β=1,0≤α≤1,0≤β≤1;
3.2) Constructing the MEC computation model: when task Ti is offloaded to the MEC, the delay comprises the transmission delay from mobile device Mi to the MEC and the execution delay at the MEC. The computing capacity the MEC assigns to the user is denoted fi^c, so the delay incurred by offloading consists of the transmission delay Ti^tran and the execution delay Ti^exec, and the total delay of offloading the task to the MEC, denoted Ti^c, is:

Ti^c = Ti^tran + Ti^exec = Di^in / ri + Ci / fi^c.

The energy the mobile device consumes when offloading to the MEC is the energy of transmitting the task over the wireless link, denoted Ei^c:

Ei^c = pi Ti^tran = pi Di^in / ri.

The MEC's own energy consumption is independent of the user and is not counted as system overhead, so the total overhead of offloading the task to the MEC server is:

Zi^c = α Ti^c + β Ei^c,

completing the construction of the system computation model;
completing the construction of a system computing model;
4) Constructing the resource allocation model: under the task's delay-tolerance constraint, the edge server allocates appropriate computing resources according to the task attributes. Let F = [f1, f2, ..., fn] denote the computing-resource allocation vector, where fi is the computing resource allocated to Mi. This completes the resource allocation model;
5) Constructing the task value model and cache model: the MEC's cache vector over the task contents requested by users is expressed as H = {h1, h2, ..., hn}, where hi is a binary variable indicating whether the MEC has cached user Mi's task and related data content: hi = 0 means the MEC has not cached the content, hi = 1 means it has. If the MEC has cached the content, no task offloading is needed when the task is requested; after completing the task the MEC returns the result directly to the mobile device. Within the limits of cost and storage space, tasks with higher cache value are stored on the MEC and tasks with low cache value are replaced. Following the Zipf distribution of file popularity, the popularity of a task is computed as:

θi = (1 / i^Z) / Σ_{j=1..N} (1 / j^Z),

where θi is the popularity of the i-th task and Z is the Zipf constant. The task set cached by the MEC is denoted Hc, with maximum storage capacity C, initialized empty. Because the MEC server's memory capacity is limited, it caches only the tasks with higher cache value and then outputs the task caching strategy H*. The cache value is defined as:

Vi = w1 θi + w2 Di^in + w3 Ci,

where w1, w2 and w3 are the weight coefficients of the task's popularity, input data size, and required computation, respectively, and satisfy:

w1 + w2 + w3 = 1, 0 ≤ w1 ≤ 1, 0 ≤ w2 ≤ 1, 0 ≤ w3 ≤ 1,

completing the construction of the system task value model and cache model;
completing construction of a system task value model and a cache model;
6) Constructing the system overhead model: task completion delay and mobile-terminal energy consumption are key indicators of the quality of a computation offloading strategy. The system overhead is minimized under the constraints of MEC computing capacity, storage resources, and task delay tolerance, and the overhead-minimization problem is modeled as:

Z = Σ_{i=1..N} [(1 − ai) Zi^l + ai (1 − hi) Zi^c].

The optimization objective targets the system benefit: obtain the optimal offloading strategy A*, computing-resource allocation F*, and task caching strategy H* that minimize the system overhead. The optimization problem is therefore expressed as:

min_{A,F,H} Z
A = {a1, a2, ..., aN}
F = {f1, f2, ..., fN}
H = {h1, h2, ..., hN},

subject to the constraints:

C1: ai ∈ {0, 1}, ∀i
C2: fi^c > 0, ∀i
C3: Σ_{i=1..N} ai fi^c ≤ F_MEC
C4: Σ_{i=1..N} hi Di^in ≤ C
C5: τi ≤ τ,

where A is the offloading strategy set, F the computing-resource allocation strategy set, and H the task caching strategy set. Constraint C1 states that a computation task can only be executed locally or on the MEC server; C2 and C3 state that the computing resources allocated by the MEC server cannot exceed its computing capacity F_MEC; C4 states that the total cached volume cannot exceed the MEC server's storage space; and C5 is the delay constraint: a task's completion time cannot exceed its maximum tolerable delay. This completes the system overhead model;
7) solving the problem: solving by adopting a Q-Learning algorithm and an analytic hierarchy process:
Q-Learning algorithm:
System state: the system state s consists of the offloading strategy set A, the computing-resource allocation strategy set F, and the remaining-computing-resource vector G, i.e.:
S = {A, F, G},
System action: the agent decides which tasks are offloaded and which are not, and how much computing resource is allocated to each task, so a system action is expressed as:
a = {ai, fi},
where ai is task Ti's offloading decision and fi is the computing resource allocated to task Ti;
System reward: in time slot t, after the agent executes a possible action in a given state it obtains a reward R(s, a). The reward function should be tied to the objective function; since the optimization problem here is to minimize the system overhead, it is defined as:

R(s, a) = c_local − c(s, a),

where c_local is the total system overhead of executing all tasks locally at time t, and c(s, a) is the total system overhead, in the current state, of the joint optimization of computation offloading, resource allocation and task caching (Joint Offloading, Resource Allocation and Caching, JORC for short);
In the Q-Learning algorithm, the agent observes the current environment state st at time t, selects action at according to the Q table, executes at, then enters state st+1 and obtains the reward r. The Q table is updated by the following formula, iterating continually until the Q values converge, yielding the optimal strategy π*:

Q(st, at) ← Q(st, at) + δ [r + γ max_a Q(st+1, a) − Q(st, at)],

where δ is the learning rate and 0 < γ < 1 is the discount factor;
the Q-Learning algorithm is as follows:
Input: number of training rounds T, learning rate δ, discount factor γ, greedy coefficient ε, task set M1, and the MEC's remaining computing resources;
Output: offloading strategy A*, computing-resource allocation F*
1. Initialize the Q matrix
2. For each training round:
3. In the initial state s, select an action a according to the ε-greedy strategy
4. Repeat:
5. Execute a in s, obtain the reward r, select the next action a by the ε-greedy strategy, and enter the next state st+1
6. Q(s, a) ← Q(s, a) + δ [r + γ max_a' Q(st+1, a') − Q(s, a)]
7. s = st+1
8. Until s is a terminal state, exit the inner loop
9. Until the Q values converge, exit the outer loop;
analytic hierarchy process:
1. building a hierarchical model
The hierarchical model has 3 layers: the first layer is the task priority P; the second layer comprises the task's maximum tolerable delay τ, the task computation amount C, and the task data volume D; and the third layer is the task T;
2. Construct the judgment matrix: within the hierarchy, aij denotes the relative importance of the i-th element compared with the j-th element with respect to a factor of the layer above. The criterion layer has 3 elements in total, so the judgment matrix A = (aij)3×3 is constructed, where

aij > 0, aji = 1 / aij, aii = 1.

The three elements τ, C and D are all governed by P, and the entries of the judgment matrix are set from the pairwise importance comparisons, with τ the most important and D the least important. From the judgment matrix, solve for the matrix's maximum eigenvalue λmax and the corresponding weight vector W = (w1, w2, w3)^T;
3. Consistency check: compute the inconsistency index CI of judgment matrix A and, from the RI value, the random consistency ratio CR:

CI = (λmax − n) / (n − 1), CR = CI / RI.

When CR < 0.1, the judgment matrix has satisfactory consistency; otherwise revise A until satisfactory consistency is reached. This finally yields the weight vector W of the computation task, and the task's cache value is expressed as:

Vi = w1 θi + w2 Di^in + w3 Ci,

where w1, w2 and w3 are the weight coefficients of the task's popularity, input data size, and required computation, respectively, and satisfy:

w1 + w2 + w3 = 1, 0 ≤ w1 ≤ 1, 0 ≤ w2 ≤ 1, 0 ≤ w3 ≤ 1,
The joint optimization of computation offloading, resource allocation, and task caching minimizes the system's delay and energy cost. The MEC server checks whether a task is cached; if the MEC has cached the task's computation result, it returns it directly, and the JORC method decides whether to replace entries in the MEC's cached task set. If the result is not cached, the Q-Learning algorithm determines the offloading strategy and resource allocation strategy to minimize the system overhead. The JORC method is described as follows:
Input: user request set N1, server information G, cache state H,
Output: H*, A*, F*
1. For i = 1 : N1:
2. Mobile device i generates task Ti and issues an offloading request
3. If Ti is in the cache set
4. Return the cached computation result
5. Else
6. Add the task to task set M1
7. End if
8. Feed M1 to the Q-Learning algorithm to obtain A*, F*
9. If some task in H has a cache value less than Vi, the cache value of Ti
10. Replace that entry with Ti and its result
11. End if
12. Output the task caching strategy H*
13. End for;
This completes the solution of the problem.
CN202110275573.5A 2021-03-15 2021-03-15 Task cache-based computation unloading method in edge computation Active CN112860350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110275573.5A CN112860350B (en) 2021-03-15 2021-03-15 Task cache-based computation unloading method in edge computation


Publications (2)

Publication Number Publication Date
CN112860350A CN112860350A (en) 2021-05-28
CN112860350B true CN112860350B (en) 2022-06-03

Family

ID=75994467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110275573.5A Active CN112860350B (en) 2021-03-15 2021-03-15 Task cache-based computation unloading method in edge computation

Country Status (1)

Country Link
CN (1) CN112860350B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112995343B (en) * 2021-04-22 2021-09-21 华南理工大学 Edge node calculation unloading method with performance and demand matching capability
CN113434212B (en) * 2021-06-24 2023-03-21 北京邮电大学 Cache auxiliary task cooperative unloading and resource allocation method based on meta reinforcement learning
CN113452625B (en) * 2021-06-28 2022-04-15 重庆大学 Deep reinforcement learning-based unloading scheduling and resource allocation method
CN113515378A (en) * 2021-06-28 2021-10-19 国网河北省电力有限公司雄安新区供电公司 Method and device for migration and calculation resource allocation of 5G edge calculation task
CN113504986A (en) * 2021-06-30 2021-10-15 广州大学 Cache-based edge computing system unloading method, device, equipment and medium
CN113377547B (en) * 2021-08-12 2021-11-23 南京邮电大学 Intelligent unloading and safety guarantee method for computing tasks in 5G edge computing environment
CN113726862B (en) * 2021-08-20 2023-07-14 北京信息科技大学 Computing unloading method and device under multi-edge server network
CN113950103B (en) * 2021-09-10 2022-11-04 西安电子科技大学 Multi-server complete computing unloading method and system under mobile edge environment
CN113852692B (en) * 2021-09-24 2024-01-30 中国移动通信集团陕西有限公司 Service determination method, device, equipment and computer storage medium
CN113965961B (en) * 2021-10-27 2024-04-09 中国科学院计算技术研究所 Edge computing task unloading method and system in Internet of vehicles environment
CN114461299B (en) * 2022-01-26 2023-06-06 中国联合网络通信集团有限公司 Unloading decision determining method and device, electronic equipment and storage medium
CN114745396B (en) * 2022-04-12 2024-03-08 广东技术师范大学 Multi-agent-based end edge cloud 3C resource joint optimization method
CN115022188B (en) * 2022-05-27 2024-01-09 国网经济技术研究院有限公司 Container placement method and system in electric power edge cloud computing network
CN114860345B (en) * 2022-05-31 2023-09-08 南京邮电大学 Calculation unloading method based on cache assistance in smart home scene
CN115174566B (en) * 2022-06-08 2024-03-15 之江实验室 Edge computing task unloading method based on deep reinforcement learning
CN115051998B (en) * 2022-06-09 2023-06-20 电子科技大学 Adaptive edge computing offloading method, apparatus and computer-readable storage medium
CN115297013B (en) * 2022-08-04 2023-11-28 重庆大学 Task unloading and service cache joint optimization method based on edge collaboration
CN115484314B (en) * 2022-08-10 2024-04-02 重庆大学 Edge cache optimization method for recommending enabling under mobile edge computing network
CN115766241A (en) * 2022-11-21 2023-03-07 西安工程大学 Distributed intrusion detection system task scheduling and unloading method based on DQN algorithm
CN116320354B (en) * 2023-01-16 2023-09-29 浙江大学 360-degree virtual reality video user access control system and control method
CN117042051B (en) * 2023-08-29 2024-03-08 燕山大学 Task unloading strategy generation method, system, equipment and medium in Internet of vehicles
CN116865842B (en) * 2023-09-05 2023-11-28 武汉能钠智能装备技术股份有限公司 Resource allocation system and method for communication multiple access edge computing server

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109302709A (en) * 2018-09-14 2019-02-01 重庆邮电大学 The unloading of car networking task and resource allocation policy towards mobile edge calculations
CN109814951A (en) * 2019-01-22 2019-05-28 南京邮电大学 The combined optimization method of task unloading and resource allocation in mobile edge calculations network
CN110351754A (en) * 2019-07-15 2019-10-18 北京工业大学 Industry internet machinery equipment user data based on Q-learning calculates unloading decision-making technique
EP3605329A1 (en) * 2018-07-31 2020-02-05 Commissariat à l'énergie atomique et aux énergies alternatives Connected cache empowered edge cloud computing offloading
CN111124666A (en) * 2019-11-25 2020-05-08 哈尔滨工业大学 Efficient and safe multi-user multi-task unloading method in mobile Internet of things
CN111405568A (en) * 2020-03-19 2020-07-10 三峡大学 Computing unloading and resource allocation method and device based on Q learning
CN111818168A (en) * 2020-06-19 2020-10-23 重庆邮电大学 Self-adaptive joint calculation unloading and resource allocation method in Internet of vehicles

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109684075B (en) * 2018-11-28 2023-04-07 深圳供电局有限公司 Method for unloading computing tasks based on edge computing and cloud computing cooperation
CN111031102B (en) * 2019-11-25 2022-04-12 哈尔滨工业大学 Multi-user, multi-task mobile edge computing system cacheable task migration method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3605329A1 (en) * 2018-07-31 2020-02-05 Commissariat à l'énergie atomique et aux énergies alternatives Connected cache empowered edge cloud computing offloading
CN109302709A (en) * 2018-09-14 2019-02-01 重庆邮电大学 The unloading of car networking task and resource allocation policy towards mobile edge calculations
CN109814951A (en) * 2019-01-22 2019-05-28 南京邮电大学 The combined optimization method of task unloading and resource allocation in mobile edge calculations network
CN110351754A (en) * 2019-07-15 2019-10-18 北京工业大学 Industry internet machinery equipment user data based on Q-learning calculates unloading decision-making technique
CN111124666A (en) * 2019-11-25 2020-05-08 哈尔滨工业大学 Efficient and safe multi-user multi-task unloading method in mobile Internet of things
CN111405568A (en) * 2020-03-19 2020-07-10 三峡大学 Computing unloading and resource allocation method and device based on Q learning
CN111818168A (en) * 2020-06-19 2020-10-23 重庆邮电大学 Self-adaptive joint calculation unloading and resource allocation method in Internet of vehicles

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Joint task offloading and data caching in mobile edge computing networks; Ni Zhang et al.; Computer Networks; 2020-12-09; Vol. 182; full text *

Also Published As

Publication number Publication date
CN112860350A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN112860350B (en) Task cache-based computation unloading method in edge computation
Nath et al. Deep reinforcement learning for dynamic computation offloading and resource allocation in cache-assisted mobile edge computing systems
CN111757354B (en) Multi-user slicing resource allocation method based on competitive game
CN113055489B (en) Implementation method of satellite-ground converged network resource allocation strategy based on Q learning
CN111556461A (en) Vehicle-mounted edge network task distribution and unloading method based on deep Q network
CN111262944B (en) Method and system for hierarchical task offloading in heterogeneous mobile edge computing network
CN112286677A (en) Resource-constrained edge cloud-oriented Internet of things application optimization deployment method
CN111813539A (en) Edge computing resource allocation method based on priority and cooperation
CN111488528A (en) Content cache management method and device and electronic equipment
CN116260871A (en) Independent task unloading method based on local and edge collaborative caching
CN113590279A (en) Task scheduling and resource allocation method for multi-core edge computing server
CN113573363A (en) MEC calculation unloading and resource allocation method based on deep reinforcement learning
CN114641076A (en) Edge computing unloading method based on dynamic user satisfaction in ultra-dense network
Li et al. Computation offloading and service allocation in mobile edge computing
Zhang et al. A deep reinforcement learning approach for online computation offloading in mobile edge computing
CN115396953A (en) Calculation unloading method based on improved particle swarm optimization algorithm in mobile edge calculation
CN116209084A (en) Task unloading and resource allocation method in energy collection MEC system
CN116321307A (en) Bidirectional cache placement method based on deep reinforcement learning in non-cellular network
Li et al. DQN-enabled content caching and quantum ant colony-based computation offloading in MEC
CN116828534B (en) Intensive network large-scale terminal access and resource allocation method based on reinforcement learning
CN112905315A (en) Task processing method, device and equipment in Mobile Edge Computing (MEC) network
Shaodong et al. Multi-step reinforcement learning-based offloading for vehicle edge computing
CN115499441A (en) Deep reinforcement learning-based edge computing task unloading method in ultra-dense network
Zhou et al. Recommendation-Driven Multi-Cell Cooperative Caching: A Multi-Agent Reinforcement Learning Approach
CN115580900A (en) Unmanned aerial vehicle assisted cooperative task unloading method based on deep reinforcement learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231212

Address after: 430070 Hubei Province, Wuhan city Hongshan District Luoyu Road No. 546

Patentee after: HUBEI CENTRAL CHINA TECHNOLOGY DEVELOPMENT OF ELECTRIC POWER Co.,Ltd.

Address before: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee before: Dragon totem Technology (Hefei) Co.,Ltd.

Effective date of registration: 20231212

Address after: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Dragon totem Technology (Hefei) Co.,Ltd.

Address before: 541004 No. 15 Yucai Road, Qixing District, Guilin, the Guangxi Zhuang Autonomous Region

Patentee before: Guangxi Normal University