CN112860350A - Task cache-based computation unloading method in edge computation - Google Patents


Info

Publication number
CN112860350A
CN112860350A (application CN202110275573.5A)
Authority
CN
China
Prior art keywords
task
mec
model
calculation
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110275573.5A
Other languages
Chinese (zh)
Other versions
CN112860350B (en)
Inventor
卞圣强
覃少华
谢志斌
王海燕
张家豪
崔硕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dragon Totem Technology Hefei Co ltd
Hubei Central China Technology Development Of Electric Power Co ltd
Original Assignee
Guangxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi Normal University filed Critical Guangxi Normal University
Priority to CN202110275573.5A priority Critical patent/CN112860350B/en
Publication of CN112860350A publication Critical patent/CN112860350A/en
Application granted granted Critical
Publication of CN112860350B publication Critical patent/CN112860350B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/445 Program loading or initiating
    • G06F 9/44594 Unloading
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/5021 Priority
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/509 Offload
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Complex Calculations (AREA)

Abstract

The invention discloses a task cache-based computation offloading method in edge computing, which comprises the following steps: 1) constructing a system model; 2) constructing a system communication model; 3) constructing a system computation model; 4) constructing a resource allocation model; 5) constructing a task value model and a cache model; 6) constructing a system overhead model; 7) solving the problem. The method can effectively reduce the running time of the algorithm and improve the response speed of the server; its task cache mechanism reduces the system energy consumption of repeated computation, lowers the computation cost of the server, and reduces the weighted sum of the completion delay and energy consumption of all tasks.

Description

Task cache-based computation unloading method in edge computation
Technical Field
The invention relates to the field of mobile edge computing systems, and in particular to a task cache-based computation offloading method in edge computing.
Background
With the rapid development of wireless communication, Internet of Things and 5G technology, mobile devices have become widespread and data traffic has grown rapidly. Emerging applications such as online gaming, artificial intelligence and virtual reality have stringent latency requirements and demand substantial computing resources. However, mobile terminals have limited computing power and battery capacity, so running these applications locally introduces high computation delay and increases terminal energy consumption. Meanwhile, the long transmission distances of the cloud-computing-center service model cause transmission delays that often cannot meet the requirements of real-time applications. The Mobile Edge Computing (MEC) paradigm arose to address this: by sinking services to the edge of the network, it reduces network transmission delay and meets the demands of low-delay services.
Edge computing not only alleviates the shortage of computing and storage resources on mobile devices, but also mitigates the excessive transmission delay and server load of the traditional cloud-computing-center model. However, as the number of network users grows, many users issue identical access requests driven by shared interests, so large amounts of backbone-network content are transmitted repeatedly within a service period. Caching hot content at the network edge can relieve backbone pressure, reduce transmission delay and improve user experience. Tan et al. [Z. Tan, F. R. Yu, X. Li, H. Ji, and V. C. M. Leung, "Virtual resource allocation for heterogeneous services in full duplex-enabled SCNs with mobile edge computing and caching," IEEE Transactions on Vehicular Technology, vol. 67, no. 2, pp. 1794-1808, 2017] propose an MEC offloading and caching framework in full-duplex Small Cell Networks (SCNs) that improves system revenue by combining computation offloading with content caching. Chien et al. [W.-C. Chien, H.-Y. Weng, and C.-F. Lai, "Q-learning based collaborative cache allocation in mobile edge computing," Future Generation Computer Systems, vol. 102, pp. 603-610, 2020], in order to improve the content cache hit rate, propose an SDN-based MEC architecture that first models the cache, maximizes the hit rate under the limits of MEC storage resources, and solves the caching strategy with Q-learning. In both works, however, content caching and task offloading are treated independently of each other.
At present, little research actively caches task results. If tasks of high popularity are proactively cached at the edge server, then when such a task is requested again the computation result can be transmitted to the user directly from the MEC, greatly reducing the task's computation delay and the user's energy consumption. Many real scenarios allow computation results to be reused: some game-rendering scenes are reused across players; in AR scenarios certain AR services are requested repeatedly; in video tasks some hot videos are decoded repeatedly. Caching the results of hot computation tasks can therefore relieve the computation burden of the edge server. Zhao et al. [H. Zhao, Y. Wang, and R. Sun, "Task proactive caching based computation offloading and resource allocation in mobile-edge computing systems," in 2018 14th International Wireless Communications & Mobile Computing Conference (IWCMC), 2018, pp. 232-237: IEEE] jointly optimize a task-cache-based computation offloading and resource allocation strategy in a single-cell scenario in order to reduce task completion delay. They first model offloading, resource allocation and task caching, taking the minimization of the completion delay of all tasks as the optimization objective; the resulting delay-optimization problem is NP-hard and cannot be solved in polynomial time, so the model is decomposed into two parts, and an improved greedy algorithm is designed to solve the offloading strategy.
Most existing research on task caching designs the caching strategy from the perspective of task popularity alone. However, for compute-intensive applications such as virtual reality, AR services and large games, the amount of computation and the data size of a task also affect its caching value, not just its popularity. For example, a task may be the most popular yet require little computation to complete, whereas caching a less popular but computation-heavy task may yield greater benefit. How to allocate the weight of each factor reasonably is therefore a problem worth considering: if the weights are allocated unreasonably, the estimated cache value of a task is distorted.
Disclosure of Invention
The invention aims to provide a task cache-based computation offloading method in edge computing that addresses the above shortcomings of the prior art. The method can effectively reduce the running time of the algorithm and improve the response speed of the server; its task cache mechanism reduces the system energy consumption of repeated computation, lowers the computation cost of the server, and reduces the weighted sum of the completion delay and energy consumption of all tasks.
The technical scheme for realizing the purpose of the invention is as follows:
A task cache-based computation offloading method in edge computing comprises, in a single-cell scenario, the following steps:
1) constructing a system model: an MEC server is deployed at the base station and connected to a remote central cloud by optical fiber. Assume that a number of mobile devices are randomly distributed in the cell and that each user generates only one task per service period (time slot). The mobile users are indexed m ∈ {1, 2, ..., M}, and the task T_i (i ∈ {1, 2, ..., N}) generated by a mobile device is represented as a quadruple
T_i = (D_i^in, C_i, D_i^out, τ_i),
where D_i^in is the input data volume of the task, in kbit; C_i is the number of CPU cycles required to complete the task, in cycles; D_i^out is the output data volume of the task, i.e. the size of the computation result, in kbit; and τ_i is the maximum tolerable delay for completing the task, in ms. This completes the system model. The research of this scheme focuses on the mobile terminal and the edge server: the mobile terminal generates and runs applications and, if the computing demand of an application cannot be satisfied locally, issues an offloading request to the edge server over the mobile network; the edge server receives offloading requests from mobile terminals, allocates appropriate computing resources so that each task can be completed within its tolerable delay, and decides whether to cache a task according to its cache value;
2) constructing a system communication model: the MEC server provides computing services to the mobile devices in the cell; each user generates at most one compute-intensive task per time slot and chooses whether to offload it to the MEC. Let A = {a_1, a_2, ..., a_N} denote the set of offloading decisions of all mobile devices, where a_i = 0 means the task of M_i is executed on the local device and a_i = 1 means the task is offloaded to the MEC. Assuming no intra-cell interference, by Shannon's formula the rate at which M_i transmits its task to the MEC server is
r_i = B log2(1 + p_i g_i / N_0),
where B is the channel bandwidth, g_i is the channel gain between mobile device M_i and the MEC server, p_i is the transmit power of M_i and N_0 is the Gaussian noise power, in dBm. This completes the system communication model;
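The uplink rate in step 2 is Shannon's capacity formula, which appears only as an image in the original filing. A minimal sketch, assuming linear-scale powers in watts and an explicit bandwidth parameter B (the patent states noise power in dBm; converting dBm to watts is left to the caller):

```python
import math

def transmission_rate_kbit_s(bandwidth_hz: float, p_tx_w: float,
                             channel_gain: float, noise_w: float) -> float:
    """Uplink rate r_i = B * log2(1 + p_i * g_i / N) from Shannon's formula.

    Powers in watts (linear scale), bandwidth in Hz; returns kbit/s.
    """
    snr = p_tx_w * channel_gain / noise_w
    return bandwidth_hz * math.log2(1.0 + snr) / 1000.0

# Illustrative numbers (assumptions): 1 MHz bandwidth, 0.1 W transmit power,
# channel gain 1e-5, noise power 1e-9 W -> SNR of 1000
rate = transmission_rate_kbit_s(1e6, 0.1, 1e-5, 1e-9)
```

A higher transmit power or channel gain raises the rate monotonically, which is why the transmission delay D_i^in / r_i of offloading shrinks for well-placed users.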
3) constructing a system computation model: when processing a task, the user sends an offloading request to the edge server; the computing task is offloaded only if the application's tolerable delay is satisfied and the system overhead can be reduced;
3.1) constructing a local computation model: when task T_i is executed locally, let the CPU frequency of mobile device M_i be f_i^l (in GHz); the local computation delay T_i^l is
T_i^l = C_i / f_i^l,
and the local energy consumption E_i^l is calculated from
E_i^l = κ C_i (f_i^l)^2,
where κ is the CPU power consumption coefficient, a fixed constant determined by the chip process, here set to κ = 10^-26.
When computing task T_i is executed locally, the total system overhead comprises the local computation delay and the terminal energy consumption; the overhead of a single device Z_i^l is then
Z_i^l = α T_i^l + β E_i^l,
where α and β are the weight coefficients of delay and energy consumption respectively, satisfying:
α + β = 1, 0 ≤ α ≤ 1, 0 ≤ β ≤ 1;
3.2) constructing an MEC computation model: when computing task T_i is offloaded to the MEC, its latency includes the transmission delay from mobile device M_i to the MEC and the execution delay at the MEC. Let f_i^c denote the computing power the MEC assigns to the user; the delay of offloading consists of the transmission delay T_i^tran and the execution delay T_i^exec, so the total delay of offloading the task to the MEC is denoted T_i^c:
T_i^c = T_i^tran + T_i^exec = D_i^in / r_i + C_i / f_i^c.
The energy the mobile device consumes when offloading to the MEC is generated by transmitting the task over the wireless link; the transmission energy is expressed as
E_i^tran = p_i * D_i^in / r_i.
The energy consumption of the MEC itself is independent of the user and is not counted as system overhead, so the total overhead of offloading the task to the MEC server is:
Z_i^c = α T_i^c + β E_i^tran,
completing the construction of a system computing model;
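The local and MEC overhead expressions of step 3 can be sketched directly. The following is an illustrative implementation under the patent's delay and energy models; all numeric values in the usage note are assumptions, not from the filing:

```python
def local_overhead(c_cycles: float, f_local_hz: float,
                   alpha: float, beta: float, kappa: float = 1e-26) -> float:
    """Weighted cost of local execution: alpha*delay + beta*energy.

    Delay T = C / f; energy E = kappa * C * f^2 (the patent's CPU power model,
    with kappa = 1e-26 as stated in the description)."""
    delay = c_cycles / f_local_hz
    energy = kappa * c_cycles * f_local_hz ** 2
    return alpha * delay + beta * energy

def mec_overhead(d_in_kbit: float, c_cycles: float, rate_kbit_s: float,
                 f_mec_hz: float, p_tx_w: float,
                 alpha: float, beta: float) -> float:
    """Weighted cost of offloading: (transmission + MEC execution) delay,
    plus transmission energy only, since MEC-side energy is not charged
    to the user in the patent's overhead model."""
    t_tran = d_in_kbit / rate_kbit_s
    t_exec = c_cycles / f_mec_hz
    e_tran = p_tx_w * t_tran
    return alpha * (t_tran + t_exec) + beta * e_tran
```

For example, a task of 1e9 cycles on a 1 GHz handset costs delay 1 s and energy 10 J under this model, so offloading wins whenever the uplink is fast and the MEC CPU allocation is generous.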
4) constructing a resource allocation model: under the tolerable-delay constraint of each task, the edge server allocates appropriate computing resources according to task attributes. Let F = [f_1, f_2, ..., f_n] denote the computing-resource allocation vector, where f_i is the computing resource allocated to M_i. This completes the resource allocation model;
5) constructing a task value model and a cache model: the cache vector of the MEC over the task contents requested by users is expressed as H = {h_1, h_2, ..., h_n}, where h_i is a binary variable indicating whether the MEC has cached the task and related data content of user M_i: h_i = 0 means the MEC has not cached the content, h_i = 1 means it has. If the MEC has already cached the content, no task offloading is needed when the task is requested; the MEC directly returns the result to the mobile device after completing the task. Under the limits of cost and storage space, tasks of higher cache value are stored in the MEC and tasks of low cache value are replaced. Following the Zipf distribution of file popularity, the popularity of a task is calculated as:
θ_i = (1 / i^Z) / Σ_{j=1}^{N} (1 / j^Z),
where θ_i denotes the popularity of the i-th task and Z denotes the Zipf constant. The task set cached by the MEC server is denoted H_c, with maximum storage quantity C, initialized empty. Because the storage capacity of the MEC server is limited, it only caches tasks of higher cache value and finally outputs the task cache policy set H*. The cache value is defined as:
V_i = w_1 θ_i + w_2 D_i^in + w_3 C_i,
where w_1, w_2 and w_3 are the weight coefficients of the task popularity, the task input data size and the amount of computation required by the task respectively, satisfying:
w_1 + w_2 + w_3 = 1, 0 ≤ w_1 ≤ 1, 0 ≤ w_2 ≤ 1, 0 ≤ w_3 ≤ 1,
completing the construction of a system task value model and a cache model;
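The Zipf popularity and weighted cache value of step 5 can be sketched as follows. The linear weighting over normalised attributes is an assumption consistent with the weight constraints above, since the exact combination formula is rendered only as an image in the original:

```python
def zipf_popularity(rank: int, n_tasks: int, z: float = 0.8) -> float:
    """theta_i from the Zipf law: (1/i^Z) / sum_j (1/j^Z); rank is 1-based.

    z = 0.8 is an illustrative Zipf constant, not a value from the patent."""
    norm = sum(1.0 / j ** z for j in range(1, n_tasks + 1))
    return (1.0 / rank ** z) / norm

def cache_value(theta: float, d_in_norm: float, c_norm: float,
                w: tuple = (0.5, 0.2, 0.3)) -> float:
    """Weighted cache value over popularity, input-data size and computation
    amount; inputs are assumed normalised to [0, 1] and w sums to 1."""
    w1, w2, w3 = w
    return w1 * theta + w2 * d_in_norm + w3 * c_norm
```

Popularities over all ranks sum to one, so the popularity term is already on the same [0, 1] scale as the normalised size and computation terms.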
6) constructing a system overhead model: task completion delay and mobile-terminal energy consumption are the key indicators of the quality of an offloading strategy. Under the constraints of MEC computing capacity, storage resources and task tolerable delay, the system overhead is minimized. The total system overhead is
Z = Σ_{i=1}^{N} (1 - h_i) [ (1 - a_i) Z_i^l + a_i Z_i^c ].
The optimization objective of this scheme is the system benefit: obtain the optimal offloading decision A*, computing-resource allocation F* and caching strategy H* that minimize the system overhead. The optimization problem is therefore expressed as:
min_{A,F,H} Σ_{i=1}^{N} (1 - h_i) [ (1 - a_i) Z_i^l + a_i Z_i^c ]
A = {a_1, a_2, ..., a_N}
F = {f_1, f_2, ..., f_N}
H = {h_1, h_2, ..., h_N},
subject to the constraints:
C1: a_i ∈ {0, 1}, h_i ∈ {0, 1}, for all i
C2: f_i ≥ 0, for all i
C3: Σ_{i=1}^{N} a_i f_i ≤ F_c
C4: Σ_{i=1}^{N} h_i D_i^out ≤ C
C5: T_i ≤ τ_i,
where A is the offloading strategy set, F the computing-resource allocation strategy set and H the task caching strategy set. Constraint C1 states that a computing task can only be executed locally or on the MEC server; C2 and C3 state that the sum of computing resources allocated by the MEC server cannot exceed its computing capacity F_c; C4 states that the total cached content cannot exceed the total storage space C of the MEC server; C5 is the delay constraint, i.e. the completion time of a task cannot exceed its maximum tolerable delay. This completes the system overhead model;
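The constraint set C1-C5 can be checked mechanically for a candidate strategy triple (A, F, H). A sketch, with illustrative parameter names (f_mec_total for the MEC compute capacity, storage_cap for the total storage) that are not from the patent text:

```python
def feasible(a, f, h, t_complete, tau, f_mec_total, d_out, storage_cap) -> bool:
    """Check constraints C1-C5 for candidate strategies A, F, H.

    a, f, h, t_complete, tau, d_out are equal-length sequences indexed by task:
    offload decisions, compute allocations, cache flags, completion times,
    tolerable delays and output data sizes."""
    if any(ai not in (0, 1) or hi not in (0, 1) for ai, hi in zip(a, h)):
        return False                                     # C1: binary decisions
    if any(fi < 0 for fi in f):
        return False                                     # C2: nonnegative allocation
    if sum(ai * fi for ai, fi in zip(a, f)) > f_mec_total:
        return False                                     # C3: MEC compute capacity
    if sum(hi * di for hi, di in zip(h, d_out)) > storage_cap:
        return False                                     # C4: MEC storage capacity
    return all(ti <= taui for ti, taui in zip(t_complete, tau))   # C5: deadlines
```

Such a predicate is what a search procedure (greedy, heuristic or the Q-learning agent below) would call to reject infeasible actions.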
7) solving the problem: a Q-Learning algorithm and the Analytic Hierarchy Process are adopted:
The task-cache-based computation offloading studied here is a typical multi-objective combinatorial optimization problem and is NP-hard. Many heuristic algorithms exist for such problems, for example Game Theory, Genetic Algorithms (GA) and Particle Swarm Optimization (PSO). Although these heuristics perform well on their respective objectives, such as reducing the user's delay cost or energy cost, they respond to tasks and make decisions periodically, whereas in real scenarios tasks arrive randomly within a service period and require real-time responses. In addition, traditional algorithms need a large number of iterations to reach an optimal solution, resulting in high running-time cost;
To address this, the scheme applies reinforcement learning to the task offloading and resource allocation problems. Reinforcement learning obtains an approximately optimal solution by continual trial and error; it comprises four basic elements, namely state, action, reward and agent, aims to maximize long-term return, and is divided into model-based and model-free reinforcement learning algorithms;
The Analytic Hierarchy Process (AHP) is a decision model combining qualitative and quantitative analysis. It divides the elements relevant to a decision into a target layer, a criterion layer and a scheme layer, quantifies the importance of pairwise comparisons between same-level elements, and derives the weight coefficient of each element through computation, which makes it well suited to task scheduling scenarios requiring weight allocation;
The scheme determines task value from multiple angles, and tasks of higher value are given higher priority. Three aspects are considered: the task's maximum tolerable delay, the computing resources the task requires, and the task's data volume. Since the main objective is to maximize the task completion rate, the task's tolerable delay carries the largest weight; the required computing capacity and the data volume also influence the completion rate, while other factors have a smaller effect on overall value relative to the tolerable delay;
the technical scheme includes that a hierarchical structure model is constructed firstly, then factors of a criterion layer are compared pairwise, a judgment matrix of the criterion layer is constructed according to an objective judgment result, a characteristic vector, a characteristic root and a weighted value are calculated according to the judgment matrix, and finally the effectiveness of the judgment matrix is judged through consistency check analysis;
Q-Learning algorithm:
the system state is as follows: the system state s unloads the decision vector a, the computational resource allocation vector F, and the remaining computational resource vector G, i.e.:
S={A,F,G},
the system acts as follows: in the system, which tasks are unloaded and which are not unloaded are decided by the Agent, and how much computing resources are allocated to each task, so the system action is expressed as:
a={ai,fi},
wherein a isiRepresenting a task TiUnloading scheme of fiIs represented to task TiAn allocated computing resource;
The system reward: in time slot t, after the Agent executes a possible action a in state s it obtains a reward R(s, a). The reward function should be tied to the objective function, and the optimization problem here is to minimize the system overhead, so the reward is defined as:
R(s, a) = (c_local - c(s, a)) / c_local,
where c_local is the total system cost of executing all tasks locally at time t and c(s, a) is the total system cost of the JORC algorithm in the current state;
In the Q-Learning algorithm, the agent observes the current environment state s_t at time t, selects action a_t according to the Q table, executes a_t, then enters state s_{t+1} and obtains reward r; the Q table is updated by
Q(s_t, a_t) ← Q(s_t, a_t) + δ [ r + γ max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) ],
and the iteration continues until the Q values converge, yielding the optimal policy π*, where δ is the learning rate and γ (0 < γ < 1) is the discount factor;
the Q-Learning algorithm is as follows:
inputting: training round number T, learning rate mu, discount factor gamma, greedy coefficient epsilon, task set N and MEC residual computing resources;
and (3) outputting: offload policy A*Resource allocation strategy F*
1. Initializing a Q matrix
2.Repeat:
3. At an initial state s, an action a is selected according to a greedy strategy
4.Repeat
5. Get a return r at s-select action a according to a greedy policy and enter the next state st+1
6.
Figure BDA0002976500350000081
7.s=st+1
Until s is the termination State
Until Q value convergence;
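The tabular Q-Learning loop above can be sketched in a few lines. This is a generic implementation of the pseudocode, not the patent's JORC code; the environment interface and the fixed initial state are assumptions:

```python
import random
from collections import defaultdict

def q_learning(env_step, actions, episodes=300, delta=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-Learning following the patent's pseudocode (delta = learning
    rate, gamma = discount factor, eps = greedy coefficient).

    `env_step(s, a)` must return (next_state, reward, done); states and
    actions must be hashable. State 0 is assumed to start every episode."""
    q = defaultdict(float)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection over the Q table
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda x: q[(s, x)])
            s2, r, done = env_step(s, a)
            best_next = max(q[(s2, x)] for x in actions)
            q[(s, a)] += delta * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q
```

In the patent's setting the state would encode (A, F, G) and the action a pair (a_i, f_i); here they are abstracted behind `env_step` so the update rule itself stays visible.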
analytic hierarchy process:
1. building a hierarchical model
The hierarchical model has 3 layers: the first layer is the task priority P; the second layer comprises the task's maximum tolerable delay τ, the task's computation amount C and the task's data volume D; the third layer is the tasks T;
2. Construct the judgment matrix: within the hierarchy, a_ij denotes the importance of the i-th element relative to the j-th element with respect to a factor of the layer above. The criterion layer has 3 elements in total, so a judgment matrix A = (a_ij)_{3×3} is constructed, where a_ij > 0, a_ji = 1/a_ij and a_ii = 1. The three elements τ, C and D are all governed by P, and the judgment matrix collects their pairwise importance ratios. From the judgment matrix, the maximum eigenvalue λ_max of the matrix and the corresponding eigenvector W = (w_1, w_2, w_3)^T are solved.
3. Consistency check: compute the inconsistency index CI of judgment matrix A and, using the random index RI, the random consistency ratio CR:
CI = (λ_max - n) / (n - 1), CR = CI / RI.
When CR < 0.1, the judgment matrix has satisfactory consistency; otherwise A is revised until satisfactory consistency is reached. The weight vector W of the computing task is finally obtained, and the cache value of a task is expressed as:
V_i = w_1 θ_i + w_2 C_i + w_3 D_i,
where w_1, w_2 and w_3 are the weight coefficients of the task popularity, the CPU cycles required by the task and the task data volume respectively, satisfying:
w_1 + w_2 + w_3 = 1, 0 ≤ w_1 ≤ 1, 0 ≤ w_2 ≤ 1, 0 ≤ w_3 ≤ 1,
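The eigenvector and consistency check of the Analytic Hierarchy Process can be sketched with simple power iteration. The judgment matrix below is purely illustrative (the patent's concrete matrix appears only as an image); RI = 0.58 is Saaty's random index for a 3×3 matrix:

```python
def ahp_weights(judgment, iters=100):
    """Power-iteration approximation of the principal eigenvector of a
    pairwise-comparison (judgment) matrix, plus the consistency ratio
    CR = CI / RI with CI = (lambda_max - n) / (n - 1). RI = 0.58 for n = 3."""
    n = len(judgment)
    w = [1.0 / n] * n
    for _ in range(iters):
        aw = [sum(judgment[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(aw)
        w = [x / s for x in aw]                       # normalise so sum(w) = 1
    aw = [sum(judgment[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n     # lambda_max estimate
    ci = (lam - n) / (n - 1)
    cr = ci / 0.58                                    # random index for n = 3
    return w, cr

# Hypothetical judgment matrix: tau judged twice as important as C
# and three times as important as D (illustrative values only)
A = [[1, 2, 3],
     [1 / 2, 1, 2],
     [1 / 3, 1 / 2, 1]]
weights, cr = ahp_weights(A)
```

For this matrix the delay weight dominates, matching the patent's stated priority ordering, and CR is well below the 0.1 acceptance threshold.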
the method includes that time delay and energy consumption cost of a system are minimized through a Joint optimization computation unloading, resource allocation and task Caching method (JORC for short), an MEC server detects whether tasks are cached or not, if the MEC caches task computation results, the MEC returns directly, whether an MEC cache task set is replaced or not is determined through the JORC method, and if the computation task results are not cached, an unloading strategy and a resource allocation strategy are determined through a Q-learning algorithm to minimize the system cost, wherein the JORC method is described as follows:
Input: user request set N, server information G, cache state H.
Output: H*, A*, F*.
1. for i = 1 : N do
2. Mobile device i generates task T_i and issues an offloading request
3. if T_i is in the cache set then
4. return the cached computation result
5. else
6. add the task to task set M
7. end if
8. end for
9. Input M into the Q-learning algorithm to obtain A*, F*
10. if a task in H has a cache value lower than V_i then
11. replace it with T_i and its computation result
12. end if
13. Output the cache policy set H*
This completes the problem solving.
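The JORC front-end loop above (serve from cache, collect misses for Q-learning, replace low-value cache entries) can be sketched as follows; the data-structure choices and the top-k replacement rule are illustrative assumptions:

```python
def jorc_dispatch(requests, cache, value_of, capacity):
    """Sketch of the JORC front-end: serve cached tasks directly, collect
    the rest for the Q-learning offloading stage, and keep the `capacity`
    highest-value tasks in the cache.

    `requests` is the list of task ids requested this slot, `cache` the set
    of cached task ids, `value_of(task)` the task's cache value V_i."""
    served_from_cache, to_offload = [], []
    for task in requests:
        if task in cache:
            served_from_cache.append(task)   # result returned directly
        else:
            to_offload.append(task)          # goes to the Q-learning stage
    # cache replacement: retain the `capacity` tasks of highest cache value
    candidates = set(cache) | set(to_offload)
    new_cache = set(sorted(candidates, key=value_of, reverse=True)[:capacity])
    return served_from_cache, to_offload, new_cache
```

Keeping the replacement step separate from the offloading decision mirrors the patent's structure, where the cache policy set H* is produced after the Q-learning stage returns A* and F*.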
Compared with the existing research, the technical scheme has the following characteristics:
1. For the task caching problem, the scheme jointly considers task offloading, bandwidth and computing-resource allocation, and task caching, and models the minimization of task completion delay and energy consumption under computing, wireless and MEC storage-resource constraints. The offloading process is modeled as an MDP, and a Q-Learning-based algorithm is designed to solve it.
2. The task cache value is determined from multiple attributes of a task: the task popularity, the task input data size and the amount of computation the task requires are combined with different weights to determine the cache value. Prioritizing tasks of higher cache value better reduces the server's computation cost and the weighted sum of the completion delay and energy consumption of all tasks.
3. Considering the various attributes of tasks comprehensively, a weight allocation algorithm based on the Analytic Hierarchy Process is proposed to allocate the weights.
The method can effectively reduce the operation time of the algorithm, improve the response speed of the server, reduce the system energy consumption of repeated operation by the task cache mechanism, reduce the calculation cost of the server, and reduce the completion delay and the weighted sum of the energy consumption of all tasks.
Drawings
FIG. 1 is a frame diagram of an embodiment;
FIG. 2 is a schematic view of a hierarchy model of a hierarchy analysis method according to an embodiment.
Detailed Description
The invention is described in further detail below with reference to the following figures and specific examples, but the invention is not limited thereto.
Example (b):
the example considers the development of research for different types of virtual reality tasks in a single-cell scenario. A combined unloading and task caching strategy is provided, the strategy comprehensively considers the schemes of task unloading, bandwidth and computing resource allocation, firstly, unloading, resource allocation and task caching models are modeled, for task caching, caching values are defined according to various attributes of tasks, secondly, the problems of computing unloading and resource allocation are formalized and modeled into a Markov model, and the sum of time delay for completing the tasks and energy consumption weighting is minimized to serve as evaluation indexes.
Referring to fig. 1, a task cache-based computation offloading method in edge computation includes, in a single-cell scenario, the following steps:
1) Constructing a system model: an MEC server is configured at the base station and connected to a remote central cloud through optical fiber. Assume several mobile devices are randomly distributed in the cell and each user generates only one task per service period (time slot); mobile users are numbered m ∈ {1, 2, ..., M}, and the task T_i (i ∈ {1, 2, ..., N}) generated by a mobile device is represented as a quadruple:

T_i = (d_i^in, C_i, d_i^out, τ_i)

where d_i^in is the input data volume of the task, in kbit; C_i is the number of CPU cycles required to complete the task, in cycles; d_i^out is the output data volume of the task, i.e. the size of the task's computation result, in kbit; and τ_i is the maximum tolerable delay for completing the task, in ms, completing the system model;
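The task quadruple maps naturally onto a small record type. A minimal Python sketch (field names are illustrative, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class Task:
    """One computation task T_i = (d_in, cycles, d_out, tau)."""
    d_in: float    # input data volume, kbit
    cycles: float  # CPU cycles C_i needed to finish the task
    d_out: float   # output (result) data volume, kbit
    tau: float     # maximum tolerable completion delay, ms

# e.g. a 400-kbit task needing 1e9 cycles, with a 50-kbit result and a 100 ms deadline
t = Task(d_in=400.0, cycles=1e9, d_out=50.0, tau=100.0)
```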
2) Constructing a system communication model: the MEC server provides computing services to the mobile devices in the cell; each user generates at most one compute-intensive task per time slot and chooses whether to offload it to the MEC. Let A = {a_1, a_2, ..., a_N} denote the set of offloading decisions of all mobile devices, where a_i = 0 means the task of M_i is executed on the local device and a_i = 1 means the task is offloaded to the MEC. Assuming no intra-cell interference in the cell, by Shannon's formula the rate at which M_i transmits its task to the MEC server is:

r_i = B·log2(1 + p_i·g_i / N)

where B is the channel bandwidth, g_i is the channel gain between mobile device M_i and the MEC server, p_i is the transmit power of M_i, and N is the Gaussian noise power, in dBm, completing the system communication model;
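As a concrete reading of the rate formula, a small Python helper (assuming linear power units rather than dBm, and making the bandwidth parameter B explicit, which the patent text leaves implicit):

```python
import math

def transmission_rate(bandwidth_hz: float, p_tx: float, gain: float, noise: float) -> float:
    """Shannon rate r_i = B * log2(1 + p_i * g_i / N), in bit/s."""
    return bandwidth_hz * math.log2(1.0 + p_tx * gain / noise)

# 10 MHz channel, 100 mW transmit power, path gain 1e-6, noise power 1e-9 W
r = transmission_rate(10e6, 0.1, 1e-6, 1e-9)
```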
3) Constructing a system computation model: the method considers task-processing delay and energy consumption together as the system overhead. During task processing, a user sends an offloading request to the edge server, and a computation task is offloaded only when the application's tolerable delay is met and the system overhead can be reduced;
3.1) Constructing a local computation model: when task T_i is executed locally, let f_i^l (in GHz) be the CPU frequency of mobile device M_i; the computation delay of executing T_i locally is T_i^l:

T_i^l = C_i / f_i^l

The energy consumption E_i^l is calculated as:

E_i^l = κ·C_i·(f_i^l)^2

where κ is the CPU power-consumption coefficient, a fixed constant determined by the chip process, set here to κ = 10^-26. When task T_i is executed locally, the total system overhead comprises the local computation delay and the terminal energy consumption; the overhead of a single device, denoted Z_i^l, is:

Z_i^l = α·T_i^l + β·E_i^l
in the formula, α and β are weight coefficients of time delay and energy consumption, respectively, and satisfy the following conditions:
α+β=1,0≤α≤1,0≤β≤1;
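The local delay, energy, and weighted overhead above can be sketched directly (κ = 10^-26 as in the text; the function name and signature are assumptions):

```python
KAPPA = 1e-26  # CPU power-consumption coefficient kappa from the text

def local_overhead(cycles: float, f_local: float, alpha: float = 0.5, beta: float = 0.5) -> float:
    """Weighted single-device cost of local execution:
    delay T = C_i / f_i^l plus energy E = kappa * C_i * (f_i^l)^2."""
    delay = cycles / f_local              # seconds, with f_local in Hz
    energy = KAPPA * cycles * f_local ** 2
    return alpha * delay + beta * energy
```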
3.2) Constructing an MEC computation model: when task T_i is offloaded to the MEC, its delay comprises the transmission delay from mobile device M_i to the MEC and the execution delay at the MEC. Let f_i^c denote the computing power the MEC assigns to the user; the delay of offloading to the MEC comprises the transmission delay T_i^tran and the execution delay T_i^exec, so the total delay of offloading the task to the MEC is denoted T_i^c:

T_i^c = T_i^tran + T_i^exec = d_i^in / r_i + C_i / f_i^c

The energy consumed by the mobile device when offloading to the MEC is the transmission energy of sending the task over the wireless link, expressed as E_i^tran:

E_i^tran = p_i·d_i^in / r_i

The MEC's own energy consumption is independent of the user and is not counted as system overhead, so the total overhead of offloading the task to the MEC server is:

Z_i^c = α·T_i^c + β·E_i^tran

completing the construction of the system computation model;
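The offloading-side counterpart combines the transmission and MEC execution delays with the device's transmission energy (a sketch under the same assumptions; the MEC's own energy is ignored as in the text):

```python
def offload_overhead(d_in_bits: float, cycles: float, rate_bps: float,
                     f_mec: float, p_tx: float,
                     alpha: float = 0.5, beta: float = 0.5) -> float:
    """Cost of offloading task i to the MEC:
    T^c = d_in / r + C / f^c, transmission energy E^tran = p * d_in / r."""
    t_tran = d_in_bits / rate_bps
    t_exec = cycles / f_mec
    e_tran = p_tx * t_tran
    return alpha * (t_tran + t_exec) + beta * e_tran
```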
4) Constructing a resource allocation model: under the task's tolerable-delay constraint, the edge server allocates appropriate computing resources according to the task attributes; let F = [f_1, f_2, ..., f_N] be the computing-resource allocation vector, where f_i denotes the computing resources allocated to M_i, completing the resource allocation model;
5) Constructing a task value model and a cache model: in this example the caching strategy is determined by three factors: the task popularity, the computation the task requires, and the task data volume. The cache vector of the MEC over the task contents requested by users is expressed as H = {h_1, h_2, ..., h_N}, where h_i is a binary variable indicating whether the MEC has cached user M_i's task and related data content: h_i = 0 means the MEC has not cached the content and h_i = 1 means it has. If the MEC has already cached the content, the task need not be offloaded when it is requested; after the task completes, the MEC returns the result directly to the mobile device. Under cost and storage-space limits, tasks of higher cache value are stored on the MEC and tasks of low cache value are replaced. Following the Zipf distribution of file popularity, the popularity of a task is calculated as:

θ_i = i^(-Z) / Σ_{j=1}^{N} j^(-Z)

where θ_i denotes the popularity of the i-th task and Z the Zipf constant. The task set cached by the MEC server is denoted H_c, with maximum storage capacity C, initialized empty; because the memory capacity of the MEC server is limited, it caches only tasks of higher cache value and finally outputs a task caching policy set H*. The cache value is defined as:

V_i = w_1·θ_i + w_2·d_i^in + w_3·C_i

where w_1, w_2, and w_3 are the weight coefficients of the task popularity, the task input-data size, and the computation the task requires, respectively, satisfying:

w_1 + w_2 + w_3 = 1, 0 ≤ w_1 ≤ 1, 0 ≤ w_2 ≤ 1, 0 ≤ w_3 ≤ 1,

completing the system task value model and cache model;
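A sketch of the popularity and cache-value computation (the Zipf exponent and the weighted-sum form follow the text; the concrete weights are placeholders for the AHP-derived ones):

```python
def zipf_popularity(rank: int, n_tasks: int, z: float = 0.8) -> float:
    """theta_i = i^(-Z) / sum_j j^(-Z): popularity of the task ranked `rank`."""
    norm = sum(j ** -z for j in range(1, n_tasks + 1))
    return rank ** -z / norm

def cache_value(theta: float, d_in: float, cycles: float,
                w=(0.6, 0.2, 0.2)) -> float:
    """Weighted cache value over popularity, input-data size, and computation."""
    w1, w2, w3 = w
    return w1 * theta + w2 * d_in + w3 * cycles
```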
6) Constructing a system overhead model: task-completion delay and mobile-terminal energy consumption are both key indicators of the quality of a computation-offloading strategy. This example jointly optimizes the computation-offloading, resource-allocation, and task-caching strategies in a single-MEC scenario, minimizing the system overhead under the constraints of MEC computing capacity, storage resources, and task-tolerable delay. The overhead-minimization problem is modeled as:

Z(A, F, H) = Σ_{i=1}^{N} [(1 − a_i)·Z_i^l + a_i·(1 − h_i)·Z_i^c]

The optimization objective targets the system benefit, seeking the optimal offloading decision A*, computing-resource allocation F*, and caching strategy H* that minimize the system overhead; the optimization problem is therefore expressed as:

min_{A,F,H} Z(A, F, H)
A = {a_1, a_2, ..., a_N}
F = {f_1, f_2, ..., f_N}
H = {h_1, h_2, ..., h_N},

subject to:

C1: a_i ∈ {0, 1}, ∀i
C2: f_i ≥ 0, ∀i
C3: Σ_{i=1}^{N} a_i·f_i ≤ F_total
C4: Σ_{i=1}^{N} h_i·d_i^in ≤ C
C5: τ_i ≤ τ,

where A is the offloading-strategy set, F the computing-resource allocation-strategy set, and H the task-caching-strategy set. Constraint C1 states that a computation task can only be executed locally or on the MEC server; C2 and C3 state that the sum of computing resources allocated by the MEC server cannot exceed its computing capacity F_total; C4 states that the total cached volume cannot exceed the MEC server's total storage space; C5 is the delay constraint, i.e. a task's completion time cannot exceed the maximum delay the task can tolerate, completing the system overhead model;
7) solving the problem: solving by adopting a Q-Learning algorithm and an analytic hierarchy process:
the task-cache-based computation migration studied in this example is a typical multi-objective combinatorial optimization problem and is NP-hard. Many heuristic algorithms can solve it, for example game theory, the genetic algorithm (GA), and particle swarm optimization (PSO). Although these heuristics perform well on their respective targets, such as reducing the user's delay cost or energy cost, they respond to tasks and make decisions periodically, whereas in a real scenario tasks arrive randomly and need a real-time response; in addition, conventional algorithms need a large number of iterations to obtain an optimal solution, resulting in a high running-time cost;
to address this, the example applies reinforcement learning to the task-offloading and resource-allocation problems. Reinforcement learning obtains an approximately optimal solution by continual trial and error; it comprises four basic elements, namely state, action, reward, and agent, and aims to maximize long-term return. It divides into model-based and model-free algorithms; because a real network scenario is too complex to model, this example uses the model-free Q-Learning algorithm;
the analytic hierarchy process is a combined qualitative and quantitative decision model. It divides the decision-related elements into a target layer, a criterion layer, and a scheme layer, quantitatively analyzes the pairwise importance of elements within a level, and then derives the weight coefficient of each element by calculation, making it well suited to weight assignment in a task-scheduling scenario;
the task value is determined from several angles, and tasks of higher value have higher priority. Task value is considered mainly from three aspects: the task's maximum tolerable delay, the computing resources the task requires, and the task's data volume. To maximize the task-completion rate, the tolerable delay carries the largest weight; the required computing capacity and the data volume also affect the completion rate, while other factors have little influence on the overall value relative to the tolerable delay;
in the technical scheme, a hierarchical structure model is built first; the factors of the criterion layer are then compared pairwise and a judgment matrix is constructed from the objective comparison results; the eigenvector, eigenvalue, and weight values are computed from the judgment matrix; finally, the validity of the judgment matrix is verified by a consistency check;
Q-Learning algorithm:
The system state: the system state s comprises the offloading decision vector A, the computing-resource allocation vector F, and the remaining computing-resource vector G, i.e.:
S={A,F,G},
the system acts as follows: in the system, which tasks are unloaded and which are not unloaded are decided by the Agent, and how much computing resources are allocated to each task, so the system action is expressed as:
a={ai,fi},
where a_i denotes the offloading decision for task T_i, and f_i the computing resources allocated to task T_i;
System reward: in time slot t, after the Agent executes a possible action in a given state it obtains a reward R(s, a). The reward function should be tied to the objective function; since the optimization problem here minimizes the system overhead, it is defined as:

R(s, a) = (c_local − c(s, a)) / c_local

where c_local is the total system cost of executing the tasks locally at time t, and c(s, a) is the total system cost of the JORC algorithm in the current state;
in the Q-Learning algorithm, the agent observes the current environment state s_t at time t, selects action a_t according to the Q table, executes a_t, enters state s_{t+1}, and obtains reward r. The Q table is updated by the following formula, iterating until the Q values converge to obtain the optimal policy π*:

Q(s_t, a_t) ← Q(s_t, a_t) + δ·[r + γ·max_a Q(s_{t+1}, a) − Q(s_t, a_t)]

where δ is the learning rate and γ (0 < γ < 1) is the discount factor;
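The update rule can be sketched as a single step over a dictionary-backed Q table (a minimal illustration; the state and action encodings are assumptions):

```python
from collections import defaultdict

def q_update(q, s, a, r, s_next, actions, delta=0.1, gamma=0.9):
    """One Q-Learning step:
    Q(s,a) <- Q(s,a) + delta * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[(s_next, a2)] for a2 in actions)
    q[(s, a)] += delta * (r + gamma * best_next - q[(s, a)])
    return q[(s, a)]

q = defaultdict(float)  # unseen (state, action) pairs start at 0
q_update(q, "s0", "offload", 1.0, "s1", ["offload", "local"], delta=1.0)
```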
the Q-Learning algorithm is as follows:
Input: number of training rounds T, learning rate μ, discount factor γ, greedy coefficient ε, task set N, and the MEC's remaining computing resources;
Output: offloading policy A*, resource-allocation policy F*
1. Initialize the Q matrix
2. Repeat:
3. In the initial state s, select an action a according to the ε-greedy strategy
4. Repeat:
5. Execute a to obtain reward r, select the next action a according to the ε-greedy strategy, and enter the next state s_{t+1}
6. Q(s, a) ← Q(s, a) + μ·[r + γ·max_{a'} Q(s_{t+1}, a') − Q(s, a)]
7. s = s_{t+1}
8. Until s is a termination state
9. Until the Q values converge;
analytic hierarchy process:
1. building a hierarchical model
The hierarchical model has 3 layers: the first layer is the task priority P; the second layer comprises the task's maximum tolerable delay τ, the task computation amount C, and the task data volume D; the third layer is the tasks T, as shown in FIG. 2;
2. Constructing the judgment matrix: within a level, a_ij denotes the importance of the i-th element relative to the j-th element with respect to a factor of the layer above. The criterion layer has 3 elements in total, so a judgment matrix A = (a_ij)_{3×3} is constructed, where:

a_ij > 0, a_ji = 1/a_ij, a_ii = 1,

the three elements τ, C, and D are all governed by P, and the judgment matrix A is filled in from their pairwise comparisons;
from the judgment matrix, the maximum eigenvalue λ_max of the matrix and the corresponding eigenvector W = (w_1, w_2, w_3)^T are solved;
3. Consistency check: calculate the inconsistency index CI of judgment matrix A and, using the random-consistency index RI, obtain the consistency ratio CR:

CI = (λ_max − n) / (n − 1), CR = CI / RI

When CR < 0.1, the judgment matrix has satisfactory consistency; otherwise A is revised until satisfactory consistency is reached. The weight vector W of the computation task is finally obtained, and the cache value of a task is expressed as:
V_i = w_1·θ_i + w_2·C_i + w_3·d_i^in

where w_1, w_2, and w_3 are the weight coefficients of the task popularity, the CPU cycles the task requires, and the task's data volume, respectively, satisfying:

w_1 + w_2 + w_3 = 1, 0 ≤ w_1 ≤ 1, 0 ≤ w_2 ≤ 1, 0 ≤ w_3 ≤ 1,
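The eigenvector-and-consistency computation above can be sketched in pure Python via power iteration (the example judgment matrix is illustrative, not the patent's; RI = 0.58 is the standard random-consistency index for n = 3):

```python
def ahp_weights(m, iters=200):
    """Return (weights, CR): the normalized principal eigenvector of judgment
    matrix m, and the consistency ratio CR = CI / RI with CI = (lam - n)/(n - 1)."""
    n = len(m)
    w = [1.0 / n] * n
    for _ in range(iters):  # power iteration toward the principal eigenvector
        v = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    aw = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n  # estimate of lambda_max
    ci = (lam - n) / (n - 1)
    return w, ci / 0.58  # RI = 0.58 for n = 3

# illustrative comparisons: tau judged 3x as important as C, 5x as important as D
judgment = [[1.0, 3.0, 5.0], [1/3, 1.0, 3.0], [1/5, 1/3, 1.0]]
weights, cr = ahp_weights(judgment)
```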
the system's delay and energy cost are minimized by jointly optimizing computation offloading, resource allocation, and task caching: the MEC server checks whether a task is cached; if the MEC has cached the task's computation result, the result is returned directly, and the JORC method decides whether to replace the MEC's cached task set; if the computation result is not cached, the Q-learning algorithm determines the offloading and resource-allocation strategies so as to minimize the system cost. The JORC method is described as follows:
Input: user request set N, server information G, cache state H,
Output: H*, A*, F*
1. for i = 1 : N do
2. Mobile device i generates task T_i and issues an offload request
3. if T_i is in the cache set
4. Return the computation result
5. else
6. Add the task to task set M
7. end if
8. end for
9. Input M into the Q-learning algorithm to obtain A*, F*
10. if H contains a task whose cache value is less than the cache value V_i of T_i
11. Replace that entry with T_i
12. end if
13. Output the caching policy set H*,
completing the solution of the problem.
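The cache-first dispatch in the JORC flow can be sketched as follows (a simplified illustration; the eviction rule keeps the highest-value entries, and the names and capacity are assumptions):

```python
def serve_request(task_id, value, cache, capacity=3):
    """Return "hit" if the task's result is cached; otherwise record a miss
    and insert the task if the cache has room or the task's cache value
    beats the weakest cached entry (which is then evicted)."""
    if task_id in cache:
        return "hit"
    if len(cache) < capacity:
        cache[task_id] = value
    else:
        weakest = min(cache, key=cache.get)
        if cache[weakest] < value:
            del cache[weakest]
            cache[task_id] = value
    return "miss"

cache = {}
serve_request("t1", 5.0, cache)   # miss: result computed, then cached
serve_request("t1", 5.0, cache)   # hit: result returned directly
```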

Claims (1)

1. A task cache-based computation unloading method in edge computation is characterized by comprising the following steps of:
1) Constructing a system model: an MEC server is configured at the base station and connected to a remote central cloud through optical fiber. Assume several mobile devices are randomly distributed in the cell, each user generating only one task per service period (time slot); mobile users are numbered m ∈ {1, 2, ..., M}, and the task T_i (i ∈ {1, 2, ..., N}) generated by a mobile terminal is represented as a quadruple:

T_i = (d_i^in, C_i, d_i^out, τ_i)

where d_i^in is the input data volume of the task, in kbit; C_i is the number of CPU cycles required to complete the task, in cycles; d_i^out is the output data volume of the task, i.e. the size of the task's computation result, in kbit; and τ_i is the maximum tolerable delay for completing the task, in ms, completing the system model;
2) Constructing a system communication model: the MEC server provides computing services to the mobile devices in the cell; each user generates at most one compute-intensive task per time slot and chooses whether to offload it to the MEC. Let A = {a_1, a_2, ..., a_N} denote the set of offloading decisions of all mobile devices, where a_i = 0 means the task of M_i is executed on the local device and a_i = 1 means the task is offloaded to the MEC. Assuming no intra-cell interference in the cell, by Shannon's formula the rate at which M_i transmits its task to the MEC server is:

r_i = B·log2(1 + p_i·g_i / N)

where B is the channel bandwidth, g_i is the channel gain between mobile device M_i and the MEC server, p_i is the transmit power of M_i, and N is the Gaussian noise power, in dBm, completing the system communication model;
3) constructing a system calculation model: in the process of processing tasks, a user sends an unloading request to an edge server, and the calculation tasks are unloaded only under the condition that the tolerance time delay of an application program is met and the system overhead can be reduced;
3.1) Constructing a local computation model: when task T_i is executed locally, let f_i^l (in GHz) be the CPU frequency of mobile device M_i; the computation delay of executing T_i locally is T_i^l:

T_i^l = C_i / f_i^l

The energy consumption E_i^l is calculated as:

E_i^l = κ·C_i·(f_i^l)^2

where κ is the CPU power-consumption coefficient, a fixed constant determined by the chip process, set here to κ = 10^-26. When task T_i is executed locally, the total system overhead comprises the local computation delay and the terminal energy consumption; the overhead of a single device, denoted Z_i^l, is:

Z_i^l = α·T_i^l + β·E_i^l
in the formula, α and β are weight coefficients of time delay and energy consumption, respectively, and satisfy the following conditions:
α+β=1,0≤α≤1,0≤β≤1;
3.2) Constructing an MEC computation model: when task T_i is offloaded to the MEC, its delay comprises the transmission delay from mobile device M_i to the MEC and the execution delay at the MEC. Let f_i^c denote the computing power the MEC assigns to the user; the delay of offloading to the MEC comprises the transmission delay T_i^tran and the execution delay T_i^exec, so the total delay of offloading the task to the MEC is denoted T_i^c:

T_i^c = T_i^tran + T_i^exec = d_i^in / r_i + C_i / f_i^c

The energy consumed by the mobile device when offloading to the MEC is the transmission energy of sending the task over the wireless link, expressed as E_i^tran:

E_i^tran = p_i·d_i^in / r_i

The MEC's own energy consumption is independent of the user and is not counted as system overhead, so the total overhead of offloading the task to the MEC server is:

Z_i^c = α·T_i^c + β·E_i^tran

completing the construction of the system computation model;
4) Constructing a resource allocation model: under the task's tolerable-delay constraint, the edge server allocates appropriate computing resources according to the task attributes; let F = [f_1, f_2, ..., f_N] be the computing-resource allocation vector, where f_i denotes the computing resources allocated to M_i, completing the resource allocation model;
5) Constructing a task value model and a cache model: the cache vector of the MEC over the task contents requested by users is expressed as H = {h_1, h_2, ..., h_N}, where h_i is a binary variable indicating whether the MEC has cached user M_i's task and related data content: h_i = 0 means the MEC has not cached the content and h_i = 1 means it has. If the MEC has already cached the content, the task need not be offloaded when it is requested; after the task completes, the MEC returns the result directly to the mobile device. Under cost and storage-space limits, tasks of higher cache value are stored on the MEC and tasks of low cache value are replaced. Following the Zipf distribution of file popularity, the popularity of a task is calculated as:

θ_i = i^(-Z) / Σ_{j=1}^{N} j^(-Z)

where θ_i denotes the popularity of the i-th task and Z the Zipf constant. The task set cached by the MEC server is denoted H_c, with maximum storage capacity C, initialized empty; because the memory capacity of the MEC server is limited, it caches only tasks of higher cache value and finally outputs a task caching policy set H*. The cache value is defined as:

V_i = w_1·θ_i + w_2·d_i^in + w_3·C_i

where w_1, w_2, and w_3 are the weight coefficients of the task popularity, the task input-data size, and the computation the task requires, respectively, satisfying:

w_1 + w_2 + w_3 = 1, 0 ≤ w_1 ≤ 1, 0 ≤ w_2 ≤ 1, 0 ≤ w_3 ≤ 1,

completing the system task value model and cache model;
6) Constructing a system overhead model: task-completion delay and mobile-terminal energy consumption are key indicators of the quality of a computation-offloading strategy. Under the constraints of MEC computing capacity, storage resources, and task-tolerable delay, the system overhead is minimized, and the overhead-minimization problem is modeled as:

Z(A, F, H) = Σ_{i=1}^{N} [(1 − a_i)·Z_i^l + a_i·(1 − h_i)·Z_i^c]

The optimization objective targets the system benefit, seeking the optimal offloading decision A*, computing-resource allocation F*, and caching strategy H* that minimize the system overhead; the optimization problem is therefore expressed as:

min_{A,F,H} Z(A, F, H)
A = {a_1, a_2, ..., a_N}
F = {f_1, f_2, ..., f_N}
H = {h_1, h_2, ..., h_N},

subject to:

C1: a_i ∈ {0, 1}, ∀i
C2: f_i ≥ 0, ∀i
C3: Σ_{i=1}^{N} a_i·f_i ≤ F_total
C4: Σ_{i=1}^{N} h_i·d_i^in ≤ C
C5: τ_i ≤ τ,

where A is the offloading-strategy set, F the computing-resource allocation-strategy set, and H the task-caching-strategy set. Constraint C1 states that a computation task can only be executed locally or on the MEC server; C2 and C3 state that the sum of computing resources allocated by the MEC server cannot exceed its computing capacity F_total; C4 states that the total cached volume cannot exceed the MEC server's total storage space; C5 is the delay constraint, i.e. a task's completion time cannot exceed the maximum delay the task can tolerate, completing the system overhead model;
7) solving the problem: solving by adopting a Q-Learning algorithm and an analytic hierarchy process:
Q-Learning algorithm:
The system state: the system state s comprises the offloading decision vector A, the computing-resource allocation vector F, and the remaining computing-resource vector G, i.e.:
S={A,F,G},
the system acts as follows: in the system, which tasks are unloaded and which are not unloaded are decided by the Agent, and how much computing resources are allocated to each task, so the system action is expressed as:
a={ai,fi},
where a_i denotes the offloading decision for task T_i, and f_i the computing resources allocated to task T_i;
System reward: in time slot t, after the Agent executes a possible action in a given state it obtains a reward R(s, a). The reward function should be tied to the objective function; since the optimization problem here minimizes the system overhead, it is defined as:

R(s, a) = (c_local − c(s, a)) / c_local

where c_local is the total system cost of executing the tasks locally at time t, and c(s, a) is the total system cost of the JORC algorithm in the current state;
in the Q-Learning algorithm, the agent observes the current environment state s_t at time t, selects action a_t according to the Q table, executes a_t, enters state s_{t+1}, and obtains reward r. The Q table is updated by the following formula, iterating until the Q values converge to obtain the optimal policy π*:

Q(s_t, a_t) ← Q(s_t, a_t) + δ·[r + γ·max_a Q(s_{t+1}, a) − Q(s_t, a_t)]

where δ is the learning rate and γ (0 < γ < 1) is the discount factor;
the Q-Learning algorithm is as follows:
Input: number of training rounds T, learning rate μ, discount factor γ, greedy coefficient ε, task set N, and the MEC's remaining computing resources;
Output: offloading policy A*, resource-allocation policy F*
1. Initialize the Q matrix
2. Repeat:
3. In the initial state s, select an action a according to the ε-greedy strategy
4. Repeat:
5. Execute a to obtain reward r, select the next action a according to the ε-greedy strategy, and enter the next state s_{t+1}
6. Q(s, a) ← Q(s, a) + μ·[r + γ·max_{a'} Q(s_{t+1}, a') − Q(s, a)]
7. s = s_{t+1}
8. Until s is a termination state
9. Until the Q values converge;
Analytic hierarchy process:
1. building a hierarchical model
The hierarchical model has 3 layers: the first layer is the task priority P; the second layer comprises the task's maximum tolerable delay τ, the task computation amount C, and the task data volume D; the third layer is the tasks T;
2. Constructing the judgment matrix: within a level, a_ij denotes the importance of the i-th element relative to the j-th element with respect to a factor of the layer above. The criterion layer has 3 elements in total, so a judgment matrix A = (a_ij)_{3×3} is constructed, where:

a_ij > 0, a_ji = 1/a_ij, a_ii = 1,

the three elements τ, C, and D are all governed by P, and the judgment matrix A is filled in from their pairwise comparisons;
from the judgment matrix, the maximum eigenvalue λ_max of the matrix and the corresponding eigenvector W = (w_1, w_2, w_3)^T are solved;
3. Consistency check: calculate the inconsistency index CI of judgment matrix A and, using the random-consistency index RI, obtain the consistency ratio CR:

CI = (λ_max − n) / (n − 1), CR = CI / RI

When CR < 0.1, the judgment matrix has satisfactory consistency; otherwise A is revised until satisfactory consistency is reached. The weight vector W of the computation task is finally obtained, and the cache value of a task is expressed as:
V_i = w_1·θ_i + w_2·C_i + w_3·d_i^in

where w_1, w_2, and w_3 are the weight coefficients of the task popularity, the CPU cycles the task requires, and the task's data volume, respectively, satisfying:

w_1 + w_2 + w_3 = 1, 0 ≤ w_1 ≤ 1, 0 ≤ w_2 ≤ 1, 0 ≤ w_3 ≤ 1,
the system's delay and energy cost are minimized by jointly optimizing computation offloading, resource allocation, and task caching: the MEC server checks whether a task is cached; if the MEC has cached the task's computation result, the result is returned directly, and the JORC method decides whether to replace the MEC's cached task set; if the computation result is not cached, the Q-learning algorithm determines the offloading and resource-allocation strategies so as to minimize the system cost. The JORC method is described as follows:
Input: user request set N, server information G, cache state H,
Output: H*, A*, F*
1. for i = 1 : N do
2. Mobile device i generates task T_i and issues an offload request
3. if T_i is in the cache set
4. Return the computation result
5. else
6. Add the task to task set M
7. end if
8. end for
9. Input M into the Q-learning algorithm to obtain A*, F*
10. if H contains a task whose cache value is less than the cache value V_i of T_i
11. Replace that entry with T_i
12. end if
13. Output the caching policy set H*,
completing the solution of the problem.
CN202110275573.5A 2021-03-15 2021-03-15 Task cache-based computation unloading method in edge computation Active CN112860350B (en)

Publications (2)

Publication Number Publication Date
CN112860350A true CN112860350A (en) 2021-05-28
CN112860350B CN112860350B (en) 2022-06-03

Family

ID=75994467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110275573.5A Active CN112860350B (en) 2021-03-15 2021-03-15 Task cache-based computation unloading method in edge computation

Country Status (1)

Country Link
CN (1) CN112860350B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109302709A (en) * 2018-09-14 2019-02-01 重庆邮电大学 The unloading of car networking task and resource allocation policy towards mobile edge calculations
CN109684075A (en) * 2018-11-28 2019-04-26 深圳供电局有限公司 Method for unloading computing tasks based on edge computing and cloud computing cooperation
CN109814951A (en) * 2019-01-22 2019-05-28 南京邮电大学 The combined optimization method of task unloading and resource allocation in mobile edge calculations network
CN110351754A (en) * 2019-07-15 2019-10-18 北京工业大学 Industry internet machinery equipment user data based on Q-learning calculates unloading decision-making technique
EP3605329A1 (en) * 2018-07-31 2020-02-05 Commissariat à l'énergie atomique et aux énergies alternatives Connected cache empowered edge cloud computing offloading
CN111031102A (en) * 2019-11-25 2020-04-17 哈尔滨工业大学 Multi-user, multi-task mobile edge computing system cacheable task migration method
CN111124666A (en) * 2019-11-25 2020-05-08 哈尔滨工业大学 Efficient and safe multi-user multi-task unloading method in mobile Internet of things
CN111405568A (en) * 2020-03-19 2020-07-10 三峡大学 Computing unloading and resource allocation method and device based on Q learning
CN111818168A (en) * 2020-06-19 2020-10-23 重庆邮电大学 Self-adaptive joint calculation unloading and resource allocation method in Internet of vehicles

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NI ZHANG et al.: "Joint task offloading and data caching in mobile edge computing networks", Computer Networks *

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112995343A (en) * 2021-04-22 2021-06-18 华南理工大学 Edge node calculation unloading method with performance and demand matching capability
CN112995343B (en) * 2021-04-22 2021-09-21 华南理工大学 Edge node calculation unloading method with performance and demand matching capability
CN113434212A (en) * 2021-06-24 2021-09-24 北京邮电大学 Cache auxiliary task cooperative unloading and resource allocation method based on meta reinforcement learning
CN113434212B (en) * 2021-06-24 2023-03-21 北京邮电大学 Cache auxiliary task cooperative unloading and resource allocation method based on meta reinforcement learning
CN113452625A (en) * 2021-06-28 2021-09-28 重庆大学 Deep reinforcement learning-based unloading scheduling and resource allocation method
CN113515378A (en) * 2021-06-28 2021-10-19 国网河北省电力有限公司雄安新区供电公司 Method and device for migration and calculation resource allocation of 5G edge calculation task
CN113504986A (en) * 2021-06-30 2021-10-15 广州大学 Cache-based edge computing system unloading method, device, equipment and medium
CN113377547A (en) * 2021-08-12 2021-09-10 南京邮电大学 Intelligent unloading and safety guarantee method for computing tasks in 5G edge computing environment
CN113377547B (en) * 2021-08-12 2021-11-23 南京邮电大学 Intelligent unloading and safety guarantee method for computing tasks in 5G edge computing environment
CN113726862A (en) * 2021-08-20 2021-11-30 北京信息科技大学 Calculation unloading method and device under multi-edge server network
CN113726862B (en) * 2021-08-20 2023-07-14 北京信息科技大学 Computing unloading method and device under multi-edge server network
CN113918318A (en) * 2021-09-03 2022-01-11 山东师范大学 Joint optimization method and system for mobile edge calculation
CN113950103B (en) * 2021-09-10 2022-11-04 西安电子科技大学 Multi-server complete computing unloading method and system under mobile edge environment
CN113950103A (en) * 2021-09-10 2022-01-18 西安电子科技大学 Multi-server complete computing unloading method and system under mobile edge environment
CN113852692B (en) * 2021-09-24 2024-01-30 中国移动通信集团陕西有限公司 Service determination method, device, equipment and computer storage medium
CN113852692A (en) * 2021-09-24 2021-12-28 中国移动通信集团陕西有限公司 Service determination method, device, equipment and computer storage medium
CN113986486A (en) * 2021-10-15 2022-01-28 东华大学 Joint optimization method for data caching and task scheduling in edge environment
CN113965961B (en) * 2021-10-27 2024-04-09 中国科学院计算技术研究所 Edge computing task unloading method and system in Internet of vehicles environment
CN113965961A (en) * 2021-10-27 2022-01-21 中国科学院计算技术研究所 Method and system for unloading edge computing tasks in Internet of vehicles environment
CN113900739A (en) * 2021-10-27 2022-01-07 大连理工大学 Calculation unloading method and system under many-to-many edge calculation scene
CN114461299A (en) * 2022-01-26 2022-05-10 中国联合网络通信集团有限公司 Unloading decision determining method and device, electronic equipment and storage medium
CN114461299B (en) * 2022-01-26 2023-06-06 中国联合网络通信集团有限公司 Unloading decision determining method and device, electronic equipment and storage medium
CN114745396B (en) * 2022-04-12 2024-03-08 广东技术师范大学 Multi-agent-based end edge cloud 3C resource joint optimization method
CN114745396A (en) * 2022-04-12 2022-07-12 广东技术师范大学 Multi-agent-based end edge cloud 3C resource joint optimization method
CN114928862A (en) * 2022-05-12 2022-08-19 湖南大学 Method and system for reducing system overhead based on task unloading and service caching
CN115022188A (en) * 2022-05-27 2022-09-06 国网经济技术研究院有限公司 Container placement method and system in power edge cloud computing network
CN115022188B (en) * 2022-05-27 2024-01-09 国网经济技术研究院有限公司 Container placement method and system in electric power edge cloud computing network
CN114860345B (en) * 2022-05-31 2023-09-08 南京邮电大学 Calculation unloading method based on cache assistance in smart home scene
CN114860345A (en) * 2022-05-31 2022-08-05 南京邮电大学 Cache-assisted calculation unloading method in smart home scene
CN115174566B (en) * 2022-06-08 2024-03-15 之江实验室 Edge computing task unloading method based on deep reinforcement learning
CN115174566A (en) * 2022-06-08 2022-10-11 之江实验室 Edge calculation task unloading method based on deep reinforcement learning
CN115051998A (en) * 2022-06-09 2022-09-13 电子科技大学 Adaptive edge computing offloading method, apparatus and computer-readable storage medium
CN115297013A (en) * 2022-08-04 2022-11-04 重庆大学 Task unloading and service cache joint optimization method based on edge cooperation
CN115297013B (en) * 2022-08-04 2023-11-28 重庆大学 Task unloading and service cache joint optimization method based on edge collaboration
CN115484314B (en) * 2022-08-10 2024-04-02 重庆大学 Edge cache optimization method for recommending enabling under mobile edge computing network
CN115484314A (en) * 2022-08-10 2022-12-16 重庆大学 Edge cache optimization method for recommending performance under mobile edge computing network
CN115766241A (en) * 2022-11-21 2023-03-07 西安工程大学 Distributed intrusion detection system task scheduling and unloading method based on DQN algorithm
CN116320354A (en) * 2023-01-16 2023-06-23 浙江大学 360-degree virtual reality video user access control system and control method
CN116320354B (en) * 2023-01-16 2023-09-29 浙江大学 360-degree virtual reality video user access control system and control method
CN117042051B (en) * 2023-08-29 2024-03-08 燕山大学 Task unloading strategy generation method, system, equipment and medium in Internet of vehicles
CN117042051A (en) * 2023-08-29 2023-11-10 燕山大学 Task unloading strategy generation method, system, equipment and medium in Internet of vehicles
CN116865842A (en) * 2023-09-05 2023-10-10 武汉能钠智能装备技术股份有限公司 Resource allocation system and method for communication multiple access edge computing server
CN116865842B (en) * 2023-09-05 2023-11-28 武汉能钠智能装备技术股份有限公司 Resource allocation system and method for communication multiple access edge computing server
CN117785332A (en) * 2024-02-28 2024-03-29 国维技术有限公司 Virtual three-dimensional space dynamic resource loading and releasing method
CN117785332B (en) * 2024-02-28 2024-05-28 国维技术有限公司 Virtual three-dimensional space dynamic resource loading and releasing method
CN118377556A (en) * 2024-06-21 2024-07-23 安徽大学 Intelligent warehouse task unloading method based on mobile edge calculation
CN118377556B (en) * 2024-06-21 2024-09-03 安徽大学 Intelligent warehouse task unloading method based on mobile edge calculation

Also Published As

Publication number Publication date
CN112860350B (en) 2022-06-03

Similar Documents

Publication Publication Date Title
CN112860350B (en) Task cache-based computation unloading method in edge computation
Nath et al. Deep reinforcement learning for dynamic computation offloading and resource allocation in cache-assisted mobile edge computing systems
CN113950066B (en) Single server part calculation unloading method, system and equipment under mobile edge environment
CN111800828B (en) Mobile edge computing resource allocation method for ultra-dense network
Chen et al. Dynamic task offloading for internet of things in mobile edge computing via deep reinforcement learning
CN109639760B (en) It is a kind of based on deeply study D2D network in cache policy method
CN113242568A (en) Task unloading and resource allocation method in uncertain network environment
CN113055489B (en) Implementation method of satellite-ground converged network resource allocation strategy based on Q learning
CN111757354A (en) Multi-user slicing resource allocation method based on competitive game
CN111262944B (en) Method and system for hierarchical task offloading in heterogeneous mobile edge computing network
CN110401936A (en) A kind of task unloading and resource allocation methods based on D2D communication
CN111488528A (en) Content cache management method and device and electronic equipment
CN116260871A (en) Independent task unloading method based on local and edge collaborative caching
CN113590279A (en) Task scheduling and resource allocation method for multi-core edge computing server
WO2024174426A1 (en) Task offloading and resource allocation method based on mobile edge computing
CN113573363A (en) MEC calculation unloading and resource allocation method based on deep reinforcement learning
CN113946423A (en) Multi-task edge computing scheduling optimization method based on graph attention network
Zhang et al. A deep reinforcement learning approach for online computation offloading in mobile edge computing
CN118250750B (en) Satellite edge computing task unloading and resource allocation method based on deep reinforcement learning
Li et al. Computation offloading and service allocation in mobile edge computing
CN115396953A (en) Calculation unloading method based on improved particle swarm optimization algorithm in mobile edge calculation
CN115499441A (en) Deep reinforcement learning-based edge computing task unloading method in ultra-dense network
Zhou et al. Recommendation-driven multi-cell cooperative caching: A multi-agent reinforcement learning approach
Wang et al. Resource allocation based on Radio Intelligence Controller for Open RAN towards 6G
Han et al. Multi-step reinforcement learning-based offloading for vehicle edge computing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231212

Address after: 430070 Hubei Province, Wuhan city Hongshan District Luoyu Road No. 546

Patentee after: HUBEI CENTRAL CHINA TECHNOLOGY DEVELOPMENT OF ELECTRIC POWER Co.,Ltd.

Address before: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee before: Dragon totem Technology (Hefei) Co.,Ltd.

Effective date of registration: 20231212

Address after: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Dragon totem Technology (Hefei) Co.,Ltd.

Address before: 541004 No. 15 Yucai Road, Qixing District, Guilin, the Guangxi Zhuang Autonomous Region

Patentee before: Guangxi Normal University