CN114860345B - Calculation unloading method based on cache assistance in smart home scene - Google Patents


Info

Publication number: CN114860345B (application CN202210607144.8A)
Authority: CN (China)
Prior art keywords: edge server, task, subtask, cache, cached
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN114860345A
Inventors: 宋巧凤, 王珺, 刘家豪
Current assignee: Nanjing University of Posts and Telecommunications
Original assignee: Nanjing University of Posts and Telecommunications
Events:
• Application filed by Nanjing University of Posts and Telecommunications
• Priority to CN202210607144.8A
• Publication of CN114860345A
• Application granted
• Publication of CN114860345B
• Legal status: Active
• Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/445: Program loading or initiating
    • G06F 9/44594: Unloading
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 2209/00: Indexing scheme relating to G06F9/00
    • G06F 2209/50: Indexing scheme relating to G06F9/50
    • G06F 2209/502: Proximity
    • G06F 2209/509: Offload

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention provides a cache-assisted computation offloading method for smart home scenarios, comprising the following steps: establish a service system of a local edge server, off-site edge servers, and a central cloud server, and build a computation offloading model; check the cache information table of each terminal device and judge whether the computation result of the current task is cached; if not, calculate the cache value of the current task's result, decide by ranking whether to cache it, and divide each task whose result is not cached into several subtasks; check the cache information table of the local edge server and judge whether the related data of the current subtask is cached; if not, calculate the cache value of the current subtask and decide by ranking whether to cache it; the local edge server makes the offloading decision for the subtasks, which are transmitted to the local edge server, the off-site edge servers, and the central cloud server, respectively, for computation; finally, the task's computation result is fed back to the terminal device.

Description

Calculation unloading method based on cache assistance in smart home scene
Technical Field
The invention relates to mobile edge computing, and in particular to a cache-assisted computation offloading method for smart home scenarios.
Background
With the development of intelligent mobile devices and the advent of the Internet of Things, many new mobile applications such as virtual reality and augmented reality demand rich computing resources while also requiring low latency. A resource-constrained user device can offload computation-intensive tasks to a resource-rich remote cloud for processing. However, because of the long distance between the remote cloud and the user, offloading tasks to the remote cloud introduces larger communication overhead and may fail to meet the low-latency requirements of these new applications. To address this problem, researchers have proposed Mobile Edge Computing (MEC) and fog computing, in which a large number of small servers are deployed in the vicinity of terminal devices. A terminal device can access an edge server over the wireless network and offload complex computation tasks to it; since the distance between the edge server and the terminal device is comparatively short, the transmission delay caused by offloading is reduced.
Edge computing research focuses on computation offloading and caching, but most work studies computation offloading in isolation; only a small portion considers offloading and caching jointly, and research that uses task caching as an auxiliary means of reducing offloading cost is comparatively scarce. In edge computing, data transmission time strongly affects task response time, and caching can shorten it, so cache-assisted computation offloading strategies are worth investigating. In smart home scenarios in particular (e.g., smart door locks, robot vacuums, smart curtains), terminal devices issue repeated content requests in different time slots, and different devices may share part of the same related data. Many unnecessary computation and transmission costs therefore remain between the mobile devices and the edge server.
Some prior work uses task caching as an auxiliary means of reducing computation offloading cost. Part of it caches the computation results of tasks, but when identical tasks rarely recur, the optimization effect of such a policy is negligible. Other work caches the related data of tasks (the execution code and raw data), but the related data of most tasks occupy large cache resources, so much of the data cannot be cached; moreover, different tasks do not share all of their data, so directly caching all of a task's data yields low utilization and limited optimization benefit.
Disclosure of Invention
The invention aims to provide a cache-assisted computation offloading method for smart home scenarios that jointly considers cooperatively caching the computation results of tasks and the related data of subtasks obtained after task division, designs a reasonable caching mechanism, reduces task response delay, and lowers the total system cost.
A cache-assisted computation offloading method in a smart home scenario comprises the following steps:
establishing an edge server cluster comprising a local edge server and off-site edge servers, establishing a service system of the edge server cluster and a central cloud server, and building a cache-assisted computation offloading model for the smart home scenario comprising a system model, a communication model, and a computation model;
checking the cache information table of each smart home terminal and judging whether the computation result of the current task is cached; if not, calculating the cache value of the current task's result, deciding by ranking whether to cache it, and dividing each task whose result is not cached into several subtasks for subsequent offloading;
checking the cache information table of the local edge server and judging whether the related data of the current subtask is cached; if cached, computing the subtask directly on the edge server holding the data; if not, calculating the cache value of the current subtask's related data and deciding by ranking whether to cache it; transmitting the subtasks to the local edge server, which makes the offloading decision for each subtask;
according to the offloading decision, transmitting the subtasks to the local edge server, the off-site edge servers, and the central cloud server, respectively, for computation;
and feeding the computation result of the task back to the smart home terminal device.
Further, the constraint on computing at the local edge server, the off-site edge servers, and the central cloud server is x_k + Σ_{m∈M} y_{k,m} + z_k = 1, so that each subtask is executed at exactly one location.
x_k = 1 means the subtask is processed at the local edge server; execution then incurs a computation delay T_k^{LE} = W_k / f^{LE} and energy consumption E_k^{LE} = e^{LE} W_k.
y_{k,m} = 1 indicates the subtask is processed on off-site edge server m; execution then incurs a computation delay T_k^{RE} = W_k / f^{RE_m} and energy consumption E_k^{RE} = e^{RE_m} W_k, plus a transmission delay T_k^{tr} = D_k^{in} / V^{UE,LE} + D_k^{in} / V_c and energy consumption E_k^{tr} = (γ^{UE,LE} + γ^{LE,RE_m}) D_k^{in} for delivering the task to the off-site edge server, where the uplink rate is V^{UE,LE} = B^{UE,LE} log_2(1 + p^{UE} G^{UE,LE} / σ²).
Here γ^{UE,LE} and γ^{LE,RE_m} denote the energy consumed to transmit a unit of task data from the terminal device to the local edge server and from the local edge server to off-site edge server m, respectively; B^{UE,LE} is the bandwidth between the user and the local edge server, p^{UE} is the user's transmit power, G^{UE,LE} is the channel gain between the user and the local edge server, σ² is the Gaussian noise power of the transmission channel, and D_k^{in} is the input data size of the task.
z_k = 1 indicates the task is offloaded to the central cloud server; execution then incurs a computation delay T_k^{CC} = W_k / f^{CC} and energy consumption E_k^{CC} = e^{CC} W_k, plus a transmission delay D_k^{in} / V^{UE,LE} + D_k^{in} / V^{LE,CC} and energy consumption (γ^{UE,LE} + γ^{LE,CC}) D_k^{in} for delivering the task to the central cloud server.
f^{CC} denotes the CPU computing capability of the central cloud server, e^{CC} the energy consumed by the central cloud server to compute a unit task, and γ^{LE,CC} the energy consumed to transmit a unit of task data from the local edge server to the central cloud server.
Further, the local edge server, the off-site edge servers, and the central cloud server optimize the objective function through a deep reinforcement learning algorithm based on the deep deterministic policy gradient:
min Σ_{n=1}^{N} A_n C_n, where C_n = β T_n + (1 - β) E_n,
A_n is the weight factor of the nth smart home terminal device, β and (1 - β) weight the delay and the energy consumption respectively, and the following condition is satisfied:
0 ≤ β ≤ 1.
Further, the cache value λ_n of a task is calculated as
λ_n = P_n · W_n,
where P_n is the popularity of the task, which follows Zipf's law: P_n = (1/n^α) / Σ_{i=1}^{N} (1/i^α); N is the number of tasks generated by a device within a given period, α is a constant, n is the task index, and W_n is the number of CPU cycles required to compute the task.
Further, each buffer task is divided into a plurality of subtasks, the subtasks with strong correlation are divided into task unloading clusters according to a graph reconstruction algorithm, the tasks in each task unloading cluster form directed acyclic graphs, all the directed acyclic graphs are ordered through topology, and all the directed acyclic graphs are ordered through topology.
Further, each sorted directed acyclic graph is partitioned into subtask offloading clusters, and the subtasks within one cluster are transmitted to the same edge server for processing.
Further, the cache value λ_k of a subtask's related data is calculated as λ_k = J_k / D_k, where J_k is the number of times the data is used and D_k is the input data size of the task.
Compared with the prior art, the invention has the following advantages: (1) the proposed caching of task computation results is combined with cooperative caching of subtask-related data: the edge servers form a cluster by geographic position, each server in the cluster caches different subtask-related data, and the servers are interconnected and share cached data, which fully utilizes the cache resources of the terminal devices and edge servers, raises the cache hit rate, and reduces the computation and transmission delays caused by task offloading; (2) the task caching mechanism designs different cache cost functions based on the tasks' attributes and the cached content (task computation results and subtask-related data) and uses them to update the cache, improving cache utilization and lowering the total system cost; (3) the divided subtasks are ordered via graph reconstruction and graph partitioning algorithms to obtain an execution order, avoiding time spent waiting for predecessor tasks to complete; (4) the deep reinforcement learning algorithm based on DDPG (deep deterministic policy gradient) suits a constantly changing dynamic environment, handles high-dimensional continuous state and action spaces well, and converges well.
The invention is further described below with reference to the drawings.
Drawings
FIG. 1 is a schematic flow chart of the method of the invention.
Fig. 2 is a system architecture diagram of an embodiment.
Fig. 3 is a server profile of an embodiment.
FIG. 4 is a cache management mechanism diagram of task data.
Fig. 5 is a schematic diagram of a reconstruction algorithm.
Fig. 6 is a schematic diagram of a graph segmentation algorithm.
Detailed Description
With reference to Figs. 2, 3, and 4, this embodiment establishes a service system comprising a local edge server (LE), off-site edge servers (RE), and a central cloud. Within a circular area centered on the local edge server, the surrounding edge servers act as its off-site edge servers; together they form an edge server cluster, indexed M = {1, 2, 3, ..., M}. The local edge server communicates with the corresponding smart home terminal devices (UE). Connections can be established between the local edge server and each off-site edge server, and the local edge server is also connected to the central cloud. The local edge server hosts a cache control server that maintains a cache information table recording which subtask-related data each edge server (local and off-site) in the cluster has cached. Each edge server in the cluster has cache and computing resources, with Q_m denoting the cache capacity of edge server m.
Referring to fig. 1, a calculation unloading method based on cache assistance in an intelligent home scene based on the service system includes the following steps:
step S100, establishing a cache auxiliary computing unloading model in an intelligent home scene comprising a system model, a communication model and a computing model;
step S200, obtaining from the device cache the cache state of the requested task's computation result;
step S300, dividing the task without caching the calculation result by a graph reconstruction and graph segmentation algorithm to obtain an execution sequence of the subtasks;
step S400, obtaining the buffer status of the subtask related data;
step S500, the local edge server makes an unloading decision of the subtask;
step S600, solving for the optimal offloading and caching decisions. The optimization problem posed in this embodiment is a non-convex NP-hard problem, which the invention solves with a deep reinforcement learning algorithm based on DDPG (deep deterministic policy gradient).
Specifically, the system model in step S100 is as follows: each smart home application scenario contains N smart home terminal devices, denoted N = {1, 2, 3, ..., N}; each device has a small cache resource C_n and generates exactly one task per time slot. The task generated in each time slot is a complete smart terminal request task; if it needs to be computed, it is divided into subtasks for subsequent computation and offloading. As shown in Fig. 2, each smart home application scenario establishes communication with a local edge server.
The communication model in step S100 covers signal transmission between the smart home terminal devices and the local edge server, between the local edge server and the off-site edge servers, and between the local edge server and the central cloud. Data between the terminal devices and the local edge server travels over a wireless medium (e.g., 4G, 5G, Wi-Fi, Bluetooth), so the transmission rate is strongly affected by the environment. The local and off-site edge servers have identical system configurations and are connected by wired optical cable; the link rate between them, V^{LE,RE_m} = V_c, is a fixed value determined when the equipment is installed.
The computation model in step S100 models the system cost as the weighted sum of the delay and energy consumption of the task generated by each smart home terminal device. The attributes of each task comprise the CPU cycles W_n required to compute it and the data size D_n^{out} of its output result, so the task of the nth smart home terminal is represented by the tuple (D_n^{out}, W_n). Each task can be divided into K subtasks, denoted K = {1, 2, 3, ..., K}; each subtask k has two attributes, its input data size D_k^{in} (in bytes) and the CPU cycles W_k required to compute it, represented by the tuple (D_k^{in}, W_k). The delay and energy consumption produced while offloading the subtasks follow from the offloading and caching decisions.
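A minimal sketch of the task and subtask attributes and the weighted delay-energy cost described above (class and function names are assumed for illustration; the patent only defines the tuples and the weighted sum):

```python
from dataclasses import dataclass

@dataclass
class Task:
    d_out: float  # output-result data size D_n^out (bytes)
    w: float      # CPU cycles W_n required to compute the task

@dataclass
class Subtask:
    d_in: float   # input data size D_k^in (bytes)
    w: float      # CPU cycles W_k required to compute the subtask

def weighted_cost(delay: float, energy: float, beta: float = 0.5) -> float:
    """System cost of one task: beta * delay + (1 - beta) * energy."""
    assert 0.0 <= beta <= 1.0
    return beta * delay + (1.0 - beta) * energy

print(weighted_cost(2.0, 4.0, beta=0.25))  # 0.25*2 + 0.75*4 = 3.5
```

A larger β emphasizes response delay; a smaller β emphasizes energy consumption, matching the 0 ≤ β ≤ 1 constraint in the objective.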
The specific process of step S200 includes:
step S201, reading the cache information table on the smart home terminal device and judging whether the computation result of the requested task is cached there; the task cache state on the device is denoted A_n. If A_n = 1, the task's computation result is cached on the device, which obtains it directly without computation or transmission; the task's processing delay T_n and energy consumption E_n are then zero, so the task cost (the optimization target) is zero and the computation offloading procedure for this task ends. If A_n = 0, the computation result of the task is not cached, and step S202 is performed;
step S202, calculating the cache value of the task according to formula (1):
λ_n = P_n · W_n (1)
where P_n is the popularity of the task, which follows Zipf's law: P_n = (1/n^α) / Σ_{i=1}^{N} (1/i^α), α is a constant, N is the number of tasks generated by the device within a given period, and W_n is the number of CPU cycles required to compute the task. The higher the popularity P_n and the larger the required CPU cycles W_n, the higher the task's cache value (a task can be cached when its value exceeds the minimum value among already-cached tasks); then go to step S203;
step S203, making the caching decision for the tasks: the tasks are sorted in descending order of cache value, and the smart home terminal device caches the tasks' computation results in order from high value to low, together with each task's data identifier (used to mark the task), until the device's cache resources are exhausted.
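Steps S202 and S203 can be sketched as a greedy, value-ranked caching routine. This is a hypothetical helper, not the patent's code; it assumes the task index doubles as its Zipf popularity rank:

```python
def zipf_popularity(n: int, N: int, alpha: float = 0.8) -> float:
    """P_n under Zipf's law: (1/n^alpha) normalized over the N tasks."""
    norm = sum(1.0 / i ** alpha for i in range(1, N + 1))
    return (1.0 / n ** alpha) / norm

def cache_decision(tasks, capacity):
    """Greedy device-side caching per S203: sort by cache value
    lambda_n = P_n * W_n descending and cache results until the device's
    cache capacity is exhausted.
    tasks: list of (rank, result_size, cpu_cycles); returns cached ranks."""
    N = len(tasks)
    ranked = sorted(tasks,
                    key=lambda t: zipf_popularity(t[0], N) * t[2],
                    reverse=True)
    cached, used = [], 0.0
    for rank, size, _w in ranked:
        if used + size <= capacity:
            cached.append(rank)
            used += size
    return cached
```

For example, with three equally sized results whose CPU-cycle counts grow faster than their popularity decays, the high-W tasks win the ranking and are cached first.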
The specific process of step S300 includes:
step S301, dividing the complete task into K subtasks and grouping the strongly correlated subtasks into task offloading clusters via a graph reconstruction algorithm; as shown in Fig. 5, the clusters are mutually independent and can be processed in parallel. The K subtasks form a plurality of directed acyclic graphs; two subtasks are correlated if the completion of one depends on the completion of the other;
step S302, ordering all directed acyclic graphs through topology, and ensuring that task nodes arranged in front are not dependent on nodes arranged in back;
step S303, further dividing the ordered directed acyclic graph into smaller subtask unloading clusters through a graph dividing algorithm, as shown in FIG. 6; these subtask offload clusters can also be processed in parallel, with the subtasks in the subtask offload clusters being transmitted to the same edge server for processing.
Referring to Fig. 6, the principle of the graph partitioning algorithm in step S303 is as follows: for a graph G = (V, E), if E can be divided into two non-empty subsets E_1 and E_2 such that G[E_1] and G[E_2] share one and only one common vertex v, then v is a cut point of G. In the topologically sorted task graph, the complete graph carries a large amount of node data while the subgraphs to the left and right of node T4 are internally strongly connected, so the graph can be cut into two subgraphs using T4 as the cut point.
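The topological ordering of step S302 can be sketched with Kahn's algorithm. The node names loosely follow the T4 cut-point example in the text; the graph itself is assumed for illustration:

```python
from collections import defaultdict, deque

def topo_sort(edges, nodes):
    """Kahn's algorithm: order subtasks so that no node is scheduled
    before the subtasks it depends on."""
    indeg = {v: 0 for v in nodes}
    adj = defaultdict(list)
    for u, v in edges:        # edge u -> v: v depends on u
        adj[u].append(v)
        indeg[v] += 1
    q = deque(v for v in nodes if indeg[v] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return order

# Two halves sharing only T4 (the cut point) can then be split into
# {T1..T4} and {T4..T7} and offloaded to different servers.
edges = [("T1", "T4"), ("T2", "T4"), ("T3", "T4"),
         ("T4", "T5"), ("T4", "T6"), ("T5", "T7"), ("T6", "T7")]
nodes = ["T1", "T2", "T3", "T4", "T5", "T6", "T7"]
print(topo_sort(edges, nodes))  # T1..T3 first, then T4, then T5/T6, then T7
```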
The specific process of step S400 is:
step S401, reading the cache information table and judging whether the subtask-related data is cached on the edge server cluster; the cache state of subtask k's related data on server m is denoted B_{k,m}. If B_{k,m} = 1, the data is cached on the cluster: the subtask is computed directly by the edge server holding the data without transmitting it, so offloading incurs only computation delay and the corresponding energy consumption, with no transmission delay or energy cost. In particular, m = 0 indicates the data is cached on the local edge server and the subtask is processed there directly, where m indexes the edge servers in the cluster. If B_{k,m} = 0, the subtask's related data is not cached on the cluster, and step S402 is performed;
step S402, calculating the cache value λ_k of the subtask-related data via formula (2):
λ_k = J_k / D_k (2)
where J_k is the number of times the data is used and D_k is the input data size of the task; the more often the data is used and the smaller the task's input data, the higher its cache value;
step S403, making a buffer decision of the subtask related data;
and step S404, transmitting the subtasks to a local edge server by the intelligent home terminal equipment, and cooperatively processing the subtasks by the local edge server, the off-site edge server and the central cloud. The local edge server makes an offloading decision of the subtasks, and at this time, a task transmission delay and energy consumption are generated.
In conjunction with fig. 3 and 4, in step S403, the edge servers cooperatively cache the subtask related data, so as to ensure that the content of the subtask data cached by each edge server in the cluster is different. The method comprises the following specific steps:
step S4031, dividing an edge server cluster within a certain distance range by taking a local edge server as a circle center based on the geographic position;
step S4032, when the subtasks reach the local edge server, the cache control server performs descending order sequencing on the subtasks according to the cache cost function, and caches the subtask related data in the edge servers in the edge server cluster in sequence from high to low;
in step S4033, subtask-related data is cached starting from the local edge server; when a server's cache resources are full, the data is cached on the off-site edge server nearest the local edge server, and so on in order until the cache resources of all edge servers in the cluster are exhausted.
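Steps S4031 to S4033 can be sketched as a greedy cooperative placement. This is a hypothetical helper under the assumption that server 0 is the local edge server, followed by off-site servers in order of distance:

```python
def cooperative_cache(subtasks, servers):
    """Rank subtask data by cache value lambda_k = J_k / D_k, then fill the
    local server first and spill to the nearest off-site servers until all
    cache space in the cluster is used.
    subtasks: list of (subtask_id, uses J_k, data size D_k);
    servers:  list of cache capacities, index 0 = local edge server."""
    placement = {i: [] for i in range(len(servers))}
    free = list(servers)
    ranked = sorted(subtasks, key=lambda s: s[1] / s[2], reverse=True)
    for sid, _uses, size in ranked:
        for i in range(len(free)):      # nearest server with room
            if size <= free[i]:
                placement[i].append(sid)
                free[i] -= size
                break                   # each datum cached on exactly one server
    return placement
```

Because each datum is placed on at most one server, every server in the cluster ends up caching different content, as the text requires.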
The controller monitors the cache table in real time; whenever a task offloading request arrives, the cache table can be consulted to see which edge servers have cached which content, and the corresponding offloading and caching decisions are then made. The cache management mechanism is shown in Fig. 4.
The specific process of step S500 includes:
in step S501, the local edge server makes the cooperative offloading decision for subtasks whose related data is not cached. The offloading decision for the kth subtask uses x_k ∈ {0, 1}, y_{k,m} ∈ {0, 1}, z_k ∈ {0, 1}: the subtask may be processed at the local edge server, offloaded to an off-site edge server, or processed by the central cloud server;
Step S502, setting the offloading decision X_k = {x_k, y_{k,1}, ..., y_{k,M}, z_k}. According to this decision, the local edge server either transmits the subtask to the corresponding server for processing or processes it locally, generating the corresponding transmission and computation delays and energy consumption, specifically:
x_k = 1 means the subtask is processed at the local edge server; execution then incurs a computation delay T_k^{LE} = W_k / f^{LE} and energy consumption E_k^{LE} = e^{LE} W_k, the computation delay being determined by the computing capability f^{LE} of the local edge server;
y_{k,m} = 1 indicates the task is offloaded to off-site edge server m; execution then incurs a computation delay T_k^{RE} = W_k / f^{RE_m} and energy consumption E_k^{RE} = e^{RE_m} W_k, determined by the computing capability of the off-site edge server, plus a transmission delay T_k^{tr} = D_k^{in} / V^{UE,LE} + D_k^{in} / V_c and energy consumption E_k^{tr} = (γ^{UE,LE} + γ^{LE,RE_m}) D_k^{in} for delivering the task to the off-site edge server, where the uplink rate is V^{UE,LE} = B^{UE,LE} log_2(1 + p^{UE} G^{UE,LE} / σ²);
here γ^{UE,LE} and γ^{LE,RE_m} denote the energy consumed to transmit a unit of task data from the terminal device to the local edge server and from the local edge server to off-site edge server m, respectively; B^{UE,LE} is the bandwidth between the user and the local edge server, p^{UE} is the user's transmit power, G^{UE,LE} is the channel gain between the user and the local edge server, and σ² is the Gaussian noise power of the transmission channel;
z_k = 1 indicates the task is offloaded to the central cloud server; execution then incurs a computation delay T_k^{CC} = W_k / f^{CC} and energy consumption E_k^{CC} = e^{CC} W_k, determined by the computing capability of the central cloud server, plus a transmission delay D_k^{in} / V^{UE,LE} + D_k^{in} / V^{LE,CC} and energy consumption (γ^{UE,LE} + γ^{LE,CC}) D_k^{in} for delivering the task to the central cloud server;
f^{CC} denotes the CPU computing capability of the central cloud server, e^{CC} the energy consumed by the central cloud server to compute a unit task, and γ^{LE,CC} the energy consumed to transmit a unit of task data from the local edge server to the central cloud server.
The specific process of step S600 includes:
step S601, obtaining the optimization objective: in the smart home scenario, the computation offloading and caching strategies are jointly optimized to minimize the system cost under the cache-resource constraints of the terminal devices and edge servers and the MEC computing-capacity constraint. The optimization objective can be expressed as:
min Σ_{n=1}^{N} A_n C_n, where C_n = β T_n + (1 - β) E_n,
A_n is the weight factor of the nth smart home terminal device, β and (1 - β) weight the delay and energy consumption respectively, and
0 ≤ β ≤ 1.
The constraints are: each subtask must be executed on exactly one edge server or the central cloud and cannot be divided again; the content cached on a device or edge server cannot exceed its cache capacity; and the rate at which subtasks are offloaded from the terminal device to the local edge server must not exceed the sum of the rates at which the local edge server forwards subtasks to the off-site edge servers and the central cloud server, so that no task backlog forms at the local edge server;
s602, acquiring a state space, an action space set and a reward function as follows:
(1) State space: s_t = {A_{t-1}, T_t, B_{t-1}, sT_t}
where A_{t-1} and B_{t-1} are the caching decisions of the terminal device and the local edge server in the previous time slot (they also represent the system's current cache state), and T_t and sT_t are, respectively, the vector of tasks requested on the terminal devices at the current moment and the vector of divided subtasks waiting to be processed on the local edge server.
(2) Action space: a_t = {A_t, B_t, X_t}
where A_t and B_t are the caching decisions of the terminal device and the local edge server, and X_t is the offloading decision made by the local edge server.
(3) Reward function: after an action is taken in the current state, the environment feeds back a reward, expressed as:
r_t = R(s_t, a_t) = -C_t
where C_t is the total system cost at time slot t; the smaller the system cost, the greater the reward, which matches the goal of minimizing the total system cost.
S603, carrying out strategy iteration and completing problem solving:
(1) Policy iteration: in reinforcement learning methods such as Q-learning, the policy iteration (value update) function is:
Q(s_t, a_t) ← Q(s_t, a_t) + δ[r_t + γ·max_{a'} Q(s_{t+1}, a') − Q(s_t, a_t)]
where δ is the learning rate and γ (0 ≤ γ ≤ 1) is the discount factor.
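The tabular form of this update can be sketched as follows (a generic Q-learning step, not the patent's exact implementation; state and action labels are illustrative):

```python
def q_update(Q, s, a, r, s_next, actions, delta=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + delta * (r + gamma * max_a' Q(s',a') - Q(s,a)),
    with learning rate delta and discount factor gamma (0 <= gamma <= 1)."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + delta * (r + gamma * best_next - old)
    return Q[(s, a)]

Q = {}
# Cost-based reward: an offloading cost of 4 gives reward -4 (r_t = -C_t).
print(q_update(Q, "s0", "offload", -4.0, "s1", ["offload", "local"]))  # -0.4
```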
(2) Concrete implementation: the invention adopts an Actor-Critic approach with two independent DNNs, where the critic network Q(s_t, a_t | θ^Q) approximates the Q function and the actor network μ(s_t | θ^μ) updates the policy function; θ^Q and θ^μ denote the weight parameters of the two neural networks. The policy is updated with the deterministic policy gradient:
∇_{θ^μ} J ≈ E[∇_a Q(s, a | θ^Q)|_{a=μ(s)} · ∇_{θ^μ} μ(s | θ^μ)]
In the proposed DDPG framework, the actor and critic networks are both four-layer fully connected neural networks with two hidden layers of 8N and 6N neurons, respectively. The hidden layers use the ReLU activation function, and the final output layer of the actor network uses a sigmoid layer to bound the actions. The adaptive moment estimation (Adam) method is used to update the neural network parameters.
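As a sketch of the actor architecture just described (a pure-Python forward pass only; the weights are random and illustrative, and there is no training loop or critic):

```python
import math
import random

def relu(v):
    return [max(0.0, x) for x in v]

def sigmoid(v):
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

def dense(x, W, b):
    # Plain fully connected layer: y = W x + b.
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def actor_forward(state, params):
    """Four-layer fully connected actor: two ReLU hidden layers
    (8N and 6N neurons for N devices), sigmoid output bounding actions to [0, 1]."""
    h1 = relu(dense(state, *params[0]))
    h2 = relu(dense(h1, *params[1]))
    return sigmoid(dense(h2, *params[2]))

random.seed(0)
N = 2                          # number of terminal devices (illustrative)
sizes = [N, 8 * N, 6 * N, N]   # input, hidden (8N and 6N), action output
params = [([[random.uniform(-0.1, 0.1) for _ in range(m)] for _ in range(n)],
           [0.0] * n) for m, n in zip(sizes, sizes[1:])]
action = actor_forward([0.5, -0.2], params)
print(all(0.0 < x < 1.0 for x in action))  # True: sigmoid bounds every action
```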
Finally, the computation result of the task is fed back to the smart-home terminal device; because the result data are small, the feedback time is negligible and is not discussed further here. Whether the computation result is cached is determined by the caching decision for the task.
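The communication model in the claims defines the user-to-local-edge transmission rate in terms of the bandwidth, transmit power, channel gain and Gaussian noise power; a Shannon-rate sketch with illustrative numbers (the helper names are assumptions):

```python
import math

def shannon_rate(bandwidth_hz, tx_power, channel_gain, noise_power):
    """Uplink rate R = B * log2(1 + p * G / sigma^2) between user and local edge."""
    return bandwidth_hz * math.log2(1.0 + tx_power * channel_gain / noise_power)

def transmission_cost(data_bits, rate_bps, energy_per_bit):
    """Delay and energy to transmit a subtask's input data over the link."""
    return data_bits / rate_bps, data_bits * energy_per_bit

rate = shannon_rate(1e6, 0.1, 1e-6, 1e-10)   # 1 MHz link, illustrative values
delay, energy = transmission_cost(2e6, rate, 1e-9)
print(rate > 2e6 and delay < 1.0)  # True
```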

Claims (6)

1. The calculation unloading method based on cache assistance in the smart home scene is characterized by comprising the following steps:
establishing an edge server cluster comprising a local edge server and remote edge servers, establishing a service system of the edge server cluster and a central cloud server, and building a cache-assisted computation offloading model for the smart home scene comprising a system model, a communication model and a computation model;
checking the cache information table of each smart-home terminal and judging whether the computation result of the current task is cached; if not, computing the cache value of the current task's result, deciding according to the ranking whether to cache the result, and dividing tasks whose results are not cached into a plurality of subtasks for subsequent offloading;
checking the cache information table of the edge server cluster and judging whether the data related to the current subtask is cached; if cached, computing the subtask directly at the edge server holding the cache; if not cached, computing the cache value of the subtask's related data and deciding according to the ranking whether to cache it; transmitting the subtasks to the local edge server, which makes the offloading decision for each subtask;
according to the offloading decision, transmitting the subtasks to the local edge server, a remote edge server or the central cloud server for computation;
feeding the computation result of the task back to the smart-home terminal device;
the constraints on subtasks transmitted to the local edge server, the remote edge servers and the central cloud server are as follows:
x_k=1 indicates that the subtask is processed at the local edge server; executing the subtask then incurs the computation delay and energy consumption of the local edge server;
y_km=1 indicates that the subtask is processed on remote edge server m; executing the subtask then incurs the computation delay and energy consumption of server m, together with the transmission delay and energy consumption of sending the task to the remote edge server;
where γ_UE,LE and γ_LE,m denote the energy consumed to transmit a unit of task data from the terminal device to the local edge server and from the local edge server to remote edge server m, respectively; B_UE,LE is the bandwidth between the user and the local edge server, p_UE is the transmit power of the user, G_UE,LE is the channel gain between the user and the local edge server, σ² is the Gaussian noise power of the transmission channel, and d_k is the input data size of the task, giving the transmission rate R_UE,LE = B_UE,LE·log2(1 + p_UE·G_UE,LE/σ²);
z_k=1 indicates that the subtask is offloaded to the central cloud server for processing; executing the subtask then incurs the computation delay and energy consumption of the central cloud, together with the transmission delay and energy consumption of sending the task from the local edge server to the central cloud server;
where f_CC denotes the CPU computing capability of the central cloud server, e_CC the energy consumed by the central cloud server to compute a unit task, and γ_LE,CC the energy consumed to transmit a unit of task data from the local edge server to the central cloud server;
the local edge server, the remote edge servers and the central cloud server obtain the computation result by solving an optimization objective with a deep reinforcement learning algorithm based on the deep deterministic policy gradient; the optimization objective is to minimize the weighted sum, over all smart-home terminal devices, of the total time delay and the total energy consumption,
where A_n denotes the weight factor of the n-th smart-home terminal device, β and (1-β) are the respective weights of the time delay and the energy consumption, and 0≤β≤1.
2. The method of claim 1, wherein the cache value λ_n of a task's computation result is calculated by
λ_n = P_n·W_n
where P_n is the popularity of the task, modeled by a Zipf distribution P_n = n^(-α) / Σ_{i=1}^{N} i^(-α), N is the number of tasks generated by a device in a given time period, α is a constant, n is the task index, and W_n is the number of CPU cycles required to compute the task.
3. The method of claim 2, wherein each cached task is divided into a plurality of subtasks, strongly correlated subtasks are grouped into task offload clusters according to a graph reconstruction algorithm, the tasks in each task offload cluster form a directed acyclic graph, and all directed acyclic graphs are topologically sorted.
4. The method of claim 1, wherein whether subtask-related data is cached on the edge server cluster is determined as follows:
the cache state of the subtask-related data on the edge server cluster is denoted B_km;
if B_km=1, the subtask-related data is cached on the edge server cluster; the subtask is computed directly by the edge server holding the cached data without transmitting the subtask, so the offloading process incurs only computation delay and the corresponding energy consumption;
when m=0, the task data is cached on the local edge server and is processed there directly; otherwise m is the index of an edge server in the remote edge server cluster;
if B_km=0, the relevant data for the subtask is not cached in the edge server cluster.
5. The method of claim 3, wherein the sorted directed acyclic graph is partitioned into subtask offload clusters, and the subtasks in a subtask offload cluster are transmitted to the same edge server for processing.
6. The method of claim 2, wherein the cache value λ_k of the subtask-related data is calculated from J_k, the number of times the data is used, and d_k, the input data size of the task.
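The cache-value computation of claim 2 can be sketched as follows; the Zipf normalization is an assumption, since the claim names only N, α and the task index n, and the workloads are illustrative:

```python
def zipf_popularity(num_tasks, alpha):
    """P_n = n^-alpha / sum_{i=1..N} i^-alpha (assumed Zipf normalization)."""
    norm = sum(i ** -alpha for i in range(1, num_tasks + 1))
    return [n ** -alpha / norm for n in range(1, num_tasks + 1)]

def cache_values(cpu_cycles, alpha=0.8):
    """lambda_n = P_n * W_n: popular, compute-heavy results rank highest."""
    P = zipf_popularity(len(cpu_cycles), alpha)
    return [p * w for p, w in zip(P, cpu_cycles)]

vals = cache_values([500, 300, 800])        # W_n in CPU cycles (illustrative)
# Rank tasks by cache value; the top-ranked results are cached first.
ranking = sorted(range(len(vals)), key=lambda i: vals[i], reverse=True)
print(ranking)  # [0, 2, 1]: task 0 is most popular, task 2 compute-heavy
```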
CN202210607144.8A 2022-05-31 2022-05-31 Calculation unloading method based on cache assistance in smart home scene Active CN114860345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210607144.8A CN114860345B (en) 2022-05-31 2022-05-31 Calculation unloading method based on cache assistance in smart home scene


Publications (2)

Publication Number Publication Date
CN114860345A CN114860345A (en) 2022-08-05
CN114860345B true CN114860345B (en) 2023-09-08

Family

ID=82641373


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112860350A (en) * 2021-03-15 2021-05-28 广西师范大学 Task cache-based computation unloading method in edge computation
CN113434212A (en) * 2021-06-24 2021-09-24 北京邮电大学 Cache auxiliary task cooperative unloading and resource allocation method based on meta reinforcement learning
CN113950066A (en) * 2021-09-10 2022-01-18 西安电子科技大学 Single server part calculation unloading method, system and equipment under mobile edge environment
CN114268923A (en) * 2021-12-15 2022-04-01 南京邮电大学 Internet of vehicles task unloading scheduling method and system

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7734726B2 (en) * 2001-11-27 2010-06-08 International Business Machines Corporation System and method for dynamically allocating processing on a network amongst multiple network servers


Non-Patent Citations (1)

Title
Research on Task Offloading in Mobile Edge Computing Based on Deep Reinforcement Learning; Lu Haifeng; Gu Chunhua; Luo Fei; Ding Weichao; Yang Ting; Zheng Shuai; Journal of Computer Research and Development (Issue 07); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant