CN115185660A - Unloading and buffer storage method and system for MAR task in multi-access edge calculation - Google Patents

Unloading and buffer storage method and system for MAR task in multi-access edge calculation

Info

Publication number
CN115185660A
Authority
CN
China
Prior art keywords
cache
task
mar
unloading
subtask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210795740.3A
Other languages
Chinese (zh)
Inventor
翟临博
李玉美
李年新
杨峰
赵景梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Normal University
Original Assignee
Shandong Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Normal University
Priority to CN202210795740.3A
Publication of CN115185660A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061: Partitioning or combining of resources
    • G06F 9/5072: Grid computing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006: Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00: Indexing scheme relating to G06F9/00
    • G06F 2209/48: Indexing scheme relating to G06F9/48
    • G06F 2209/484: Precedence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00: Indexing scheme relating to G06F9/00
    • G06F 2209/48: Indexing scheme relating to G06F9/48
    • G06F 2209/485: Resource constraint
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00: Indexing scheme relating to G06F9/00
    • G06F 2209/50: Indexing scheme relating to G06F9/50
    • G06F 2209/5021: Priority

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention relates to the technical field of task offloading and caching for MAR mobile device tasks, and provides an offloading and cache placement method and system for MAR tasks in multi-access edge computing. The offloading and cache placement method comprises the following steps: dividing the MAR task into a plurality of subtasks; performing priority queuing on all subtasks according to the latest execution time to obtain a priority queue; placing and initializing the cache files in a cache set to obtain a cache placement strategy; initializing the offloading point and the execution point of each subtask according to the priority queue and the cache placement strategy to obtain a task offloading strategy; and optimizing by using a multi-objective artificial bee colony algorithm according to the initially generated cache placement strategy and task offloading strategy. The speed of task offloading and cache placement is thereby improved.

Description

Unloading and buffer storage method and system for MAR task in multi-access edge calculation
Technical Field
The invention belongs to the technical field of task offloading and caching for Mobile Augmented Reality (MAR) device tasks, and particularly relates to an offloading and cache placement method and system for MAR tasks in multi-access edge computing.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
A number of new video applications are emerging, such as augmented reality (AR) and virtual reality (VR). Thanks to their immersive scenes, these new video applications can bring a better experience, and they are therefore applied in many fields such as the Internet of Things, education, and telemedicine. In a central cloud computing system, tasks with heavy computation loads can be offloaded to the central cloud for processing. However, central cloud systems are not sufficient to support these computation-heavy, low-latency applications: the central cloud is far away from the user, its computation load is high, and the quality of service cannot be guaranteed. As a major evolution in 5G communication systems, mobile edge computing (MEC) offers a promising direction for solving these problems with its powerful intelligent storage and computing capabilities. MEC sinks the central cloud computing service to the edge of the core network, closer to where user data is generated. When users request content or offload tasks, they can access the edge servers directly rather than the remote central cloud. This can greatly reduce the backhaul load, the number of users the backhaul must serve, and the transmission link distance. Therefore, MEC can provide high-bandwidth, low-latency network services to users.
Since MAR mobile devices are close to the edge servers, reduced end-to-end delay is a major advantage over cloud-based architectures. Because the servers are deployed at the edge, the AR functionality depends less on infrastructure links than in cloud-based architectures, and the edge servers provide more reliable communication. Content caching is possible due to the localized nature of the information; it reduces end-to-end delay and congestion in the infrastructure network beyond the edge servers. User data is not transmitted through the public network, which ensures safer communication. Finally, an edge-based architecture can support lightweight, power-efficient MAR devices such as wearables, because it supports computation offloading.
However, the current offloading and caching procedures for MAR tasks in multi-access edge computing still have problems. In particular, mobile edge caching mainly exploits the storage resources provided by the mobile edge servers, which can reduce network data traffic and thus shorten the content access delay of users. Some studies are based on the independent operation of a single edge server, whose caching capacity is often very limited, and this degrades the performance of the wireless mobile network in many ways. In addition, designing a caching scheme for each server separately is not only cumbersome but also fails to make full use of the caching resources. To address these challenges, cooperative caching schemes have been proposed to improve network performance; although cooperative caching improves cache utilization over non-cooperative caching, it still has architectural disadvantages. In the research on task offloading, some works consider the queuing state of the application buffer and the idle processors and propose a one-dimensional search algorithm to minimize the task execution delay; other works apply queuing theory to the modeling of edge computing nodes with the goal of minimizing the average task offloading time. These works offload all tasks during the offloading process and ignore the execution capability of the local device. Moreover, the above research on task offloading only considers single-user offloading, whereas multi-user task offloading must account for contention over shared resources, and its scheduling is more complex.
Disclosure of Invention
In order to solve the technical problems in the background art, the invention provides an offloading and cache placement method and system for MAR tasks in multi-access edge computing, in which a multi-objective artificial bee colony algorithm is adopted to optimize the cache placement strategy and the task offloading strategy, improving the speed of task offloading and cache placement.
In order to achieve the purpose, the invention adopts the following technical scheme:
a first aspect of the present invention provides an offload and cache placement method for MAR tasks in multiple access edge computing, comprising:
dividing the MAR task into a plurality of subtasks;
performing priority queuing on all subtasks according to the latest execution time to obtain a priority queue;
placing and initializing cache files in a cache set to obtain a cache placing strategy;
initializing the unloading point and the execution point of each subtask according to the priority queue and the priority queue to obtain a task unloading strategy;
and optimizing by using a multi-objective swarm optimization algorithm according to the cache placement strategy and the task unloading strategy which are generated by initialization.
Further, the subtasks include two types: the first type can only be executed on the local device; the second type can be executed on the local device or on an edge server, and its computation results can be cached on the edge server.
Further, whether the predecessor task of a subtask is completed is judged according to the priority queue;
if the predecessor task is completed and the subtask belongs to the first type, the subtask is directly executed on the local device;
if the predecessor task is completed and the subtask belongs to the second type, the mobile device queries the edge server for the cache file required by the subtask, and if it exists, the cache file is directly transmitted as the result to the execution point of the next subtask; otherwise, the mobile device offloads the subtask to the nearest edge server for execution.
Further, the objectives of the multi-objective artificial bee colony algorithm are as follows: maximizing the hit rate and minimizing the total service delay.
Further, the constraint for maximizing the hit rate includes:
the space occupied by the cache files cached on each edge server cannot exceed the cache space of the edge server itself.
Further, the constraints for minimizing the total service delay include:
the completion time of each subtask cannot exceed its maximum completion time;
the bandwidth resource allocated to each downlink cannot exceed the bandwidth resource of the mobile device downlink;
the bandwidth resources allocated to each uplink cannot exceed the bandwidth resources of the mobile device uplink;
the computational resources allocated to each subtask cannot exceed the total computational resources of the mobile device;
the computational resources allocated to each subtask cannot exceed the total computational resources of the edge server.
Furthermore, an analytic hierarchy process is adopted when the cache files are placed and initialized;
the criterion layer of the analytic hierarchy process considers two factors: the ratio of the size of each cache file to the cache space of the edge server on which it is placed, and the execution time of the subtask when it is offloaded to a certain edge server while the cache file is placed on that server.
A second aspect of the present invention provides an offloading and cache placement system for MAR tasks in multi-access edge computing, comprising:
a task partitioning module configured to: dividing the MAR task into a plurality of subtasks;
a priority queuing module configured to: performing priority queuing on all subtasks according to the latest execution time to obtain a priority queue;
a placement initialization module configured to: placing and initializing the cache files in a cache set to obtain a cache placement strategy;
a task offload initialization module configured to: initializing the offloading point and the execution point of each subtask according to the priority queue and the cache placement strategy to obtain a task offloading strategy;
an optimization module configured to: optimizing by using a multi-objective artificial bee colony algorithm according to the initially generated cache placement strategy and task offloading strategy.
A third aspect of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, performs the steps of the offloading and cache placement method for MAR tasks in multi-access edge computing as described above.
A fourth aspect of the present invention provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the offloading and cache placement method for MAR tasks in multi-access edge computing as described above.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides an unloading and cache placement method of MAR tasks in multi-access edge calculation, which designs two indexes of hit rate and service delay to evaluate the task unloading and cache placement on an edge server, and proposes the problem of task unloading and cache placement by taking the maximum hit rate and the minimum delay as targets under the constraint of edge server calculation resources and cache space; aiming at the problems of task unloading and cache placement, a multi-target artificial bee colony algorithm is adopted; introducing Pareto optimal relation in the optimization process to find an optimal solution; extensive evaluation proves that the algorithm has better performance and the speed of task unloading and cache placement is improved.
Drawings
The accompanying drawings, which constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
Fig. 1 is a flowchart of the offloading and cache placement of the MAR task in multi-access edge computing according to the first embodiment of the present invention;
Fig. 2 is a diagram illustrating the MAR task partitioning according to the first embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit exemplary embodiments according to the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should further be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
Interpretation of terms:
artificial bee colony Algorithm (ABC): an intelligent optimization algorithm for simulating a honey collection process of bees. In the algorithm, the entire colony contains three different bees, namely, employment bees (employer bes), observation bees (onhook bes) and scout bees (scout bes). In implementation, each food source is encoded into a "solution" and given a fitness measure. First, a new solution is generated by each hiring bee based on its corresponding solution (food source) and its "neighbors". If the fitness of the new solution is higher than the original solution, the old solution is replaced by the new solution, otherwise the new solution is discarded, and then each observer bee selects a food source by roulette based on the food source information (i.e., fitness) provided by the hiring bee, and tries to improve the food source using a similar mechanism as the hiring bee. Finally, if a food source is not improved in consecutive "limit" iterations, its corresponding hiring bee is converted into a scout bee. Scout bees will generate a feasible random solution based on the search space. The above process is repeated until the termination condition is satisfied. In the ABC algorithm, the iteration number "limit" is a very important control parameter, which has a large influence on the performance of the algorithm.
Example one
The embodiment provides an offloading and cache placement method for MAR tasks in multi-access edge computing.
As shown in fig. 2, a MAR task can be divided into five subtasks with dependency relationships according to its working characteristics, namely video source acquisition (Video source) z1, tracking (Tracker) z2, mapping (Mapper) z3, object detection (Object Detector) z4, and rendering (Renderer) z5. Video source acquisition and rendering can only be executed on the local device; the other three subtasks can be executed on the local device or on an edge server, and their calculation results can be cached on the edge server.
The subtasks are defined as the set Z = {z1, z2, z3, z4, z5}. Video source acquisition and rendering are the two subtasks that must be executed on the local device; the other three subtasks, which can be executed locally or offloaded to an edge server, are defined as the set Z' = {z2, z3, z4}. In addition, the cache files of the three subtasks of tracking, mapping, and object detection are cached on the edge servers.
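As a sketch, the pipeline and the sets Z and Z' can be written down as plain Python data; the identifier names and the assumption of a simple linear z1 -> z2 -> ... -> z5 precedence chain are an illustrative reading of fig. 2, not definitions taken from the patent.

    # Hypothetical encoding of the five MAR subtasks and their precedence.
    SUBTASKS = ["video_source", "tracker", "mapper", "object_detector", "renderer"]
    Z = set(SUBTASKS)                          # all five subtasks
    LOCAL_ONLY = {"video_source", "renderer"}  # first type: device only
    Z_PRIME = Z - LOCAL_ONLY                   # second type: z2, z3, z4
    # Assumed linear chain: each subtask depends on the one before it.
    PREDECESSOR = {SUBTASKS[i]: SUBTASKS[i - 1] for i in range(1, len(SUBTASKS))}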
In the MEC system, the edge servers work cooperatively. Assume that the I cache files in the cache file library are to be placed and that the MAR subtasks generated in the MEC system are to be offloaded. By optimizing the cache placement strategy and the task offloading strategy, the highest hit rate and the lowest total service delay are pursued.
Therefore, according to the workflow of the MAR mobile devices and the edge (MEC) system, objective functions are designed, the task offloading and cache placement problem is modeled, and the problem is formulated to maximize the hit rate and minimize the total service delay.
To better evaluate whether the cache placement is reasonable, the first objective function P0, namely maximizing the hit rate, is expressed as (the equation images are not reproduced in the source; the recoverable definitions follow):

P0: max P_hit, subject to C1 and C2

where P_hit represents the hit rate in the MEC system; x^{hit1}_{d,z,n} indicates whether the cache request of subtask z of mobile device d hits on the first-connected edge server n; x^{hit2}_{d,z,n,k} indicates that the cache request of subtask z of mobile device d first connects to edge server n but the hit is served after migration to edge server k; y_{d,z,r} indicates whether the cache r corresponding to MAR subtask z of local device d hits, with y_{d,z,r} = 1 for a hit and y_{d,z,r} = 0 for a miss; U represents the total number of requests; c_{n,r} indicates whether cache r is cached on edge server n; R_n represents the set of caches placed on edge server n; s_r represents the size of cache r; and S_n represents the size of the cache space of edge server n. Constraint C1 in the first objective function is the binary caching decision judging whether a cache is cached on an edge server, c_{n,r} ∈ {0, 1}; constraint C2 requires that the space occupied by the cache files cached on each edge server n cannot exceed the cache space of the edge server itself, i.e., Σ_{r ∈ R_n} s_r ≤ S_n.
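The hit-rate objective and the capacity constraint C2 can be sketched directly; the data shapes below (a placement map from server to cached file ids and a request trace of (cache, first-connected-server) pairs) are assumed for illustration and are not the patent's notation.

    # Sketch of the hit rate P_hit and the capacity constraint C2.
    def hit_rate(requests, placement):
        """requests: list of (cache_id, first_server) pairs;
        placement: server -> set of cache ids placed on it."""
        hits = 0
        for cache_id, first_server in requests:
            if cache_id in placement.get(first_server, set()):
                hits += 1      # hit on the first-connected edge server
            elif any(cache_id in cached for cached in placement.values()):
                hits += 1      # hit after migration to a cooperating server
        return hits / len(requests) if requests else 0.0

    def capacity_ok(placement, cache_size, server_space):
        # C2: the files cached on each server n must fit within S_n.
        return all(sum(cache_size[r] for r in cached) <= server_space[n]
                   for n, cached in placement.items())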
The second objective function P1 minimizes the total service delay of the MAR tasks in the system (the equation images are not reproduced in the source; the recoverable definitions follow):

P1: min Σ_d Σ_z T_{d,z}, subject to C1 to C5

Constraint C1 requires that the completion time of each subtask cannot exceed its maximum completion time, T_{d,z} ≤ T^{max}_{d,z}. Constraints C2 and C3 require that the bandwidth resources assigned to the downlinks and uplinks cannot exceed the downlink and uplink bandwidth resources of the mobile device, respectively. Constraints C4 and C5 require that the computing resources assigned to each subtask cannot exceed the total computing resources of the mobile device and of the edge server, respectively.

Here, T_{d,z} represents the completion time (service delay) of MAR subtask z in local device d; T^{max}_{d,z} represents the maximum execution time allowed for MAR subtask z in local device d; Z represents the set of all subtasks z, and Z' represents the set of subtasks z that can be executed on an edge server; F_d represents the computing resources of local device d, and D represents the set of local devices d; F_n represents the computing resources of edge server n, and N represents the set of edge servers n; b^{up}_{d,u} represents the bandwidth resource assigned to uplink u of local device d, U^{up}_d the set of uplinks u of local device d, and B^{up}_d the uplink bandwidth resources of local device d; b^{down}_{d,u}, U^{down}_d, and B^{down}_d are defined analogously for the downlink; f_{d,z} represents the computing resource that local device d assigns to MAR subtask z, and f_{n,z} represents the computing resource that edge server n allocates to subtask z.
The service delay T_{d,z} is calculated from the following components (the equation images are not reproduced in the source; the recoverable definitions follow):

p_d and p_n represent the transmission power of mobile device d and edge server n, respectively; g^{up} and g^{down} represent the channel gains of the uplink and downlink, respectively; γ² represents the noise intensity; r^{up} and r^{down} denote the transmission rates of the uplink and downlink, which are determined by the uplink and downlink bandwidth resources b^{up} and b^{down}. t^{up}_{d,z} represents the uplink transmission delay of subtask z, where σ_{d,z} represents the input data size of subtask z; t^{down}_{d,z} represents the downlink transmission delay of the output result of subtask z, where σ'_{d,z} represents the data size produced when subtask z completes execution and σ'_{d,z-1} represents the result size of the predecessor task of subtask z. The migration delay of subtask z between edge server n and edge server k, and the transmission delay between edge servers of the result of the predecessor task of subtask z, are determined by ε, the transmission rate between edge servers. h_{d,z-1} = n denotes that the execution point of the predecessor task of subtask z is edge server n, and h_{d,z-1} = 0 denotes that it is local; θ_{d,z} = 0 denotes that subtask z is not offloaded and stays local, and θ_{d,z} = n denotes that the offloading point of the subtask is edge server n. The delay of transferring the cache request of subtask z is also counted, with U representing the number of transferred mobile-device requests. The local execution delay of a task is determined by f_{d,z}, the computing resource the local device assigns to the subtask; the execution delay of subtask z on the edge server n with the shortest transmission delay is determined by f_{n,z}, the computing resource that server n assigns to subtask z, and the execution delay of subtask z on another edge server n after migration is defined analogously. T_{d,z} represents the resulting execution delay of the task; θ_{d,z} represents the offloading decision of the task, with θ_{d,z} = 0 indicating that the task executes locally, and a separate migration decision variable indicates whether the task migrates, with value 0 when the task does not migrate.
As shown in fig. 1, the offloading and cache placement method for a MAR task in multi-access edge computing provided in this embodiment specifically includes the following steps:
Step 1, dividing the MAR task into a plurality of (five) subtasks with dependency relationships according to its working characteristics, wherein the subtasks comprise two types: the first type can only be executed on the local device; the second type can be executed on the local device or on an edge server, and its calculation results can be cached on the edge server. All subtasks constitute the set Z, and the subtasks of the second type constitute the set Z'. Then, all subtasks in the MEC system are queued by priority according to their latest execution times to obtain a priority queue.
The latest completion time t^{late}_{d,z} is calculated as follows (the equation image is not reproduced in the source): for the last subtask z5, the latest completion time equals T^{max}_d, the maximum completion time allowed for completing the entire task of mobile device d; for every earlier subtask z, the latest completion time equals t^{late}_{d,z+1} - T_{d,z+1}, that is, the latest completion time of the subsequent subtask of subtask z minus the execution time T_{d,z+1} of that subsequent subtask. With T^{max}_{d,z} the maximum execution time allowed for completing subtask z of mobile device d, the latest execution time of subtask z is then t^{late}_{d,z} - T^{max}_{d,z}.
A priority queue is generated according to the latest execution time of each MAR subtask.
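Under the same linear-chain assumption used above, the backward recursion and the resulting queue can be sketched as follows; the execution-time numbers are invented for the example.

    # Latest-execution-time computation and priority queue (illustrative).
    def priority_queue(exec_time, max_exec_time, task_deadline):
        """exec_time / max_exec_time: per-subtask lists (z1..z5, in chain order)."""
        n = len(exec_time)
        latest_completion = [0.0] * n
        latest_completion[-1] = task_deadline          # z5: whole-task deadline
        for z in range(n - 2, -1, -1):                 # walk the chain backwards
            latest_completion[z] = latest_completion[z + 1] - exec_time[z + 1]
        latest_exec = [latest_completion[z] - max_exec_time[z] for z in range(n)]
        return sorted(range(n), key=lambda z: latest_exec[z])  # most urgent first

    queue = priority_queue([0.02, 0.05, 0.06, 0.08, 0.03],
                           [0.03, 0.07, 0.08, 0.10, 0.04], task_deadline=0.4)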
Step 2, placing and initializing the caches in the cache set. For each cache in the cache set, two aspects, namely the completion time of each task and the cache size, are considered, and the cache files are placed using the analytic hierarchy process (AHP).
When the cache files are placed by the analytic hierarchy process, the target layer is the selection of a suitable server on which to place the cache file. The criterion layer considers two factors, namely the ratio of the size of each cache file to the cache space of the edge server on which it is placed, and the execution time of the subtask when it is offloaded to a certain server while the cache file is placed on that server. A suitable edge server is selected according to the probabilities calculated for the cache file by the analytic hierarchy process.
First, a judgment matrix A is defined (the matrix image is not reproduced in the source), in which the element a expresses the relative importance, when selecting an edge server, of the completion time of task z on the server versus the ratio of the cached task's size to the cache space on that server. Importance matrices are then designed for the two factors of the criterion layer: K_{1,z,i}(b, y) is the importance matrix for the factor of the service delay of subtask z of terminal d on edge server b relative to edge server y when cache file i is cached on edge servers b and y; K_{2,z,i}(b, y) is the importance matrix for the factor of the ratio of the size of cache file i to the cache space of the edge servers b and y on which it is placed.

T(z, b) = (1 - x_{rb}) T_{d,z}

where x_{rb} represents the popularity on server b of the cache r corresponding to subtask z of mobile device d; T_{d,z} represents the execution time of subtask z of mobile device d; and T(z, b) represents the completion time when subtask z of mobile device d is offloaded onto edge server b.

The eigenvector corresponding to the maximum eigenvalue λ_max of the matrix A is computed, and the eigenvectors corresponding to the matrices K_{1,z,i} and K_{2,z,i} are obtained in the same way (the equation images are not reproduced in the source). Combining these eigenvectors gives the weight of the n-th server; for task z, the probability that the i-th cache file selects the n-th edge server follows from these weights, and averaging over all subtasks gives the probability that cache file i is selected, for each task, to be placed on edge server n.
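A minimal sketch of the principal-eigenvector computation the text describes, for one subtask and three candidate servers; the pairwise judgment values and the 2 x 2 criterion matrix A are invented for illustration.

    import numpy as np

    # AHP sketch: priority weights from a pairwise judgment matrix.
    def ahp_weights(judgment):
        vals, vecs = np.linalg.eig(judgment)
        principal = vecs[:, np.argmax(vals.real)].real
        w = np.abs(principal)
        return w / w.sum()                   # normalized priority vector

    # Criterion layer: delay vs. size ratio; delay judged 3x as important here.
    A = np.array([[1.0, 3.0],
                  [1.0 / 3.0, 1.0]])
    crit_w = ahp_weights(A)

    # Alternative layer: pairwise comparison of three servers per criterion.
    K_delay = np.array([[1.0, 2.0, 4.0], [0.5, 1.0, 2.0], [0.25, 0.5, 1.0]])
    K_size = np.array([[1.0, 0.5, 1.0], [2.0, 1.0, 2.0], [1.0, 0.5, 1.0]])
    server_w = crit_w[0] * ahp_weights(K_delay) + crit_w[1] * ahp_weights(K_size)
    placement_prob = server_w / server_w.sum()   # probability per candidate server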
Step 3, initializing the offloading point and the execution point of each subtask according to the priority queue. The MAR subtasks for which no cache can be found on the edge servers are offloaded according to the order in the priority queue and the precedence relations between the subtasks.
When the subtasks are to be executed in priority order, it is first determined whether a predecessor task exists and whether it has completed. If the predecessor task is completed and the subtask is one of the two subtasks of video source acquisition and rendering, the subtask is executed directly on the local device.
If the subtask is one of the other three subtasks (tracking, mapping, and object detection), the mobile device first connects to the edge server with the shortest request time and searches for the cache on that server; if the cache file exists, it is transmitted as the result to the next subtask. If the cache file does not exist there, that server forwards the cache request to the other edge servers in the MEC system. If the cache is not found anywhere in the MEC system, it is first checked whether the local resources are sufficient: if the local computing resources are sufficient, the subtask is executed locally; otherwise it is offloaded to the initially connected edge server for execution. If the computing resources of the initially connected edge server are insufficient, the subtask can be migrated to another edge server for execution. After execution finishes, the subtask is removed from the queue. When the queues are empty, the initialization is complete.
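The decision flow of step 3 can be sketched as a single function per subtask; every data structure below (the placement map, the request-time ordering of servers, the resource tallies) is an assumed stand-in, since the patent does not fix these interfaces.

    # Illustrative offload/execution-point initialization for one subtask.
    def init_offload(z, local_only, placement, servers_by_req_time,
                     local_free, local_need, server_free, server_need):
        if z in local_only:
            return ("local", None)                 # video source / rendering
        for n in servers_by_req_time:              # nearest-first cache lookup
            if z in placement.get(n, set()):
                return ("cache_hit", n)            # reuse the cached result
        if local_free >= local_need:               # cache miss in the whole system
            return ("local", None)
        first = servers_by_req_time[0]
        if server_free[first] >= server_need:
            return ("offload", first)              # initially connected server
        for n in servers_by_req_time[1:]:          # migrate if it is overloaded
            if server_free[n] >= server_need:
                return ("offload", n)
        return ("local", None)                     # last resort: run on the device

    choice = init_offload("tracker", {"video_source", "renderer"},
                          {"s1": {"mapper"}, "s2": {"tracker"}}, ["s1", "s2"],
                          local_free=1e9, local_need=2e9,
                          server_free={"s1": 4e9, "s2": 8e9}, server_need=3e9)
    # choice == ("cache_hit", "s2")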
Step 4, optimizing the initially generated cache placement strategy and task offloading strategy by using the multi-objective artificial bee colony algorithm. Solutions are searched by the employed bees and the solution information is shared with the onlooker bees; the onlooker bees keep updating a solution until it can no longer be improved, at which point the corresponding bee is converted into a scout bee that returns to the initialization stage and generates a new solution.
Step 401, generating Y solutions w_{ij}, i.e., honey sources, through the initialization of steps 2 and 3, where i = 1, 2, ..., Y and j = 1, 2, ..., 2D + I; D represents the number of mobile devices in the system and I represents the number of caches in the cache set; each encoding comprises a cache placement strategy and a task offloading strategy.
Step 402, the employed bees search for honey sources according to the solution search formula (the equation image is not reproduced in the source; the standard ABC form is w'_{ij} = w_{ij} + φ(w_{ij} - w_{kj}) with φ a random number in [-1, 1]), where w_{ij} represents the old solution, w_{kj} represents a solution in the neighborhood of w_{ij}, and w'_{ij} represents the newly generated solution; the dimension j to be updated is chosen at random.
A new solution is generated according to the search formula. If w'_{ij} dominates the old solution w_{ij}, the new solution replaces the old one and is added to the external archive set; otherwise the old solution is added to the external archive set. If the two solutions do not dominate each other, both are added to the external archive set, and a fitness (crowding) function PY_i is used to judge whether a solution is kept in the external archive set (the equation image is not reproduced in the source); the quantities T_max and T_min in it represent the boundary values of the two objective functions of the grid in which the solution lies in the external archive set.
Step 403, the onlooker bees select the positions of solutions that can be further developed by the roulette wheel method, with probability p_i = PY_i / Σ_s PY_s, where p_i represents the selection probability of the i-th solution in the external archive set and PY_i represents the fitness value of the i-th solution.
Step 404, if a solution has not been updated after limit consecutive attempts, the onlooker bee is converted into a scout bee, the solution is abandoned, and the scout bee is re-initialized to generate a new solution.
Step 405, repeating steps 401 to 404 until the iteration ends, and selecting a suitable solution from the Pareto-optimal set (the external archive set).
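The Pareto bookkeeping of steps 402 and 405 (the dominance test and the external archive update) can be sketched with (hit rate, total delay) objective pairs; the sample values are invented, and the crowding-based pruning of a full archive is omitted.

    # Dominance test and external archive update (illustrative).
    def dominates(a, b):
        # a dominates b: no worse in both objectives, strictly better in one
        # (hit rate is maximized, total delay is minimized).
        no_worse = a[0] >= b[0] and a[1] <= b[1]
        better = a[0] > b[0] or a[1] < b[1]
        return no_worse and better

    def update_archive(archive, candidate):
        if any(dominates(kept, candidate) for kept in archive):
            return archive                         # dominated candidate: discard
        return [kept for kept in archive if not dominates(candidate, kept)] + [candidate]

    archive = []
    for solution in [(0.60, 1.8), (0.72, 2.1), (0.72, 1.5), (0.55, 2.5)]:
        archive = update_archive(archive, solution)
    # archive == [(0.72, 1.5)]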
The invention designs two indexes, hit rate and service delay, to evaluate task offloading and cache placement on the edge servers. Under the constraints of edge server computing resources and cache space, the task offloading and cache placement problem is formulated with the objectives of maximizing the hit rate and minimizing the delay. A multi-objective artificial bee colony algorithm is adopted for this problem, and the Pareto optimality relation is introduced in the optimization process to find optimal solutions. Extensive evaluation shows that the algorithm performs well.
Example two
The embodiment provides an offloading and cache placement system for MAR tasks in multi-access edge computing, which specifically includes the following modules:
a task partitioning module configured to: dividing the MAR task into a plurality of subtasks;
a priority queuing module configured to: performing priority queuing on all subtasks according to the latest execution time to obtain a priority queue;
a placement initialization module configured to: placing and initializing the cache files in the cache set to obtain a cache placing strategy;
a task offload initialization module configured to: initializing the offloading point and the execution point of each subtask according to the priority queue and the cache placement strategy to obtain a task offloading strategy;
an optimization module configured to: optimizing by using a multi-objective artificial bee colony algorithm according to the initially generated cache placement strategy and task offloading strategy.
It should be noted that, each module in the present embodiment corresponds to each step in the first embodiment one to one, and the specific implementation process is the same, which is not described herein again.
Example three
The present embodiment provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the offloading and cache placement method for MAR tasks in multi-access edge computing described in the first embodiment are implemented.
Example four
The present embodiment provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the offloading and cache placement method for MAR tasks in multi-access edge computing described in the first embodiment are implemented.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

  1. An offloading and cache placement method for an MAR task in multi-access edge computing, characterized by comprising the following steps:
    dividing the MAR task into a plurality of subtasks;
    performing priority queuing on all subtasks according to the latest execution time to obtain a priority queue;
    placing and initializing cache files in a cache set to obtain a cache placing strategy;
    initializing the offloading point and the execution point of each subtask according to the priority queue and the cache placement strategy to obtain a task offloading strategy;
    and optimizing by using a multi-objective artificial bee colony algorithm according to the initially generated cache placement strategy and task offloading strategy.
  2. The offloading and cache placement method for an MAR task in multi-access edge computing according to claim 1, wherein the subtasks include two types: the first type can only be executed on the local device; the second type can be executed on the local device or on an edge server, and the computation results can be cached on the edge server.
  3. The offloading and cache placement method for an MAR task in multi-access edge computing according to claim 1, wherein whether the predecessor task of a subtask is completed is determined according to the priority queue;
    if the predecessor task is completed and the subtask belongs to the first type, the subtask is directly executed on the local device;
    if the predecessor task is completed and the subtask belongs to the second type, the mobile device queries the edge server for the cache file required by the subtask, and if it exists, the cache file is directly transmitted as the result to the execution point of the next subtask; otherwise, the mobile device offloads the subtask to the nearest edge server for execution.
  4. The offloading and cache placement method for an MAR task in multi-access edge computing according to claim 1, wherein the objectives of the multi-objective artificial bee colony algorithm are: maximizing the hit rate and minimizing the total service delay.
  5. The offloading and cache placement method for an MAR task in multi-access edge computing according to claim 1, wherein the constraint for maximizing the hit rate comprises:
    the space occupied by the cache files cached on each edge server cannot exceed the cache space of the edge server itself.
  6. The offloading and cache placement method for an MAR task in multi-access edge computing according to claim 1, wherein the constraints for minimizing the total service delay comprise:
    the completion time of each subtask cannot exceed its maximum completion time;
    the bandwidth resource allocated to each downlink cannot exceed the bandwidth resource of the mobile device downlink;
    the bandwidth resources allocated to each uplink cannot exceed the bandwidth resources of the mobile device uplink;
    the computational resources allocated to each subtask cannot exceed the total computational resources of the mobile device;
    the computational resources allocated to each subtask cannot exceed the total computational resources of the edge server.
  7. The offloading and cache placement method for an MAR task in multi-access edge computing according to claim 1, wherein an analytic hierarchy process is adopted when the cache files are placed and initialized;
    the criterion layer of the analytic hierarchy process considers two factors: the ratio of the size of each cache file to the cache space of the edge server on which it is placed, and the execution time of the subtask when it is offloaded to a certain edge server while the cache file is placed on that server.
  8. An offloading and cache placement system for MAR tasks in multi-access edge computing, characterized by comprising:
    a task partitioning module configured to: dividing the MAR task into a plurality of subtasks;
    a priority queuing module configured to: performing priority queuing on all subtasks according to the latest execution time to obtain a priority queue;
    a placement initialization module configured to: placing and initializing the cache files in the cache set to obtain a cache placing strategy;
    a task offload initialization module configured to: initializing the offloading point and the execution point of each subtask according to the priority queue and the cache placement strategy to obtain a task offloading strategy;
    an optimization module configured to: optimizing by using a multi-objective artificial bee colony algorithm according to the initially generated cache placement strategy and task offloading strategy.
  9. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the offloading and cache placement method for an MAR task in multi-access edge computing according to any one of claims 1 to 7.
  10. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the offloading and cache placement method for an MAR task in multi-access edge computing according to any one of claims 1 to 7.
CN202210795740.3A 2022-07-07 2022-07-07 Unloading and buffer storage method and system for MAR task in multi-access edge calculation Pending CN115185660A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210795740.3A CN115185660A (en) 2022-07-07 2022-07-07 Unloading and buffer storage method and system for MAR task in multi-access edge calculation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210795740.3A CN115185660A (en) 2022-07-07 2022-07-07 Unloading and buffer storage method and system for MAR task in multi-access edge calculation

Publications (1)

Publication Number Publication Date
CN115185660A true CN115185660A (en) 2022-10-14

Family

ID=83517932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210795740.3A Pending CN115185660A (en) 2022-07-07 2022-07-07 Unloading and buffer storage method and system for MAR task in multi-access edge calculation

Country Status (1)

Country Link
CN (1) CN115185660A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117806806A (en) * 2024-02-28 2024-04-02 湖南科技大学 Task part unloading scheduling method, terminal equipment and storage medium
CN117806806B (en) * 2024-02-28 2024-05-17 湖南科技大学 Task part unloading scheduling method, terminal equipment and storage medium

Similar Documents

Publication Publication Date Title
CN113950066B (en) Single server part calculation unloading method, system and equipment under mobile edge environment
CN109885397B (en) Delay optimization load task migration algorithm in edge computing environment
CN109788046B (en) Multi-strategy edge computing resource scheduling method based on improved bee colony algorithm
CN111930436A (en) Random task queuing and unloading optimization method based on edge calculation
CN110069341B (en) Method for scheduling tasks with dependency relationship configured according to needs by combining functions in edge computing
CN114338504A (en) Micro-service deployment and routing method based on network edge system
CN113115252B (en) Delay sensitive task distributed mobile edge computing resource scheduling method and system
CN109656713B (en) Container scheduling method based on edge computing framework
CN111953547B (en) Heterogeneous base station overlapping grouping and resource allocation method and device based on service
CN113573363A (en) MEC calculation unloading and resource allocation method based on deep reinforcement learning
CN115185660A (en) Unloading and buffer storage method and system for MAR task in multi-access edge calculation
CN116263681A (en) Mobile edge computing task unloading method, device, equipment and storage medium
CN112256413A (en) Scheduling method and device for edge computing task based on Internet of things
Chen et al. Joint optimization of task offloading and resource allocation via deep reinforcement learning for augmented reality in mobile edge network
CN114691372A (en) Group intelligent control method of multimedia end edge cloud system
CN113741999A (en) Dependency-oriented task unloading method and device based on mobile edge calculation
Xu et al. A meta reinforcement learning-based virtual machine placement algorithm in mobile edge computing
CN117579701A (en) Mobile edge network computing and unloading method and system
CN113342504A (en) Intelligent manufacturing edge calculation task scheduling method and system based on cache
CN112596910A (en) Cloud computing resource scheduling method in multi-user MEC system
CN115361453A (en) Load fair unloading and transferring method for edge service network
CN114980160A (en) Unmanned aerial vehicle-assisted terahertz communication network joint optimization method and device
CN113747504A (en) Method and system for multi-access edge computing combined task unloading and resource allocation
Yadav E-MOGWO Algorithm for Computation Offloading in Fog Computing.
Channappa et al. Multi-Objective Optimization Method for Task Scheduling and Resource Allocation in Cloud Environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination