CN111552564A - Task unloading and resource optimization method based on edge cache - Google Patents

Task unloading and resource optimization method based on edge cache

Info

Publication number
CN111552564A
CN111552564A (application CN202010326117.4A)
Authority
CN
China
Prior art keywords: task, user, calculation, cloudlet, similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010326117.4A
Other languages
Chinese (zh)
Inventor
邓晓衡
刘锦
孙子惠
刘梦杰
Current Assignee
Central South University
Original Assignee
Central South University
Priority date
Filing date
Publication date
Application filed by Central South University
Priority to CN202010326117.4A
Publication of CN111552564A
Legal status: Pending

Classifications

    • G06F 9/5027 — Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5072 — Partitioning or combining of resources; grid computing
    • H04L 67/10 — Protocols in which an application is distributed across nodes in the network
    • H04L 67/12 — Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks

Abstract

The invention provides a task offloading and resource optimization method based on edge caching, which comprises the following steps: step 1, collecting the positioning and task-request information of user nodes; step 2, calculating and clustering similarity matrices of the user nodes' positioning and task-request information; step 3, constructing a task popularity list from the similarity-matrix calculation and the clustering result; and step 4, caching the computation results of popular tasks on the Cloudlet in each class according to the task popularity list. The invention studies the ultra-reliable, low-latency communication problem in edge networks by proactively caching computation results. It ensures reliability by allowing a user node UN to offload computing tasks to multiple edge computing nodes following the concept of hedged requests, and, by combining task offloading with proactive caching of the computation results of popular cacheable tasks, exploits the computing and storage resources of the Cloudlets to reduce computation delay to the maximum extent.

Description

Task unloading and resource optimization method based on edge cache
Technical Field
The invention relates to the technical field of communication, in particular to a task unloading and resource optimization method based on edge cache.
Background
The advent of the Internet of Things (IoT), which requires large-scale connection of resource-limited devices, poses unprecedented challenges to data communication and computing demand; availability requirements constrain system design in terms of both latency and reliability. Because IoT devices have limited computing and energy capabilities, it is difficult to meet strict computing and processing latency requirements with local computing resources alone. Mobile Cloud Computing (MCC) solutions have therefore been considered for providing computing services to IoT nodes: the User Node (UN) offloads its computing tasks to a remote cloud center, and the cloud server executes the tasks and sends the results back. While this solution provides high resource capacity, it cannot handle delay-sensitive computing tasks owing to the high transmission delay between UNs and the cloud data center. To this end, the concept of edge computing was introduced, bringing computing resources to the edge of the network. With short transmission distances, an edge network can strike a balance between high computing power and low transmission delay and provide efficient computing services for delay-sensitive applications. However, smarter resource-utilization schemes are needed to allocate communication and computing resources efficiently and make better use of the edge network.
Most current work on the joint optimization of computation and communication resources relies on centralized solutions in which the MEC network knows all user requests and the Channel State Information (CSI). Moreover, most work on edge-network environments considers only a reactive computing paradigm, starting task computation only after the task data have been offloaded to an edge node or server, and does not strictly account for the latency and reliability constraints of the edge network.
Owing to the distributed nature of the MEC network, computing resources can be moved closer to the network edge and more personalized computing services can be provided to end users, while proactive, pre-allocated computation that exploits the correlation among end-user requests can reduce computation delay to the maximum extent. Introducing the idea of prefetching is regarded as the first step toward proactive computation: part of the upcoming task data is predicted and prefetched while the current task data are being computed, so as to minimize acquisition time. Through proactive pre-allocated computation, the edge network can track the task popularity of a set of user nodes (UNs) and pre-store the corresponding computation results, thereby avoiding repeated requests for the same task data and reducing the burden of the task offloading and transmission process.
Disclosure of Invention
The invention provides a task offloading and resource optimization method based on edge caching, aiming to solve the problems that traditional methods cannot handle delay-sensitive computing tasks and do not strictly consider the latency and reliability constraints of the edge network.
In order to achieve the above object, an embodiment of the present invention provides a method for task offloading and resource optimization based on edge caching, including:
step 1, collecting positioning and task request information of user nodes;
step 2, calculating and clustering similarity matrixes of the positioning of the user nodes and the task request information;
step 3, constructing a task popularity list according to the calculation of the similarity matrix and the clustering result;
step 4, caching popular task calculation results for the Cloudlets in each class according to the task popularity list;
and step 5, matching the user nodes' tasks with the Cloudlets, on the premise that the computation results of popular tasks are cached, until the tasks of all user nodes are bidirectionally and stably matched with all Cloudlets.
Wherein, the step 2 specifically comprises:
measuring the coupling degree of each pair of user nodes based on their geographical positions, and quantifying the similarity between two user nodes with a Gaussian similarity matrix; the distance Gaussian similarity matrix is defined as S_d = [d_ij], where d_ij is calculated as:

$$d_{ij} = \exp\!\left(-\frac{\lVert V_i - V_j\rVert^{2}}{2\sigma_d^{2}}\right) \quad (1)$$

where V_i denotes the geographical coordinates of user node i, V_j the geographical coordinates of user node j, and σ_d a similarity parameter controlling the size of the neighborhood.
Wherein, the step 2 further comprises:
during network training, the task-request occurrences of each user u are recorded and form a task-request occurrence vector n_u = (n_u^1, n_u^2, ..., n_u^A), from which the task-popularity patterns among different user nodes are obtained. Ideally, the task-request occurrence vector captures the occurrence rate of each of the user node's tasks and helps establish similarity among users. The task-popularity similarity matrix is defined as S_p = [p_ij], where p_ij is calculated as:

$$p_{ij} = \frac{\mathbf{n}_i \cdot \mathbf{n}_j}{\lVert \mathbf{n}_i\rVert\,\lVert \mathbf{n}_j\rVert} \quad (2)$$

where n_i and n_j are the per-task request occurrence vectors of user node i and user node j.
Wherein, the step 2 further comprises:
the steps of user clustering and popularity-similarity-matrix calculation, wherein a network cluster comprises a group of user nodes that are close to each other and follow a similar task-popularity pattern; the distance Gaussian similarity matrix and the task-popularity similarity matrix are fused to obtain a comprehensive popularity similarity matrix, denoted S and calculated as:

$$S = \theta S_d + (1-\theta)S_p \quad (3)$$
where θ is a trade-off parameter for adjusting the degree of influence of the distance similarity and the task similarity.
Wherein, the step 3 specifically comprises:
the Cloudlet minimizes the service delay requested by the user during the operation of the network by actively caching the computation results of popular tasks received.
Wherein, the step 5 specifically comprises:
finding one Cloudlet for each user's requested task to perform one-to-one matching and calculating the expected delay of each task request; if the reliability index is met, the user node is matched with that Cloudlet; if the reliability index is not met, the user searches for a match among the other Cloudlets in its matching list in order of preference, until all user nodes are bidirectionally and stably matched with all Cloudlets.
The scheme of the invention has the following beneficial effects:
the method for task offloading and resource optimization based on edge cache according to the above embodiments of the present invention combines active cache capable of caching a computation task result with task offloading to fully utilize computation and storage resources, so as to reduce computation delay to the maximum extent, uses time delay as a key index, and imposes constraints on computation delay and reliability, so as to ensure that, under the constraint of reliability, total delay in task execution is minimized, and stable matching between a task request of a user node and Cloudlet or local devices thereof is achieved, thereby minimizing task computation delay, ensuring reliable service latency, and minimizing total delay in task offloading on the premise of ensuring computation reliability.
Drawings
FIG. 1 is a flow chart of the present invention;
fig. 2 is a diagram of the model architecture of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
The invention provides a task offloading and resource optimization method based on edge caching, aiming at the problems that existing methods cannot handle delay-sensitive computing tasks and do not strictly consider the latency and reliability constraints of the edge network.
As shown in fig. 1 to fig. 2, an embodiment of the present invention provides a method for task offloading and resource optimization based on edge caching, including: step 1, collecting the positioning and task-request information of user nodes; step 2, calculating and clustering similarity matrices of the user nodes' positioning and task-request information; step 3, constructing a task popularity list from the similarity-matrix calculation and the clustering result; step 4, caching the computation results of popular tasks on the Cloudlets in each class according to the task popularity list; and step 5, matching the user nodes' tasks with the Cloudlets, on the premise that the computation results of popular tasks are cached, until the tasks of all user nodes are bidirectionally and stably matched with all Cloudlets.
In the edge-cache-based task offloading and resource optimization method of the above embodiments, the user nodes UN and the Cloudlets serving them are grouped into several disjoint clusters according to spatial proximity and common interest in popular tasks, so that a Cloudlet can proactively cache the computation results of the tasks running in its cluster, thereby ensuring minimum computation delay. The problem of assigning tasks to Cloudlets is modeled as a matching problem between the Cloudlets and the user nodes UN; the assignment of user task requests to Cloudlets is solved through effective allocation, and stable matching is achieved, minimizing task computation delay and guaranteeing reliable service latency.
Wherein, the step 2 specifically comprises: measuring the coupling degree of each pair of user nodes based on their geographical positions, and quantifying the similarity between two user nodes with a Gaussian similarity matrix; the distance Gaussian similarity matrix is defined as S_d = [d_ij], where d_ij is calculated as:

$$d_{ij} = \exp\!\left(-\frac{\lVert V_i - V_j\rVert^{2}}{2\sigma_d^{2}}\right) \quad (1)$$

where V_i denotes the geographical coordinates of user node i, V_j the geographical coordinates of user node j, and σ_d a similarity parameter controlling the size of the neighborhood.
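The distance similarity d_ij above is the familiar Gaussian (RBF) kernel over node coordinates. A minimal NumPy sketch (the function name, sample coordinates, and σ_d value are illustrative, not taken from the patent):

```python
import numpy as np

def distance_similarity(coords, sigma_d=1.0):
    """Distance-based Gaussian similarity matrix S_d = [d_ij].

    coords  : (U, 2) array of user-node geographical coordinates V_i.
    sigma_d : similarity parameter controlling the neighborhood size.
    """
    diff = coords[:, None, :] - coords[None, :, :]   # pairwise V_i - V_j
    sq_dist = np.sum(diff ** 2, axis=-1)             # ||V_i - V_j||^2
    return np.exp(-sq_dist / (2.0 * sigma_d ** 2))

# Example: three user nodes; nearby nodes get similarity close to 1.
S_d = distance_similarity(np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0]]))
```

With σ_d = 1, nodes one unit apart obtain similarity exp(−0.5) ≈ 0.61, while far-apart nodes fall toward 0, so σ_d directly controls how wide a "neighborhood" counts as similar.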
Wherein, the step 2 further comprises: during network training, the task-request occurrences of each user u are recorded and form a task-request occurrence vector n_u = (n_u^1, n_u^2, ..., n_u^A), from which the task-popularity patterns among different user nodes are obtained. Ideally, the task-request occurrence vector captures the occurrence rate of each of the user node's tasks and helps establish similarity among users. The task-popularity similarity matrix is defined as S_p = [p_ij], where p_ij is calculated as:

$$p_{ij} = \frac{\mathbf{n}_i \cdot \mathbf{n}_j}{\lVert \mathbf{n}_i\rVert\,\lVert \mathbf{n}_j\rVert} \quad (2)$$

where n_i and n_j are the per-task request occurrence vectors of user node i and user node j.
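The popularity similarity p_ij can be read as the cosine similarity of the two request-occurrence vectors; the sketch below assumes that form (an assumption, since the source text only describes the vectors themselves), and the function name and sample vectors are illustrative:

```python
import numpy as np

def popularity_similarity(N):
    """Task-popularity similarity matrix S_p = [p_ij] as cosine similarity.

    N : (U, A) array; row u is the task-request occurrence vector n_u
        recorded for user u during network training.
    """
    norms = np.linalg.norm(N, axis=1, keepdims=True)
    norms[norms == 0.0] = 1.0        # guard: users with no recorded requests
    unit = N / norms
    return unit @ unit.T             # n_i . n_j / (||n_i|| ||n_j||)

# Example: users 0 and 1 request the same tasks in the same proportion,
# user 2 requests a different task entirely.
S_p = popularity_similarity(np.array([[4.0, 2.0, 0.0],
                                      [2.0, 1.0, 0.0],
                                      [0.0, 0.0, 7.0]]))
```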
Wherein, the step 2 further comprises: the steps of user clustering and popularity-similarity-matrix calculation, wherein a network cluster comprises a group of user nodes that are close to each other and follow a similar task-popularity pattern; the distance Gaussian similarity matrix and the task-popularity similarity matrix are fused to obtain a comprehensive popularity similarity matrix, denoted S and calculated as:

$$S = \theta S_d + (1-\theta)S_p \quad (3)$$

where θ is a trade-off parameter adjusting the relative influence of the distance similarity and the task similarity.
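A toy sketch of fusing the two matrices per equation (3) and then grouping users. The patent does not fix a clustering algorithm, so the threshold-based connected-components grouping below is only a hypothetical stand-in (spectral clustering on S would be a common alternative); all names and values are illustrative:

```python
import numpy as np

def fuse_similarity(S_d, S_p, theta=0.5):
    """Comprehensive similarity S = theta*S_d + (1-theta)*S_p (equation (3))."""
    return theta * S_d + (1.0 - theta) * S_p

def cluster_by_threshold(S, tau=0.5):
    """Toy clustering stand-in: users i, j fall in one cluster when
    S[i, j] >= tau, taking connected components of the resulting graph."""
    U = S.shape[0]
    labels = [-1] * U
    cur = 0
    for start in range(U):
        if labels[start] != -1:
            continue
        stack = [start]          # flood-fill one connected component
        labels[start] = cur
        while stack:
            i = stack.pop()
            for j in range(U):
                if labels[j] == -1 and S[i, j] >= tau:
                    labels[j] = cur
                    stack.append(j)
        cur += 1
    return labels

# theta = 0.4 weights popularity similarity slightly more than distance.
S = fuse_similarity(np.eye(3),
                    np.array([[1.0, 0.9, 0.0],
                              [0.9, 1.0, 0.0],
                              [0.0, 0.0, 1.0]]),
                    theta=0.4)
clusters = cluster_by_threshold(S, tau=0.5)
```

Here users 0 and 1 share a task-popularity pattern (S[0,1] = 0.54 ≥ τ) and land in one cluster, while user 2 forms its own.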
Wherein, the step 3 specifically comprises: during network operation, the Cloudlet minimizes the service delay of user requests by proactively caching the computation results of the popular tasks it receives.
In the edge-cache-based task offloading and resource optimization method of the above embodiment, the proactive caching policy for task computation results assumes that the cache space of every Cloudlet is empty at the initial stage of network operation. As soon as the offloading and distribution of computing tasks begins, a Cloudlet caches as many computation results as its own storage capacity allows; once its cache storage is full, the cached task-request result with the lowest popularity is replaced by a newly arrived task-request result of higher popularity.
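The replacement rule above (evict the least popular cached result in favor of a strictly more popular newcomer, subject to the Cloudlet's capacity s_e) can be sketched as follows; all names, sizes, and popularity scores are illustrative:

```python
def update_cache(cache, popularity, new_task, size, capacity):
    """Popularity-driven cache admission/eviction sketch.

    cache      : dict task -> stored result size on this Cloudlet
    popularity : dict task -> popularity score from the cluster's list
    Evicts the least popular cached results while that frees room for a
    strictly more popular newcomer; otherwise leaves the cache unchanged.
    """
    if new_task in cache or size > capacity:
        return cache
    used = sum(cache.values())
    while used + size > capacity and cache:
        victim = min(cache, key=lambda t: popularity.get(t, 0.0))
        if popularity.get(victim, 0.0) >= popularity.get(new_task, 0.0):
            return cache                  # newcomer is not more popular: keep
        used -= cache.pop(victim)         # evict lowest-popularity result
    if used + size <= capacity:
        cache[new_task] = size
    return cache

cache = {"a1": 2, "a2": 3}                # capacity 6, 5 units used
cache = update_cache(cache,
                     popularity={"a1": 9, "a2": 1, "a3": 5},
                     new_task="a3", size=3, capacity=6)
```

In the example the full cache evicts "a2" (popularity 1) to admit the more popular "a3" (popularity 5), while the most popular result "a1" stays put.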
Wherein, the step 5 specifically comprises: finding one Cloudlet for each user's requested task to perform one-to-one matching and calculating the expected delay of each task request; if the reliability index is met, the user node is matched with that Cloudlet; if the reliability index is not met, the user searches for a match among the other Cloudlets in its matching list in order of preference, until all user nodes are bidirectionally and stably matched with all Cloudlets.
In the edge-cache-based task offloading and resource optimization method of the above embodiment, the offloading-distribution problem of the user nodes' tasks is expressed as a matching problem between the set of user nodes UNs and the set of Cloudlets. The preference orderings of the user nodes and of the Cloudlets are denoted by the symbols ≻_u and ≻_e, respectively; ≻_u and ≻_e indicate how each member of one side ranks the members of the other side according to preference. According to the objective optimization function and the constraints in equations (4)-(18), if the first Cloudlet matched to a user node does not meet the reliability index, the user node seeks to match with further Cloudlets. The method first finds one Cloudlet for each user node's requested task for one-to-one matching, then calculates the expected delay of each task request; if the reliability-constraint inequality of equation (17) is not satisfied, the method runs again to match the user node UN with other Cloudlets, and it terminates when all user nodes UN and all Cloudlets achieve a bidirectional stable matching.
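A much-simplified propose-and-check sketch of this matching loop. It does not reproduce the patent's full bidirectional stable matching: `feasible` merely stands in for the reliability test of equation (17), and all names and capacities are illustrative:

```python
def match_tasks(users, prefs, feasible, capacity):
    """Walk each user's preference list and accept the first Cloudlet that
    passes the reliability test and still has capacity.

    users    : list of user ids
    prefs    : dict user -> Cloudlet ids in that user's preference order
    feasible : (user, cloudlet) -> True when the expected delay meets the
               reliability constraint (stand-in for equation (17))
    capacity : dict cloudlet -> max number of accepted task requests
    """
    load = {e: 0 for e in capacity}
    match = {}
    for u in users:
        for e in prefs[u]:                       # preference-ordered proposals
            if feasible(u, e) and load[e] < capacity[e]:
                match[u] = e                     # accept first feasible Cloudlet
                load[e] += 1
                break
        else:
            match[u] = None                      # fall back to local execution
    return match

match = match_tasks(
    users=[0, 1],
    prefs={0: ["e1", "e2"], 1: ["e1", "e2"]},
    feasible=lambda u, e: not (u == 1 and e == "e1"),  # e1 unreliable for user 1
    capacity={"e1": 1, "e2": 1},
)
```

User 0 takes its first choice "e1"; user 1 fails the reliability test there and moves down its preference list to "e2", mirroring the re-matching step described above.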
In the edge-cache-based task offloading and resource optimization method of the above embodiment, as shown in fig. 2, which depicts the cacheable edge-network scenario model, an edge network comprising E Cloudlets and U user nodes is considered. The Cloudlet set is defined as E = {1, 2, ..., E} and the user-node set as U = {1, 2, ..., U}; all user nodes UN are uniformly distributed in the edge-network area. The computing power of each Cloudlet e is assumed to be c_e (cycles/s, CPU clock speed) and its storage capacity is s_e. All Cloudlets share the same frequency band of bandwidth B and work in a time-division-multiplexing mode, so the user nodes transmit data in orthogonal time slots (the spectrum is orthogonally allocated and there is no intra-cell interference). All user nodes UN are interested in a set A = {a_1, a_2, ..., a_A} of A computing tasks; the task arrivals of user u follow a Poisson process with expectation λ_u, and the expected data size of task a is L_a. Processing each bit of task data requires k CPU cycles (cycles/bit). A user u (u ∈ U) issuing a task-computation request can offload the task to any Cloudlet within its communication transmission coverage, which is determined by a path-loss threshold; the list of Cloudlets to which user u can offload does not change frequently. The user offloads the task data to the first Cloudlet in coverage that responds to the request; after offloading completes, that Cloudlet processes the task and returns the computation result. Because the data volume of the returned computation result is very small, the downlink delay of returning the result is ignored, and the model focuses on the uplink transmission delay and the task computation delay. The computation results of cacheable popular tasks are proactively cached in the Cloudlets in advance, which reduces the delay of the whole task offloading and execution process to the greatest extent. The cacheable task subset is denoted A_c ⊆ A and the uncacheable subset A_nc = A \ A_c, with A_c ∪ A_nc = A.
Constructing the minimum-delay problem: the first implementation considered is local execution, in which user u performs the computation of task a locally at the user node UN; the second is offload execution, in which user u offloads task a to a Cloudlet for computation. Combining the local-execution and offload-execution cases, the delay D_a(t) incurred by executing task a can be expressed as:

$$D_a(t) = \Big(1-\sum_{e\in E} x_{ea}(t)\Big)\,D_a^{L}(t) + \sum_{e\in E} x_{ea}(t)\,D_a^{E}(t) \quad (4)$$

where x_ea(t) is a binary variable indicating whether task a is assigned to Cloudlet e at time t; x_ea(t) = 1 means task a is offloaded to Cloudlet e for execution. D_a^L(t) and D_a^E(t) denote the total delay when the user selects local execution and offload execution, respectively, and are calculated as:

$$D_a^{L}(t) = \tau_a^{LQ}(t) + \tau^{LP} \quad (5)$$

$$D_a^{E}(t) = \frac{L_a(t)}{r_e(t)} + \big(1-y_{ea}(t)\big)\big(\tau_a^{EQ}(t) + \tau^{EP}\big) \quad (6)$$

where τ_a^LQ(t) denotes the queuing delay required for local execution, τ^LP the processing delay of local execution, τ_a^EQ(t) the queuing delay of offloaded execution, τ^EP the Cloudlet processing delay, and y_ea(t) is a binary variable indicating whether the computation result of task a is cached in Cloudlet e at time t. The queuing delays of local execution and of offloaded execution are calculated as:

$$\tau_a^{LQ}(t) = \frac{k\,Q_u(t)}{c_u} \quad (7)$$

$$\tau_a^{EQ}(t) = \frac{k\,Q_e(t)}{c_e} \quad (8)$$

where Q_u denotes the local task queue of user u (i.e., the local pending-task queue), Q_u(t) the amount of pending task data remaining in the local queue at time t, and Q_e(t) the amount of task data remaining ahead of task a_i in queue Q_e in time slot t.
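Following one plausible reading of the delay model above (local versus offloaded execution, with a cached result skipping the Cloudlet's queuing and processing), the per-task delay can be sketched as a single function; the symbol names follow the text and all parameter values in the example are arbitrary:

```python
def task_delay(L_a, k, c_u, c_e, r_e, Q_u, Q_e, offloaded, cached):
    """Per-task delay D_a(t) under the local/offload split sketched above.

    L_a : task data size (bits)        k   : CPU cycles per bit
    c_u : local CPU speed (cycles/s)   c_e : Cloudlet CPU speed (cycles/s)
    r_e : uplink rate (bits/s)         Q_u, Q_e : queued bits ahead of the task
    """
    if not offloaded:                  # local: queuing delay + processing delay
        return k * Q_u / c_u + k * L_a / c_u
    if cached:                         # result already cached on the Cloudlet:
        return L_a / r_e               # only the transmission exchange remains
    return L_a / r_e + k * Q_e / c_e + k * L_a / c_e

# 1 Mbit task, 100 cycles/bit, 1 GHz local CPU, 5 GHz Cloudlet, 20 Mbit/s uplink.
d_local = task_delay(1e6, 100, 1e9, 5e9, 2e7, 0, 0, offloaded=False, cached=False)
d_edge  = task_delay(1e6, 100, 1e9, 5e9, 2e7, 0, 0, offloaded=True,  cached=False)
```

With these numbers local execution takes 0.1 s while offloading takes 0.07 s, and a cached result would cut the offload case down to the 0.05 s uplink time alone, which is why proactive result caching pays off.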
The total task-completion delay is reduced to the maximum extent under the reliability constraint by effectively distributing tasks and proactively caching computation-task results. Reliability is expressed by a probabilistic constraint model; the task allocation matrix between the user nodes UN and the Cloudlets is denoted X = [x_ea] and the task-result cache matrix Y = [y_ea]. The minimum-latency problem is modeled as follows:

$$\min_{X,\,Y}\; \mathbb{E}\Big\{\sum_{a\in A} D_a(t)\Big\} \quad (9)$$

$$\text{s.t.}\quad \Pr\{D_a(t) \ge D_{th}\} \le \eta,\ \forall a \quad (10)$$

$$\sum_{e\in E} x_{ea}(t) \le X_{max},\ \forall a \quad (11)$$

$$\sum_{a\in A_c} y_{ea}(t)\,L_a \le s_e,\ \forall e \quad (12)$$

where equation (10) is a probabilistic delay constraint ensuring that, with probability 1-η, the total latency of task offload execution is less than the limiting threshold D_th; constraint (11) indicates that a task request can be offloaded to at most X_max Cloudlets (in the experimental part that follows, X_max is given a constant value); and constraint (12) indicates that the amount of task results cached by Cloudlet e cannot exceed its storage capacity s_e.
To make the problem easier to handle, the probabilistic constraint in equation (10) is converted to a linear constraint using the Markov inequality, expressed as:

$$\Pr\{D_a(t) \ge D_{th}\} \le \frac{\mathbb{E}\{D_a(t)\}}{D_{th}} \le \eta \quad (13)$$

where E{·} represents the expectation over time. Since the latency of computing cached tasks is very small, it suffices to keep the latency of non-cached tasks below the predetermined threshold. For a single Cloudlet, the constraint can then be expressed as:

$$\mathbb{E}\{D_a^{E}(t)\} \le \eta\,D_{th} \quad (14)$$

Substituting the offload delay D_a^E(t) of equation (6) (with y_ea(t) = 0) into equation (14) gives:

$$\mathbb{E}\Big\{\frac{L_a(t)}{r_e(t)} + \frac{k\,\big(Q_e(t)+L_a(t)\big)}{c_e}\Big\} \le \eta\,D_{th} \quad (15)$$

Further splitting equation (15) by the definition of the Markov inequality yields equation (16) below:

$$\frac{L_a(t)}{\bar{r}_e(t)} + \frac{k\,\big(Q_e(t)+L_a(t)\big)}{c_e} \le \eta\,D_{th} \quad (16)$$

Finally, by ensuring that the above inequality is satisfied in every time slot t, the ultra-reliable low-latency communication constraint can be met, and the final constraint inequality can be expressed as:

$$\frac{L_a(t)}{\bar{r}_e(t)} \le \eta\,D_{th} - \frac{k\,\big(Q_e(t)+L_a(t)\big)}{c_e} \quad (17)$$

Equation (17) shows that, to achieve the desired reliability, the transmission delay allowed for a new task joining the task queue Q_e of Cloudlet e must not exceed ηD_th − k(Q_e(t)+L_a(t))/c_e. That is, if the transmission delay between task a and the currently requested Cloudlet e does not satisfy the constraint in equation (17), the user node can be matched with another Cloudlet so as to satisfy the reliability constraint. The transmission delay obtained from equation (17) depends not only on the size of task a at time t but also on the transmission-channel conditions between user u and the matched Cloudlet. For each Cloudlet e within the coverage of user u, a time-average estimation method is used to estimate the mean service rate \bar{r}_e(t) at time t:

$$\bar{r}_e(t) = \big(1-\nu(t)\big)\,\bar{r}_e(t-1) + \nu(t)\,r_e(t) \quad (18)$$

where ν(t) is a learning parameter with value range ν(t) ∈ (0, 1] and r_e(t) is the transmission rate observed in slot t.
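The time-average service-rate estimate above is an exponentially weighted moving average of the observed rate; paired with the admission test of equation (17) as reconstructed here, it can be sketched as follows (all parameter values are illustrative):

```python
def update_rate_estimate(r_bar_prev, r_obs, v=0.1):
    """EWMA service-rate estimate: r_bar(t) = (1-v)*r_bar(t-1) + v*r(t),
    with learning parameter v in (0, 1]."""
    assert 0.0 < v <= 1.0
    return (1.0 - v) * r_bar_prev + v * r_obs

def admits(L_a, r_bar, Q_e, k, c_e, D_th, eta):
    """Reliability admission test in the spirit of equation (17): the expected
    uplink delay must fit within the slack that the delay budget eta*D_th
    leaves after the Cloudlet's queuing and processing time."""
    return L_a / r_bar <= eta * D_th - k * (Q_e + L_a) / c_e

# Three slots of observed rate 2 Mbit/s pull the estimate up from 1 Mbit/s.
r_bar = 1e6
for r in (2e6, 2e6, 2e6):
    r_bar = update_rate_estimate(r_bar, r, v=0.5)

# 100 kbit task, empty Cloudlet queue, 100 cycles/bit, 1 GHz CPU,
# D_th = 0.5 s, eta = 0.5: the request passes the reliability test.
ok = admits(L_a=1e5, r_bar=r_bar, Q_e=0, k=100, c_e=1e9, D_th=0.5, eta=0.5)
```

When `admits` returns False for the currently requested Cloudlet, the matching procedure described earlier moves the user node on to the next Cloudlet in its preference list.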
the method for task offloading and resource optimization based on edge caching according to the embodiments of the present invention combines active caching capable of caching a computation task result with task offloading to fully utilize computation and storage resources, so as to reduce computation delay to the maximum extent, uses time delay as a key index, and imposes constraints on computation delay and reliability, so as to ensure that the total delay of task execution is minimized under the constraint of reliability. In the edge network model, the user node UN and the edge computing node Cloudlet providing service for the user node UN form a cluster based on spatial similarity and common interest in the popular tasks, accordingly, the Cloudlet actively caches the computation results of the popular tasks in advance in the cluster where the Cloudlet is located, the cached computation results of the popular tasks can be reused by other multiple user devices, so that delay is minimized, distribution of the tasks is modeled as a matching problem between the Cloudlet and the user node UN, stable matching between the task request of the user node UN and the Cloudlet or local devices of the user node UN is realized, task computation delay is minimized, and reliable service waiting time is ensured.
The edge-cache-based task offloading and resource optimization method of the above embodiments integrates the concept of proactive caching into task offloading; a cache-based ultra-reliable, low-latency task offloading scheme is highly significant in the context of mobile edge computing. Meanwhile, a clustering concept is introduced: clusters of Cloudlets and user nodes UN are formed based on the users' spatial proximity and the similarity of the tasks they are interested in, and the Cloudlets in each cluster proactively cache the relevant computation results according to the cluster's task-popularity similarity matrix, reducing computation delay to the maximum extent and balancing the minimization of service latency against the maximization of service reliability under different network conditions. The method thereby realizes both the caching of popular tasks' computation results and the offloading allocation of the tasks that users do not execute locally.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (6)

1. A method for task unloading and resource optimization based on edge cache is characterized by comprising the following steps:
step 1, collecting positioning and task request information of user nodes;
step 2, calculating and clustering similarity matrixes of the positioning of the user nodes and the task request information;
step 3, constructing a task popularity list according to the calculation of the similarity matrix and the clustering result;
step 4, caching popular task calculation results for the Cloudlets in each class according to the task popularity list;
and step 5, matching the user nodes' tasks with the Cloudlets, on the premise that the computation results of popular tasks are cached, until the tasks of all user nodes are bidirectionally and stably matched with all Cloudlets.
2. The method for task offloading and resource optimization based on edge caching according to claim 1, wherein the step 2 specifically comprises:
measuring the coupling degree of each pair of user nodes based on their geographical positions, and quantifying the similarity between two user nodes with a Gaussian similarity matrix; the distance Gaussian similarity matrix is defined as S_d = [d_ij], where d_ij is calculated as:

$$d_{ij} = \exp\!\left(-\frac{\lVert V_i - V_j\rVert^{2}}{2\sigma_d^{2}}\right) \quad (1)$$

where V_i denotes the geographical coordinates of user node i, V_j the geographical coordinates of user node j, and σ_d a similarity parameter controlling the size of the neighborhood.
3. The method for task offloading and resource optimization based on edge caching as claimed in claim 2, wherein the step 2 further comprises:
during network training, the task requests generated by each user u are recorded to form a task request occurrence vector n_u, from which task popularity patterns among different user nodes are obtained; ideally, the task request occurrence vector captures the generation rate of each task of a user node and helps establish similarity among users; the task popularity similarity matrix is defined as S_p = [p_ij], where p_ij is calculated as:
p_ij = (n_i · n_j) / (‖n_i‖ ‖n_j‖) (2)
wherein n_i and n_j represent the task request occurrence vectors of user node i and user node j, respectively.
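A sketch of the task popularity similarity, under the assumption that p_ij is the cosine similarity between the two task request occurrence vectors (the function name and plain-list vectors are illustrative):

```python
import math

def popularity_similarity(request_vectors):
    """Cosine similarity S_p = [p_ij] between task-request occurrence vectors.

    request_vectors: one per-task request-count vector n_u per user node.
    """
    n = len(request_vectors)
    s = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            dot = sum(a * b for a, b in zip(request_vectors[i], request_vectors[j]))
            norm_i = math.sqrt(sum(a * a for a in request_vectors[i]))
            norm_j = math.sqrt(sum(b * b for b in request_vectors[j]))
            # Guard against all-zero vectors (a user that requested nothing).
            s[i][j] = dot / (norm_i * norm_j) if norm_i and norm_j else 0.0
    return s
```

Users whose request vectors point in the same direction (same mix of tasks, regardless of total volume) get similarity 1.0; users with disjoint task sets get 0.0.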
4. The method for task offloading and resource optimization based on edge caching as claimed in claim 3, wherein the step 2 further comprises:
user clustering and popularity similarity matrix calculation: a network cluster comprises a group of user nodes that are geographically very close to each other and follow a similar task popularity pattern; the distance Gaussian similarity matrix and the task popularity similarity matrix are fused together to obtain a comprehensive popularity similarity matrix, denoted S, which is calculated as:
S = θS_d + (1 − θ)S_p (3)
where θ is a trade-off parameter for adjusting the degree of influence of the distance similarity and the task similarity.
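The fusion in formula (3) is a per-entry weighted sum of the two matrices; a minimal sketch (function name assumed, matrices as plain nested lists):

```python
def fuse_similarity(s_d, s_p, theta):
    """Comprehensive similarity S = theta*S_d + (1 - theta)*S_p, formula (3).

    theta in [0, 1] trades off distance similarity against task similarity.
    """
    n = len(s_d)
    return [[theta * s_d[i][j] + (1 - theta) * s_p[i][j] for j in range(n)]
            for i in range(n)]
```

With θ = 1 only geography matters; with θ = 0 only the task popularity pattern matters. The fused matrix S would then feed a standard similarity-based clustering (e.g. spectral clustering) to form the network clusters.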
5. The method for task offloading and resource optimization based on edge caching according to claim 4, wherein the step 3 specifically comprises:
during network operation, the Cloudlet proactively caches the calculation results of the popular tasks it receives, thereby minimizing the service delay of user requests.
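Claims 4 and 5 amount to ranking tasks by popularity within a cluster and proactively caching the top results; a minimal sketch, under the assumption that popularity is simply the request count (the function name and cache-size parameter are illustrative):

```python
from collections import Counter

def popular_tasks(cluster_requests, cache_size):
    """Build a task-popularity list for one cluster and pick the tasks to cache.

    cluster_requests: iterable of task ids requested by users in the cluster.
    cache_size: how many popular tasks' computation results the Cloudlet holds.
    """
    counts = Counter(cluster_requests)
    # Most-requested tasks first; the Cloudlet proactively caches their results.
    return [task for task, _ in counts.most_common(cache_size)]
```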
6. The method for task offloading and resource optimization based on edge caching according to claim 5, wherein the step 5 specifically comprises:
finding one Cloudlet for the requested task of each user to perform one-to-one matching, and calculating the expected delay of each task request; if the reliability index is met, the user node is matched with that Cloudlet; if the reliability index is not met, the user searches for a match among the other Cloudlets in its matching list according to its preference ranking, until all user nodes are bidirectionally and stably matched with all Cloudlets.
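The bidirectional stable matching of claim 6 can be sketched as a deferred-acceptance (Gale-Shapley style) procedure; here preference lists stand in for the expected-delay ranking, the reliability check is omitted for brevity, and all names are illustrative assumptions:

```python
def stable_match(user_prefs, cloudlet_prefs):
    """One-to-one stable matching of users to Cloudlets by deferred acceptance.

    user_prefs[u]: Cloudlet indices ordered by user u's preference
                   (e.g. lowest expected delay first).
    cloudlet_prefs[c]: user indices ordered by Cloudlet c's preference.
    Returns a dict user -> matched Cloudlet; no user/Cloudlet pair would both
    rather be matched to each other than to their assigned partners.
    """
    # rank[c][u]: position of user u in Cloudlet c's preference list.
    rank = [{u: r for r, u in enumerate(p)} for p in cloudlet_prefs]
    next_choice = [0] * len(user_prefs)  # next Cloudlet each user proposes to
    engaged = {}                         # Cloudlet -> user currently matched
    free = list(range(len(user_prefs)))
    while free:
        u = free.pop()
        c = user_prefs[u][next_choice[u]]
        next_choice[u] += 1
        if c not in engaged:
            engaged[c] = u
        elif rank[c][u] < rank[c][engaged[c]]:
            free.append(engaged[c])      # displaced user becomes free again
            engaged[c] = u
        else:
            free.append(u)               # rejected; u will try its next Cloudlet
    return {u: c for c, u in engaged.items()}
```

Each user proposes to Cloudlets in preference order; a Cloudlet keeps only its most-preferred proposer, which is what makes the final matching bidirectionally stable.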
CN202010326117.4A 2020-04-23 2020-04-23 Task unloading and resource optimization method based on edge cache Pending CN111552564A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010326117.4A CN111552564A (en) 2020-04-23 2020-04-23 Task unloading and resource optimization method based on edge cache

Publications (1)

Publication Number Publication Date
CN111552564A true CN111552564A (en) 2020-08-18

Family

ID=72003902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010326117.4A Pending CN111552564A (en) 2020-04-23 2020-04-23 Task unloading and resource optimization method based on edge cache

Country Status (1)

Country Link
CN (1) CN111552564A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3605329A1 (en) * 2018-07-31 2020-02-05 Commissariat à l'énergie atomique et aux énergies alternatives Connected cache empowered edge cloud computing offloading
CN109474961A (en) * 2018-12-05 2019-03-15 安徽大学 A kind of network energy efficiency optimization method of mobile edge calculations server, system
CN110602173A (en) * 2019-08-20 2019-12-20 广东工业大学 Content cache migration method facing mobile block chain

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MOHAMMED S. ELBAMBY et al.: "Proactive edge computing in latency-constrained fog networks" *
DENG Xiaoheng; GUAN Peiyuan; WAN Zhiwen; LIU Enlu; LUO Jie; ZHAO Zhihui; LIU Yajun; ZHANG Honggang: "Research on collaborative edge computing resources based on comprehensive trust" *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112217940A (en) * 2020-08-28 2021-01-12 深圳市修远文化创意有限公司 Memory release method and related device
CN112612601A (en) * 2020-12-07 2021-04-06 苏州大学 Intelligent model training method and system for distributed image recognition
CN112883526A (en) * 2021-03-15 2021-06-01 广西师范大学 Workload distribution method under task delay and reliability constraints
CN112883526B (en) * 2021-03-15 2023-04-07 广西师范大学 Workload distribution method under task delay and reliability constraint
CN112988275A (en) * 2021-03-26 2021-06-18 河海大学 Task perception-based mobile edge computing multi-user computing unloading method
CN113282409A (en) * 2021-05-13 2021-08-20 广东电网有限责任公司广州供电局 Edge calculation task processing method and device and computer equipment
CN113037872B (en) * 2021-05-20 2021-08-10 杭州雅观科技有限公司 Caching and prefetching method based on Internet of things multi-level edge nodes
CN113037872A (en) * 2021-05-20 2021-06-25 杭州雅观科技有限公司 Caching and prefetching method based on Internet of things multi-level edge nodes
CN113452566A (en) * 2021-07-05 2021-09-28 湖南大学 Cloud edge side cooperative resource management method and system
CN113626107A (en) * 2021-08-20 2021-11-09 中南大学 Mobile computing unloading method, system and storage medium
CN113626107B (en) * 2021-08-20 2024-03-26 中南大学 Mobile computing unloading method, system and storage medium
CN115134410A (en) * 2022-05-18 2022-09-30 北京邮电大学 Edge collaboration service field dividing method and device, electronic equipment and storage medium
CN115134410B (en) * 2022-05-18 2023-11-10 北京邮电大学 Edge collaboration service domain division method and device, electronic equipment and storage medium
CN115878227A (en) * 2023-03-02 2023-03-31 江西师范大学 Edge calculation task unloading method based on crowd classification

Similar Documents

Publication Publication Date Title
CN111552564A (en) Task unloading and resource optimization method based on edge cache
Fadlullah et al. HCP: Heterogeneous computing platform for federated learning based collaborative content caching towards 6G networks
CN110413392B (en) Method for formulating single task migration strategy in mobile edge computing scene
CN111031102B (en) Multi-user, multi-task mobile edge computing system cacheable task migration method
Elbamby et al. Proactive edge computing in latency-constrained fog networks
Xiong et al. Resource allocation based on deep reinforcement learning in IoT edge computing
WO2022121097A1 (en) Method for offloading computing task of mobile user
CN107995660B (en) Joint task scheduling and resource allocation method supporting D2D-edge server unloading
CN111586696B (en) Resource allocation and unloading decision method based on multi-agent architecture reinforcement learning
CN109151864B (en) Migration decision and resource optimal allocation method for mobile edge computing ultra-dense network
CN112105062B (en) Mobile edge computing network energy consumption minimization strategy method under time-sensitive condition
Elbamby et al. Proactive edge computing in fog networks with latency and reliability guarantees
Li et al. Capacity-aware edge caching in fog computing networks
CN109639833A (en) A kind of method for scheduling task based on wireless MAN thin cloud load balancing
Lee et al. Online optimization for low-latency computational caching in fog networks
CN114938381B (en) D2D-MEC unloading method based on deep reinforcement learning
Xiong et al. Learning augmented index policy for optimal service placement at the network edge
Chen et al. Dynamic task caching and computation offloading for mobile edge computing
CN113407249B (en) Task unloading method facing to position privacy protection
CN112689296B (en) Edge calculation and cache method and system in heterogeneous IoT network
CN116828534B (en) Intensive network large-scale terminal access and resource allocation method based on reinforcement learning
CN116347522A (en) Task unloading method and device based on approximate computation multiplexing under cloud edge cooperation
CN113115362B (en) Cooperative edge caching method and device
Li Optimization of task offloading problem based on simulated annealing algorithm in MEC
Mi et al. Joint caching and transmission in the mobile edge network: An multi-agent learning approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination