CN117632298B - Task unloading and resource allocation method based on priority list indexing mechanism - Google Patents

Task unloading and resource allocation method based on priority list indexing mechanism

Info

Publication number
CN117632298B
CN117632298B (application number CN202311663171.8A)
Authority
CN
China
Prior art keywords
dag
priority
task
resource allocation
subtask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311663171.8A
Other languages
Chinese (zh)
Other versions
CN117632298A
Inventor
陈益杉
罗显淞
王碧
李伟
徐中辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi University of Science and Technology
Original Assignee
Jiangxi University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi University of Science and Technology filed Critical Jiangxi University of Science and Technology
Priority to CN202311663171.8A
Publication of CN117632298A
Application granted
Publication of CN117632298B
Legal status: Active


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a task offloading and resource allocation method based on a priority list indexing mechanism, comprising the following steps: collecting, through a monitor, the resource information of the edge servers in a designated area, constructing a DAG model from the task requests of the local devices, and generating a priority index list; based on the priority index list, performing target optimization with a greedy algorithm and a priority-based resource allocation algorithm to obtain an offloading policy for the associated tasks and a resource allocation scheme; and executing the users' task offloading operations according to the obtained offloading policy and resource allocation scheme. By predicting and optimizing task offloading and resource allocation, the invention effectively reduces overall delay, improves resource utilization, and achieves efficient execution of the associated tasks and reasonable allocation of resources.

Description

Task unloading and resource allocation method based on priority list indexing mechanism
Technical Field
The invention relates to the technical field of DAG task offloading and resource allocation in edge computing environments, and in particular to a task offloading and resource allocation method based on a priority list indexing mechanism.
Background
Edge computing distributes computation and data processing across multiple edge servers, enabling more efficient and rapid data processing and delivery, reducing dependence on the cloud computing center, and lowering the delay and bandwidth consumption of data transmission. In such an environment, tasks need to be offloaded to multiple edge servers for processing in order to reduce latency and bandwidth consumption and to improve quality of service.
However, in a multi-user, multi-edge-server environment, how to reasonably offload associated tasks and allocate resources to them remains a challenging problem. The associated tasks generated by a local device are typically modeled as a DAG task; a DAG task may consist of multiple subtasks with precedence and dependency relationships between them, so a policy is needed that can reasonably allocate resources and coordinate the execution order of the tasks (an illustrative representation is sketched below). Meanwhile, since edge servers differ in performance and resource utilization, a scheme that can dynamically adjust resource allocation and task offloading is also required.
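For illustration only, a DAG task of this kind could be represented roughly as in the following sketch; the class names, field names, and layout are assumptions for readability and do not appear in the patent.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Subtask:
    index: int                    # serial number i of the subtask
    workload: float               # computation size of the subtask
    # successor subtask index -> size of the dependent data sent to it
    out_data: Dict[int, float] = field(default_factory=dict)

@dataclass
class DAGTask:
    device: int                   # serial number x of the local device I_x
    subtasks: Dict[int, Subtask] = field(default_factory=dict)

    def start_subtask(self) -> Subtask:
        """Entry subtask: the one that no other subtask sends dependent data to."""
        with_pred = {j for s in self.subtasks.values() for j in s.out_data}
        return next(s for i, s in self.subtasks.items() if i not in with_pred)
```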
In practical applications, resource limitation is an important issue. Because the computing power of edge servers is limited, a solution is needed that optimizes resource utilization and task execution efficiency. On the one hand, the computing resources of the edge servers should be utilized as fully as possible to increase the processing power and efficiency of the overall system. On the other hand, problems such as resource waste and task blocking need to be avoided to ensure the stability and reliability of the system.
To address these problems, a priority mechanism can be introduced to formulate a reasonable task offloading scheme and resource allocation scheme.
A priority mechanism for task offloading assigns different tasks to different priority levels. On the premise of preserving the task order and execution efficiency, it allows higher-priority tasks to preempt better offloading positions, thereby improving the overall performance of the system. Meanwhile, to allocate resources more accurately, an effective resource allocation method also needs to be designed. Such a method must take into account multiple factors, such as the overall computation amount of each DAG task, the computing resources of each device, the resource utilization of the edge servers, and the dependencies among the tasks, so that more reasonable decisions can be made on task offloading and resource allocation; this is a practical and challenging piece of work.
Disclosure of Invention
Aiming at the problem of offloading and resource allocation of DAG tasks in an edge computing environment, the invention provides a task offloading and resource allocation method based on a priority list indexing mechanism.
In order to achieve the above object, the present invention provides a task offloading and resource allocation method based on a priority list indexing mechanism, including:
collecting the resource information of the edge servers in a designated area, constructing a DAG model from the task requests of the local devices, and generating a priority index list;
based on the priority index list, performing target optimization with a greedy algorithm and a priority-based resource allocation algorithm to obtain an offloading policy for the associated tasks and a resource allocation scheme;
and executing the users' task offloading operations according to the offloading policy and the resource allocation scheme.
Preferably, constructing the DAG model from the task requests of the local devices and generating the priority index list includes:
defining the priority of each subtask within a DAG and the priority of each DAG, sorting the subtasks of each DAG to obtain that DAG's subtask priority list, and sorting the subtask priority lists of the DAGs to obtain the DAG priority list, i.e. the priority index list.
Preferably, the method for defining the priority of each subtask in the DAG is as follows:
In the formula, the subtask generated by local device I x is identified by the subtask serial number i and the local-device serial number x; its priority is determined by its computation size, the integrated computing power of the local device I x and the edge servers in the system, the set of subtasks to which it sends dependent data, the dependent data it transmits to each subtask in that set, the size of that dependent data, and the average transmission rate of dependent data in the system.
Preferably, the method of defining the priority of each DAG is:
Where G x is the DAG task generated by local device I x, and the priority ψ(x) of G x is defined in terms of the start subtask of G x and the system cost cost(G x) of G x.
Preferably, obtaining the offloading policy for the associated tasks and the resource allocation scheme includes:
searching, with a greedy algorithm and in the order of the priority index list, for the offloading decision that gives each subtask the minimum earliest completion time, and obtaining the DAG offloading record of each edge server;
and taking out each subtask in the order of the priority index list, offloading it to the device with the earliest completion time, updating the subtask offloading policy and the edge server's DAG offloading record, and calculating the DAG's computing resource share on the edge server from that record, thereby obtaining the offloading policy for the associated tasks and the resource allocation scheme.
Preferably, the computing resource share of a DAG on an edge server is calculated from the edge server's DAG offloading record as follows:
In the formula, the resource allocation proportion is expressed in terms of binary variables recording whether each DAG task generated by a local device offloads subtasks to the edge server, the priority ψ(n) of each DAG, and the total number N of DAG tasks.
Preferably, the objective function of the offloading policy and resource allocation scheme is the average response time for the system to complete all DAG tasks:
where ft is the completion time of a subtask, G x is the DAG task generated by local device I x, the average is taken over the completion times of the ending tasks of all G x, and N is the total number of DAG tasks.
In another aspect, in order to achieve the above objective, the present invention further provides a task offloading and resource allocation system based on a priority list indexing mechanism, including:
a system monitoring unit: for monitoring task information from the devices and resource information of the system;
an offloading planning unit: for planning the task offloading policy and the resource allocation scheme through a DAG priority list indexing mechanism according to the task information and the resource information;
a policy enforcement unit: for executing the users' task offloading operations according to the formulated task offloading policy and resource allocation scheme.
Compared with the prior art, the invention has the following advantages and technical effects:
By comprehensively considering task priority and user priority, the invention better solves the DAG task offloading problem in a distributed computing environment: each subtask obtains the earliest finish time achievable under the current resource conditions, and the computing resources of the distributed computing environment are utilized to the greatest extent, thereby improving the execution efficiency and performance of the DAG tasks. By predicting and optimizing task offloading and resource allocation, overall delay is effectively reduced, resource utilization is improved, and efficient execution of the associated tasks and reasonable allocation of resources are achieved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application. In the drawings:
FIG. 1 is a schematic diagram of a task offloading and resource allocation system architecture based on a priority list indexing mechanism according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for offloading DAG tasks for resource allocation according to an embodiment of the present invention;
FIG. 3 is a diagram showing a relationship between task response time and a parameter α according to an embodiment of the present invention;
FIG. 4 is a graph showing the relationship between task response time and algorithm according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a relationship between task response time and the number of edge servers according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a relationship between task response time and the number of local devices according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a task response time versus a maximum number of logic cores according to an embodiment of the present invention.
Detailed Description
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
The invention provides a task offloading and resource allocation method based on a priority list indexing mechanism which, as shown in FIG. 1, specifically includes the following steps:
In general, task offloading takes a smaller offloading delay as its optimization target, but computing resources in the environment are limited, so some important or computation-intensive tasks cannot be allocated more computing resources. This is especially harmful for associated tasks: when the delay of one task grows, it often directly or indirectly holds up its successor tasks, which in turn increases the overall delay of the system. To let tasks with a larger computation amount or a special position in the DAG enjoy better computing resources, such tasks should preempt better offloading positions; a user-oriented task offloading and resource allocation method based on a priority list indexing mechanism is therefore provided.
A monitor collects the resource information of the edge servers in the designated area, constructs a DAG model from the task requests of the local devices, and generates a priority index list.
The priority of each subtask within a DAG and the priority of each DAG are defined; the subtasks of each DAG are sorted to obtain that DAG's subtask priority list, and the subtask priority lists of the DAGs are sorted to obtain the DAG priority list, i.e. the priority index list.
The priority of each subtask in the DAG is defined as:
the priority of each DAG is defined as:
Wherein:
1) G x denotes the DAG task generated by local device I x; each subtask of G x is identified by its sequence number i; G x has a start subtask; and each subtask has an associated set of subtasks to which it must send dependent data.
2) Each subtask has a computation size; the integrated computing power of I x and the edge servers in the system is expressed as follows:
Where α represents a weight parameter, F x is the computing power of I x, F k is the computing power of edge servers P k, and M is the total number of edge servers.
3) Each dependency corresponds to dependent data transmitted from one subtask to another; the dependent data has a size, and dependent data is transmitted in the system at an average transmission rate.
4) Cost (G x) is the system cost of G x, expressed as follows:
where cost(G x) is computed over the set of subtasks in G x and over E x, the set of dependent data in G x.
5) The subtasks within each DAG are sorted in descending order of rank to obtain that DAG's subtask priority list Q; the Q lists of all DAGs are then sorted in descending order of ψ to obtain the DAG priority list PL. A sketch of this index operation is given below.
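A minimal sketch of the index operation, assuming that the rank value of every subtask and the ψ value of every DAG have already been computed as defined above; the function name and dictionary layouts are illustrative, not the patent's notation.

```python
def build_priority_lists(rank, psi):
    """rank[x][i]: priority of subtask i of DAG G_x;  psi[x]: priority of G_x."""
    # Subtask priority list Q of each DAG: subtask indices sorted by rank, descending.
    q = {x: sorted(rank[x], key=rank[x].get, reverse=True) for x in rank}
    # DAG priority list PL: DAGs sorted by psi, descending; PL indexes into the Q lists.
    pl = sorted(psi, key=psi.get, reverse=True)
    return q, pl

# Example: two DAGs, G_0 with higher priority than G_1.
q, pl = build_priority_lists(
    rank={0: {0: 9.0, 1: 4.5}, 1: {0: 7.0, 1: 6.0, 2: 1.0}},
    psi={0: 3.2, 1: 1.8},
)
# pl == [0, 1];  q == {0: [0, 1], 1: [0, 1, 2]}
```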
Based on the priority index list, target optimization is performed with a greedy algorithm and a priority-based resource allocation algorithm to obtain the offloading policy for the associated tasks and the resource allocation scheme.
FIG. 2 is a flow chart of the DAG task offloading and resource allocation method. In the distributed computing environment, a monitor first collects the resource information of the edge servers in the area, builds a DAG model from the task requests of the local devices, and generates a priority index list for the DAGs. Following the list order, the best offloading location is selected for each subtask in view of the computing resources of the different devices and the DAG priorities, and appropriate computing resources are allocated.
In the process of obtaining the offloading decisions, the optimization objective is further constructed as the average response time for the system to complete all DAG tasks:
wherein,
1) ft denotes the completion time of a subtask; the completion time of the ending task of G x is therefore also the response time of G x. ft is calculated from F x, the computing power of the local device I x, from F k, the computing power of the edge server P k, and from the computing resource share of G x on edge server P k.
2) When a subtask is executed on P k, its execution cannot begin until both the required dependent data and P k itself are available. Its start time is therefore expressed in terms of the set of subtasks that send it dependent data, ava(G x, P k) (the time at which P k becomes available for the subtasks of G x, which depends on the last subtask of G x processed by P k), ava(G x, P 0) (the time at which the local device I x becomes available), and the transmission time of the dependent data.
3) The computing resource allocation ratio of G x on edge server P k is related to the DAG priorities and is expressed in terms of binary variables recording whether each G n offloads subtasks to P k, where N is the total number of DAG tasks (see the sketch after this list).
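The published expressions for the completion time, start time, and allocation ratio appear only as images, so the sketch below implements the stated relationships under assumed (and explicitly commented) functional forms; it is an interpretation, not the patent's exact formulas.

```python
def earliest_times(workload, f_k, beta, ava, dep_finish, dep_data, rate):
    """Earliest start/finish time of a subtask of G_x on edge server P_k (illustrative).

    workload   : computation size of the subtask
    f_k        : computing power F_k of P_k
    beta       : computing resource share of G_x on P_k
    ava        : ava(G_x, P_k), time at which P_k becomes available for G_x
    dep_finish : finish times of the subtasks that send it dependent data
    dep_data   : sizes of the corresponding dependent data
    rate       : average transmission rate of dependent data in the system
    """
    arrivals = [ft + d / rate for ft, d in zip(dep_finish, dep_data)]
    st = max([ava] + arrivals)           # wait for the server and for every dependency
    ft = st + workload / (f_k * beta)    # assumed execution-time form
    return st, ft

def resource_share(psi, offloads_to_k, x):
    """Assumed priority-weighted share of P_k granted to G_x.

    psi           : psi[n] is the priority of DAG G_n
    offloads_to_k : offloads_to_k[n] == 1 if G_n offloads at least one subtask to P_k
    """
    total = sum(psi[n] for n in psi if offloads_to_k.get(n, 0))
    return psi[x] / total if offloads_to_k.get(x, 0) and total > 0 else 0.0
```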
For the multi-user DAG task offloading problem, a greedy algorithm and a priority-based resource allocation algorithm are adopted to perform the target optimization. The associated task offloading problem is mapped onto the DAG task model; after the DAG tasks are generated, the subtask priorities and the DAG priorities are calculated to obtain an ordered priority index list over the DAGs. The offloading decision of each subtask is then obtained by the greedy algorithm, following the list index order. Finally, the DAG offloading record of each edge server is obtained from the offloading decisions, and the computing resource share of each subtask is allocated. The overall process of the priority list indexing mechanism is as follows:
1) Initialization of
Collect and process the data elements of all associated tasks, model the corresponding DAG task graphs, and collect the resource information of the edge servers under the current conditions.
2) Index operation
The priority of each DAG and its subtasks is calculated, building a priority list PL.
3) Greedy selection
Take out each subtask in PL index order, offload it to the device with the earliest completion time, and update the subtask offloading policy and the edge server's DAG offloading record; a sketch of this step is given after this list.
4) Resource allocation
Compute each DAG's computing resource share on the edge server from the edge server's DAG offloading record.
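A compact sketch of the greedy selection step: subtasks are taken in PL/Q order, each is offloaded to the device with the earliest completion time, and the per-server DAG offloading record is kept for the subsequent resource allocation. The helper `completion_time` is an assumption standing in for the earliest-finish-time evaluation under the current partial schedule.

```python
def greedy_offload(pl, q, devices, completion_time):
    """Greedy offloading over the priority index list (illustrative).

    pl              : DAG priority list, i.e. local-device serial numbers x in descending psi order
    q               : q[x] is the subtask priority list of DAG G_x
    devices         : candidate locations (the local device and the edge servers)
    completion_time : completion_time(x, i, d) -> earliest finish time of subtask i
                      of G_x on device d given the decisions made so far
    """
    decision = {}                          # (x, i) -> chosen device
    record = {d: set() for d in devices}   # device -> DAGs that offloaded to it
    for x in pl:                           # DAGs in priority order
        for i in q[x]:                     # subtasks in priority order
            best = min(devices, key=lambda d: completion_time(x, i, d))
            decision[(x, i)] = best
            record[best].add(x)            # DAG offloading record of the device
    return decision, record
```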
Through the above greedy algorithm and resource allocation, the method efficiently solves the multi-user DAG task offloading problem and achieves optimal utilization of resources, thereby improving task execution efficiency in the edge computing environment.
Offloading operation
The users' task offloading operations are executed according to the formulated offloading policy and resource allocation scheme.
The task offloading and resource allocation process when the invention is adopted is as follows:
1) An initialization stage: the processor receives task processing requests from various devices;
2) Task offloading decision stage: construct the DAG priority list from the DAG priorities and subtask priorities, search with the greedy algorithm for the offloading decision that gives each subtask the minimum earliest completion time, and obtain the DAG offloading record of each edge server.
3) Computing resource allocation stage: allocate the computing resources of the edge servers according to the obtained DAG offloading records.
4) Execution stage: offload all DAG tasks according to the task offloading decisions and the computing resource allocation strategy.
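Putting the four stages together, an end-to-end sketch using the helpers introduced above might look as follows; `compute_rank`, `compute_psi`, and `make_completion_time` are placeholders for the rank, ψ, and timing computations, which are not reproduced here.

```python
def schedule(dags, edge_servers, compute_rank, compute_psi, make_completion_time):
    # 1) Initialization: DAG models and server information are assumed already collected
    #    into `dags` (DAGTask objects) and `edge_servers` (server identifiers).
    rank = {g.device: compute_rank(g) for g in dags}   # subtask priorities per DAG
    psi = {g.device: compute_psi(g) for g in dags}     # DAG priorities

    # 2) Index operation: build the priority index list.
    q, pl = build_priority_lists(rank, psi)

    # 3) Greedy selection: earliest-completion-time offloading in list order.
    devices = ["local"] + list(edge_servers)
    decision, record = greedy_offload(pl, q, devices, make_completion_time(dags))

    # 4) Resource allocation: priority-weighted share of each edge server per DAG.
    shares = {k: {x: resource_share(psi, {n: int(n in record[k]) for n in psi}, x)
                  for x in psi}
              for k in edge_servers}
    return decision, shares
```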
This embodiment also provides a task offloading and resource allocation system based on the priority list indexing mechanism, which implements the above task offloading and resource allocation method and includes:
a system monitoring unit: for monitoring task information from the devices and resource information of the system;
an offloading planning unit: for planning the task offloading policy and the resource allocation scheme through the DAG priority list indexing mechanism according to the task information and the resource information;
a policy enforcement unit: for executing the users' task offloading operations according to the formulated task offloading policy and resource allocation scheme.
Simulation of
The data results of the method (PBGTS) of the present invention are compared with those of a differential evolution algorithm (MDE), a heterogeneous earliest-finish-time algorithm (MHEFT), a multi-application multi-task scheduling algorithm (MAMTS), a particle swarm optimization algorithm (PSO), and a genetic algorithm (GA). As can be seen from the comparison in FIG. 3, the influence of α on the average response time of the system stays within 2 s; through comparative analysis, α is set to 0.9 for the lightweight (Lw) DAGs, 0.875 for the computation-intensive (Cpi) DAGs, and 0.7 for the communication-intensive (Cmi) DAGs. As can be seen from FIG. 4, both the single-DAG response time and the system average response time of the present invention are better than those of the other algorithms. As can be seen from FIGS. 5 and 6, the advantages of the present invention are not affected when the computing environment changes; good results are obtained both when the number of edge servers is varied and when the number of local devices is varied. As can be seen from FIG. 7, the maximum number of logic cores affects the effectiveness of the present invention and of the other algorithms; through comparative analysis, the maximum numbers of logic cores for the lightweight, computation-intensive, and communication-intensive DAGs are set to 10, 7, and 6, respectively, for the best results.
The invention mainly addresses the offloading decision and resource allocation problem of DAG tasks in an edge computing environment. A priority-based task offloading decision system is designed which, on the premise of reducing overall delay, aims to give each subtask the earliest possible start time under the current conditions, finds a satisfactory task offloading scheme and resource allocation scheme in a distributed computing environment with limited computing resources, and thus provides a task offloading and resource allocation method based on a priority list indexing mechanism.
By comprehensively considering task priority and user priority, the invention better solves the DAG task offloading problem in a distributed computing environment: each subtask obtains the earliest finish time achievable under the current resource conditions, and the computing resources of the distributed computing environment are utilized to the greatest extent, thereby improving the execution efficiency and performance of the DAG tasks. By predicting and optimizing task offloading and resource allocation, overall delay is effectively reduced, resource utilization is improved, and efficient execution of the associated tasks and reasonable allocation of resources are achieved.
The present application is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present application are intended to be included in the scope of the present application. Therefore, the protection scope of the present application should be subject to the protection scope of the claims.

Claims (5)

1. A task offloading and resource allocation method based on a priority list indexing mechanism, characterized by comprising the following steps:
collecting the resource information of the edge servers in a designated area, constructing a DAG model from the task requests of the local devices, and generating a priority index list;
wherein constructing the DAG model from the task requests of the local devices and generating the priority index list comprises:
defining the priority of each subtask within a DAG and the priority of each DAG, sorting the subtasks of each DAG to obtain that DAG's subtask priority list, and sorting the subtask priority lists of all DAGs to obtain the DAG priority list, i.e. the priority index list;
the priority of each subtask in the DAG being defined as follows:
in the formula, the subtask generated by local device I x is identified by the subtask serial number i and the local-device serial number x, and its priority is determined by its computation size, the integrated computing power of the local device I x and the edge servers in the system, the set of subtasks to which it sends dependent data, the dependent data it transmits to each subtask in that set, the size of that dependent data, and the average transmission rate of dependent data in the system;
the priority of each DAG being defined as follows:
where G x is the DAG task generated by local device I x, and the priority ψ(x) of G x is defined in terms of the start subtask of G x and the system cost cost(G x) of G x;
based on the priority index list, performing target optimization with a greedy algorithm and a priority-based resource allocation algorithm to obtain an offloading policy for the associated tasks and a resource allocation scheme;
and executing the users' task offloading operations according to the offloading policy and the resource allocation scheme.
2. The task offloading and resource allocation method of claim 1, wherein obtaining the offloading policy for the associated tasks and the resource allocation scheme comprises:
searching, with a greedy algorithm and in the order of the priority index list, for the offloading decision that gives each subtask the minimum earliest completion time, and obtaining the DAG offloading record of each edge server;
and taking out each subtask in the order of the priority index list, offloading it to the device with the earliest completion time, updating the subtask offloading policy and the edge server's DAG offloading record, and calculating the DAG's computing resource share on the edge server from that record, thereby obtaining the offloading policy for the associated tasks and the resource allocation scheme.
3. The task offloading and resource allocation method of claim 2, wherein the computing resource share of the DAG on the edge server is calculated from the edge server's DAG offloading record as follows:
in the formula, the resource allocation proportion is expressed in terms of binary variables recording whether each DAG task generated by a local device offloads subtasks to the edge server, the priority ψ(n) of each DAG, and the total number N of DAG tasks.
4. The task offloading and resource allocation method of claim 1, wherein the objective function of the offloading policy and resource allocation scheme is the average response time for the system to complete all DAG tasks:
where ft is the completion time of a subtask, G x is the DAG task generated by local device I x, the average is taken over the completion times of the ending tasks of all G x, and N is the total number of DAG tasks.
5. A task offloading and resource allocation system based on a priority list indexing mechanism, comprising:
a system monitoring unit: for monitoring task information from the devices and resource information of the system;
an offloading planning unit: for planning the task offloading policy and the resource allocation scheme through a DAG priority list indexing mechanism according to the task information and the resource information,
which specifically comprises:
defining the priority of each subtask within a DAG and the priority of each DAG, sorting the subtasks of each DAG to obtain that DAG's subtask priority list, and sorting the subtask priority lists of all DAGs to obtain the DAG priority list, i.e. the priority index list;
the priority of each subtask in the DAG being defined as follows:
in the formula, the subtask generated by local device I x is identified by the subtask serial number i and the local-device serial number x, and its priority is determined by its computation size, the integrated computing power of the local device I x and the edge servers in the system, the set of subtasks to which it sends dependent data, the dependent data it transmits to each subtask in that set, the size of that dependent data, and the average transmission rate of dependent data in the system;
the priority of each DAG being defined as follows:
where G x is the DAG task generated by local device I x, and the priority ψ(x) of G x is defined in terms of the start subtask of G x and the system cost cost(G x) of G x;
a policy enforcement unit: for executing the users' task offloading operations according to the formulated task offloading policy and resource allocation scheme.
CN202311663171.8A 2023-12-06 2023-12-06 Task unloading and resource allocation method based on priority list indexing mechanism Active CN117632298B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311663171.8A CN117632298B (en) 2023-12-06 2023-12-06 Task unloading and resource allocation method based on priority list indexing mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311663171.8A CN117632298B (en) 2023-12-06 2023-12-06 Task unloading and resource allocation method based on priority list indexing mechanism

Publications (2)

Publication Number Publication Date
CN117632298A CN117632298A (en) 2024-03-01
CN117632298B true CN117632298B (en) 2024-05-31

Family

ID=90028618

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311663171.8A Active CN117632298B (en) 2023-12-06 2023-12-06 Task unloading and resource allocation method based on priority list indexing mechanism

Country Status (1)

Country Link
CN (1) CN117632298B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114661466A (en) * 2022-03-21 2022-06-24 东南大学 Task unloading method for intelligent workflow application in edge computing environment
CN116450241A (en) * 2023-04-20 2023-07-18 北京工业大学 Multi-user time sequence dependent service calculation unloading method based on graph neural network
CN116755882A (en) * 2023-06-16 2023-09-15 山东省计算中心(国家超级计算济南中心) Computing unloading method and system with dependency tasks in edge computing
CN116886703A (en) * 2023-03-24 2023-10-13 华南理工大学 Cloud edge end cooperative computing unloading method based on priority and reinforcement learning

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10552161B2 (en) * 2017-06-21 2020-02-04 International Business Machines Corporation Cluster graphical processing unit (GPU) resource sharing efficiency by directed acyclic graph (DAG) generation
US10956211B2 (en) * 2019-02-25 2021-03-23 GM Global Technology Operations LLC Method and apparatus of allocating automotive computing tasks to networked devices with heterogeneous capabilities
US11436051B2 (en) * 2019-04-30 2022-09-06 Intel Corporation Technologies for providing attestation of function as a service flavors
CN113220356B (en) * 2021-03-24 2023-06-30 南京邮电大学 User computing task unloading method in mobile edge computing

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114661466A (en) * 2022-03-21 2022-06-24 东南大学 Task unloading method for intelligent workflow application in edge computing environment
CN116886703A (en) * 2023-03-24 2023-10-13 华南理工大学 Cloud edge end cooperative computing unloading method based on priority and reinforcement learning
CN116450241A (en) * 2023-04-20 2023-07-18 北京工业大学 Multi-user time sequence dependent service calculation unloading method based on graph neural network
CN116755882A (en) * 2023-06-16 2023-09-15 山东省计算中心(国家超级计算济南中心) Computing unloading method and system with dependency tasks in edge computing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Task offloading strategy based on comprehensive trust evaluation in edge computing; 熊小峰 et al.; Acta Electronica Sinica (电子学报); 2022-09-30; Vol. 50, No. 09; pp. 2134-2145 *

Also Published As

Publication number Publication date
CN117632298A (en) 2024-03-01

Similar Documents

Publication Publication Date Title
Pham et al. Towards task scheduling in a cloud-fog computing system
US9268607B2 (en) System and method of providing a self-optimizing reservation in space of compute resources
US8918792B2 (en) Workflow monitoring and control system, monitoring and control method, and monitoring and control program
CN109788046B (en) Multi-strategy edge computing resource scheduling method based on improved bee colony algorithm
CN110069341B (en) Method for scheduling tasks with dependency relationship configured according to needs by combining functions in edge computing
CN114928607B (en) Collaborative task unloading method for polygonal access edge calculation
Natesha et al. GWOTS: Grey wolf optimization based task scheduling at the green cloud data center
Stavrinides et al. Cost-effective utilization of complementary cloud resources for the scheduling of real-time workflow applications in a fog environment
WO2015185938A1 (en) Network
Naik A processing delay tolerant workflow management in cloud-fog computing environment (DTWM_CfS)
Li et al. Collaborative content caching and task offloading in multi-access edge computing
CN117608840A (en) Task processing method and system for comprehensive management of resources of intelligent monitoring system
CN116302578B (en) QoS (quality of service) constraint stream application delay ensuring method and system
CN117579701A (en) Mobile edge network computing and unloading method and system
CN117632298B (en) Task unloading and resource allocation method based on priority list indexing mechanism
Patil et al. Resource allocation and scheduling in the cloud
Wang et al. Joint scheduling and offloading of computational tasks with time dependency under edge computing networks
Prasadhu et al. An efficient hybrid load balancing algorithm for heterogeneous data centers in cloud computing
Channappa et al. Multi-objective optimization method for task scheduling and resource allocation in cloud environment
Chiang et al. A load-based scheduling to improve performance in cloud systems
Mohamaddiah et al. Resource Selection Mechanism For Brokering Services In Multi-cloud Environment
Feng et al. Tango: Harmonious management and scheduling for mixed services co-located among distributed edge-clouds
Mohammadi et al. De-centralised dynamic task scheduling using hill climbing algorithm in cloud computing environments
Xu et al. The Task Scheduling Algorithm for Fog Computing in Intelligent Production Lines Based on DQN
Li et al. Edge–Cloud Collaborative Computation Offloading for Mixed Traffic

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant