CN112306642A - Workflow scheduling method based on stable matching game theory - Google Patents


Info

Publication number
CN112306642A
CN112306642A (application number CN202011329163.6A)
Authority
CN
China
Prior art keywords
task
virtual machine
workflow
tasks
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011329163.6A
Other languages
Chinese (zh)
Other versions
CN112306642B (en)
Inventor
贾兆红
潘磊
唐俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN202011329163.6A priority Critical patent/CN112306642B/en
Publication of CN112306642A publication Critical patent/CN112306642A/en
Application granted granted Critical
Publication of CN112306642B publication Critical patent/CN112306642B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45562 Creating, deleting, cloning virtual machine instances
    • G06F 2009/4557 Distribution of virtual machine instances; Migration and load balancing

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a workflow scheduling method based on stable matching game theory, which comprises the following steps. Step A: input the DAG graph of the workflow, the virtual machine pool V = {VM0, VM1, ..., VMm-1}, and the CCR value. Step B: calculate the rank value of each task and add the task with the largest rank value in each layer to the critical path CP. Step C: allocate tasks to the virtual machines based on stable matching game theory to obtain a scheduling scheme. Step D: optimize the scheduling scheme by traversing all tasks and copying, onto the virtual machine where the current task is located, the predecessor node that delays the start time of the current task. The invention has the advantages that: two local optimization strategies, based on the critical path and on task duplication, effectively reduce the maximum completion time of the workflow while comprehensively considering the task fairness problem, improving customer satisfaction.

Description

Workflow scheduling method based on stable matching game theory
Technical Field
The invention relates to the technical field of workflow scheduling, in particular to a workflow scheduling method based on a stable matching game theory.
Background
Cloud computing provides a new resource delivery and service provision model that offers a wide variety of computing and resource services, such as servers, storage capacity, and CPUs, as well as application services such as e-commerce and social networking running over the network. To exploit the resource advantages of the cloud computing service model, greatly save investment cost, and escape regional and temporal limits on resources, work tasks of many kinds can be executed in a cloud computing environment, such as the widely researched workflow tasks. A workflow is a series of interrelated, automatically executed business activities or tasks; workflows placed in a cloud computing environment are called cloud workflows. Tasks in scientific applications such as high-energy physics, gravitational waves, geography, bioinformatics, and astronomy are based on centralized control, with strong interdependencies among their data. Because the Quality of Service (QoS) requirements of users must be met to the greatest extent in a cloud computing environment, research on workflow task scheduling algorithms in the cloud is of great significance. The choice of workflow task scheduling strategy strongly influences the efficiency and performance of cloud computing: an improper scheduling strategy not only wastes resources but also fails to meet users' QoS requirements, so that neither cloud resource providers nor cloud service users achieve their goals.
Currently, most cloud workflow scheduling algorithms focus on the common goal of minimizing the total cost or the maximum completion time of the entire workflow. In reality, however, in workflows such as video monitoring, object tracking, and face recognition, each subtask has its own target, such as minimum response time or fastest processing speed. Some scheduling algorithms always allocate the currently optimal resources (maximum bandwidth, fastest processing speed, etc.) to tasks according to priority: the workflow scheduling method based on workflow throughput maximization disclosed in the invention patent with publication number CN103838627A, and the workflow scheduling method, multi-workflow scheduling method and system thereof disclosed in the invention patent with publication number CN103914754A, both complete the scheduling of workflows based on priorities and pay more attention to the processing order of multiple workflows. As a result, some tasks may not meet the clients' requirements, leading to unfair allocation. Unfair allocation of resources significantly reduces the satisfaction of some task objectives, thereby lowering customer satisfaction with cloud services. Therefore, while the global target of the workflow is pursued, fairness among the tasks in the workflow must also be considered; minimizing the completion time of the workflow under the premise of guaranteeing task fairness is of great significance.
Game Theory (GT) mainly studies strategic interaction among rational decision makers and is widely applied in fields such as logistics and systems science. Considering balanced task reliability, Yang et al. proposed a task scheduling algorithm based on a cooperative game model that ensures efficiency while reducing algorithmic complexity. To solve the task scheduling problem in grid computing, Gao et al. regarded the grid load-balancing problem as a non-cooperative game model and proposed a GT-based grid cost minimization algorithm. Experimental results show that game-based algorithms are better at solving the task scheduling problem. Wang et al. proposed a multi-objective workflow scheduling algorithm based on a dynamic game model to minimize the maximum completion time and total cost while maximizing the system fairness of workload distribution among heterogeneous cloud virtual machines. Sujana et al. defined the multi-objective workflow scheduling problem as a bi-objective sequential cooperative game model that minimizes execution time and economic cost under two constraints. Although GT has advantages in solving the workflow scheduling problem, existing research rarely considers the task fairness problem, and the effect and processing speed of existing algorithms still cannot meet the requirements of all users.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a workflow scheduling method based on the stable matching game theory in the GT model, so as to address the prior art's neglect of task fairness while simultaneously minimizing the completion time of the workflow.
The invention solves the technical problems through the following technical scheme: a workflow scheduling method based on a stable matching game theory comprises the following steps:
step A: input the DAG graph of the workflow, expressed as DAG = (T, E), where T = {t0, t1, ..., tn-1} represents the set of n tasks in the workflow and E represents the set of dependency relationships among the n tasks; if (ti, tj) ∈ E, task tj can be executed only after task ti completes execution and transfers its data to tj; task tj is a successor node of task ti, and task ti is a predecessor node of task tj;
the virtual machine pool V = {VM0, VM1, ..., VMm-1} represents a set of m virtual machines;
and CCR values;
step B: calculate the rank value of each task, and select the task with the largest rank value in each layer to add to the critical path CP;
step C: allocate tasks to the virtual machines based on the stable matching game theory to obtain a scheduling scheme;
step D: optimize the scheduling scheme: traverse all tasks, and copy the predecessor node that delays the start time of the current task onto the virtual machine where the current task is located.
The scheduling scheme is optimized based on the stable matching game theory and the task starting time, the problem that the task fairness is not considered in the prior art is solved, and the overall completion time of the workflow can be minimized.
Preferably, the task rank value in step B is calculated as:

rank(ti) = avgET(ti) + max_{tj ∈ succ(ti)} { avgTtran(ti, tj) + rank(tj) }, with rank(texit) = avgET(texit)

where succ(ti) is the set of successor nodes of task ti and texit is an exit task without successor nodes; avgET(ti) represents the average computation time of task ti and avgTtran(ti, tj) represents the average contact (transmission) time between task ti and task tj:

avgET(ti) = (1/m) Σ_{k=0}^{m-1} ET(ti, VMk)

avgTtran(ti, tj) = TTij / avgB

ET(ti, VMk) = si / pk

Ttran(ti, tj) = TTij / B(VMk, VMl)

where si represents the size of task ti, pk represents the processing capacity of virtual machine VMk, ET(ti, VMk) represents the computation time of task ti on virtual machine VMk, TTij represents the size of the data transmitted from predecessor node ti to successor node tj, avgB is the average bandwidth between virtual machines, and B(VMk, VMl) represents the bandwidth for transmitting data from virtual machine VMk to virtual machine VMl, task tj being processed on virtual machine VMl; when l = k, B(VMk, VMl) = 0 and Ttran(ti, tj) = 0.
Preferably, the DAG layering method in step B is:

ti.level = 0, if ti = tentry; otherwise ti.level = max_{tj ∈ pre(ti)} { tj.level } + 1

where ti.level represents the layer of task ti, pre(ti) represents the set of predecessor nodes of task ti, and tentry denotes an entry task without predecessor nodes.
Preferably, the method for allocating tasks in step C comprises the following steps:
step i: let l = 0;
step ii: if no unassigned layer l remains (all tasks are assigned), go to step x; otherwise add the tasks at layer l to the set TASK(l) = {ti | ti.level = l};
step iii: obtain the critical task tx of layer l, tx ∈ (TASK(l) ∩ CP); compute its completion time on each virtual machine VMk and sort the virtual machines from earliest to latest completion time to obtain the preference queue taskPreference(x) of task tx;
step iv: assign the critical task tx of layer l to the virtual machine VMk with the earliest completion time, update the start processing time ST, execution time ET and completion time FT of task tx, and delete task tx from the set TASK(l);
step v: if TASK(l) = ∅, let l = l + 1 and return to step ii; otherwise, let j = 0 and go to step vi;
step vi: obtain the first task TASK.get(0) in the set TASK(l) and generate the preference queue taskPreference(0) of task TASK.get(0);
step vii: take the j-th virtual machine VMu in the preference queue taskPreference(0); if u.waiting.size < threshold(u, l), assign task TASK.get(0) to virtual machine VMu and delete TASK.get(0) from the set TASK(l); if u.waiting.size = threshold(u, l), execute step viii, where u.waiting.size is the number of tasks waiting to be executed on virtual machine VMu;
step viii: for each task in the set TASK(l), sort by completion time on virtual machine VMu from earliest to latest to obtain the preference queue VMuPreference(l) of VMu for the layer-l tasks; obtain the position p of task TASK.get(0) in the preference queue VMuPreference(l); find the task b with the largest preference value on virtual machine VMu that belongs to layer l, and obtain the position q of task b in the preference queue VMuPreference(l), where all tasks on the virtual machine are numbered with preference values from 0 to u.waiting.size - 1 in processing order;
step ix: if p < q, replace task b with task TASK.get(0): update the ST, ET and FT of TASK.get(0), delete TASK.get(0) from the set TASK(l), and add task b back into TASK(l); return to step v; otherwise, let j = j + 1 and return to step vii;
step x: output the scheduling scheme S.
Preferably, the start processing time ST and the completion time FT of task ti on virtual machine VMk are respectively:

ST(ti, VMk) = max{ avail(VMk), max_{tj ∈ pre(ti)} [ FT(tj) + Ttran(tj, ti) ] }

FT(ti, VMk) = ST(ti, VMk) + ET(ti, VMk)

where avail(VMk) denotes the time at which virtual machine VMk becomes available.
Preferably, the threshold value of virtual machine VMk is calculated as:

threshold(k, l) = ⌈ nl × pk / Σ_{j=0}^{m-1} pj ⌉

where nv denotes the number of tasks at layer v.
Preferably, the method for optimizing the scheduling scheme in step D comprises:
step 1: let k = 0;
step 2: if k ≤ m - 1, obtain the first task t in the wait queue VMk.waiting of virtual machine VMk; otherwise, jump to step 7;
step 3: if the start time ST(t, VMk) of task t cannot be advanced by copying a predecessor (for example, task t has no predecessor node on another virtual machine), let k = k + 1 and return to step 2; otherwise, let p = 0, minST = +∞, minPredecessor = null, and go to step 4;
step 4: if p ≤ |pre(t)| - 1, tentatively copy the p-th predecessor of task t to virtual machine VMk, calculate the resulting start time ST'(t, VMk) of task t, and go to step 5; otherwise go to step 6;
step 5: if ST'(t, VMk) < minST, let minST = ST'(t, VMk) and minPredecessor = p; let p = p + 1 and return to step 4;
step 6: if minST < ST(t, VMk), copy task minPredecessor to virtual machine VMk; in either case, let k = k + 1 and return to step 2;
step 7: output the optimized scheduling scheme S'.
The invention also provides a workflow scheduling system based on the stable matching game theory, which comprises the following components:
an input module: inputs the DAG graph of the workflow, expressed as DAG = (T, E), where T = {t0, t1, ..., tn-1} represents the set of n tasks in the workflow and E represents the set of dependency relationships among the n tasks; if (ti, tj) ∈ E, task tj can be executed only after task ti completes execution and transfers its data to tj; task tj is a successor node of task ti, and task ti is a predecessor node of task tj;
the virtual machine pool V = {VM0, VM1, ..., VMm-1} represents a set of m virtual machines;
and CCR values;
a critical path extraction module: calculates the rank value of each task and selects the task with the largest rank value in each layer to add to the CP;
a scheduling module: allocating tasks to the virtual machines based on a stable matching game theory to obtain a scheduling scheme;
an optimization module: optimizes the scheduling scheme by traversing all tasks and copying the predecessor node that delays the start time of the current task onto the virtual machine where the current task is located.
The invention also provides an electronic processing device comprising at least one processor and a storage device storing at least one executable program; when the at least one executable program is executed by the at least one processor, the at least one processor implements the workflow scheduling method described above.
The invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, is capable of implementing the workflow scheduling method.
The workflow scheduling method based on the stable matching game theory has the following advantages: two local optimization strategies, based on the critical path and on task duplication, effectively reduce the maximum completion time of the workflow while comprehensively considering the task fairness problem, improving customer satisfaction.
Drawings
FIG. 1 is a DAG diagram of a CyberShake workflow provided by an embodiment of the present invention;
FIG. 2 is a DAG diagram of a workflow used in a comparative experiment provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram comparing SLR under different CCRs in small-scale workflows provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram comparing SLR under different CCRs in medium-scale workflows provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram comparing SLR under different CCRs in large-scale workflows provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram comparing AVU under different CCRs in small-scale workflows provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram comparing AVU under different CCRs in medium-scale workflows provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram comparing AVU under different CCRs in large-scale workflows provided by an embodiment of the present invention;
FIG. 9 is a diagram illustrating SLR comparison among different VM numbers and large-scale workflows according to an embodiment of the present invention;
FIG. 10 is a comparative illustration of AVU in different VM numbers and large-scale workflows as provided by an embodiment of the invention;
fig. 11 is a diagram illustrating comparison of VF provided by an embodiment of the present invention in different large-scale workflows.
Detailed Description
To make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention are described below in detail and completely with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
First, we introduce the workflow model. A workflow is usually represented by a Directed Acyclic Graph (DAG), denoted by the two-tuple DAG = (T, E), where T = {t0, t1, ..., tn-1} represents the set of n tasks and E represents the set of dependency relationships among the n tasks. If (ti, tj) ∈ E, then tasks ti and tj have a dependency: task tj can be executed only after task ti completes execution and transfers its data to tj; task tj is called a successor node of task ti, and task ti a predecessor node of task tj. The predecessor and successor node sets of task ti are denoted pre(ti) and succ(ti), respectively. A task without predecessor nodes is called an entry task tentry, and a task without successor nodes is called an exit task texit. There may be multiple entry or exit tasks in a DAG. The amount of data transferred between tasks is represented by the TT matrix, where TTi,j represents the amount of data transferred from task ti to task tj; when there is no dependency between the tasks, TTi,j = 0.
Suppose the cloud model has a Virtual Machine (VM) pool formed by m VMs, represented as the set V = {VM0, VM1, ..., VMm-1}. The processing capacities of the virtual machines are different and independent. The bandwidth between different virtual machines is represented by a matrix B, which is not necessarily symmetric, i.e., B(VMk, VMl) is not necessarily equal to B(VMl, VMk); when k = l, B(VMk, VMl) = 0. The time for task ti to transmit data to task tj is calculated from the amount of data transferred and the bandwidth between the virtual machines:

Ttran(ti, tj) = TTi,j / B(VMk, VMl)

where VMk and VMl are the virtual machines on which tasks ti and tj are located, respectively. When two tasks are executed on the same virtual machine, the data transmission time between them is ignored, i.e., Ttran(ti, tj) = 0.
The start time ST(ti, VMk) of task ti on VMk is calculated as:

ST(ti, VMk) = max{ avail(VMk), max_{tj ∈ pre(ti)} [ FT(tj) + Ttran(tj, ti) ] }

where avail(VMk) is the time at which VMk becomes available.

The execution time ET(ti, VMk) of task ti on VMk is calculated as:

ET(ti, VMk) = si / pk

where si represents the size of task ti and pk represents the processing capacity of VMk.

The completion time FT(ti, VMk) of task ti on VMk is calculated as:

FT(ti, VMk) = ST(ti, VMk) + ET(ti, VMk)
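The timing model above can be sketched in a few lines. This is a minimal illustration of the ET, Ttran, ST and FT formulas; the function names and the toy numbers are assumptions for demonstration, not the patented implementation:

```python
# Sketch of the timing model: ET = si/pk, Ttran = TTij/B (0 on the same VM),
# ST = max(VM availability, latest predecessor data arrival), FT = ST + ET.
# All names and numbers below are illustrative.

def exec_time(size, capacity):
    """ET(ti, VMk) = si / pk."""
    return size / capacity

def trans_time(tt, bandwidth, same_vm):
    """Ttran(ti, tj) = TTij / B(VMk, VMl); ignored on the same VM."""
    return 0.0 if same_vm else tt / bandwidth

def start_time(vm_avail, pred_ready):
    """ST = max(avail(VMk), max over predecessors of FT(tj) + Ttran(tj, ti))."""
    return max([vm_avail] + pred_ready)

def finish_time(st, et):
    """FT = ST + ET."""
    return st + et

# Toy example: a task of size 40 on a VM with capacity 8, whose single
# predecessor finished at t=3 on another VM and sends 14 data units over
# a link of bandwidth 7.
et = exec_time(40, 8)                               # 5.0
ready = 3 + trans_time(14, 7, same_vm=False)        # 3 + 2 = 5.0
st = start_time(vm_avail=4.0, pred_ready=[ready])   # max(4, 5) = 5.0
ft = finish_time(st, et)                            # 10.0
```

Note how the start time is dominated by the predecessor's data arrival (5.0) rather than the VM becoming free (4.0); this is exactly the situation the step-D duplication pass targets.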
The optimization objectives for this embodiment are as follows:
the completion time makespan of the workflow is the maximum completion time in all exit tasks, and is also the final time for all virtual machines to complete all tasks, and can be calculated by the following formula.
makespan=max{MS(VMk)},0≤k<m
where MS(VMk) represents the time at which VMk completes all of its tasks.
The quality of the solution obtained by the algorithm is evaluated using three indexes, namely the Scheduling Length Ratio (SLR), the average virtual machine resource utilization (AVU), and the fairness variance (VF), calculated as follows.
1) SLR. To prevent parameter differences from causing excessive differences in makespan, makespan must be normalized to a lower bound. Therefore, this embodiment normalizes the makespan obtained by the algorithm according to the following formula:

SLR = makespan / |CP|

where |CP| represents the length of the critical path; the smaller the SLR, the shorter the completion time of the workflow.
2) AVU. This index evaluates the average resource utilization of all virtual machines, i.e., the ratio of each virtual machine's busy time to makespan:

AVU = (1/m) Σ_{k=0}^{m-1} ( Σ_{ti ∈ VMk.waiting} ET(ti, VMk) ) / makespan

where VMk.waiting is the queue of tasks executed on VMk.
3) VF. This index evaluates the fairness gap between tasks. The satisfaction value Si of each task is the ratio between its expected execution time EET, i.e., its execution time on the virtual machine with the fastest processing capacity, and its actual execution time AET:

Si = EET(ti) / AET(ti)

VF = (1/n) Σ_{i=0}^{n-1} (Si − M)²

The larger Si, the higher the satisfaction of the task. The variance of all task satisfactions is then computed, where M is the average of all task satisfactions. The smaller the VF, the better the algorithm balances fairness between tasks.
Based on the above model, this embodiment provides a workflow scheduling method based on the stable matching game theory, which includes the following steps:
step A: input the DAG graph of the workflow, expressed as DAG = (T, E), where T = {t0, t1, ..., tn-1} represents the set of n tasks in the workflow and E represents the set of dependency relationships among the n tasks; if (ti, tj) ∈ E, task tj can be executed only after task ti completes execution and transfers its data to tj; task tj is a successor node of task ti, and task ti is a predecessor node of task tj;
the virtual machine pool V = {VM0, VM1, ..., VMm-1} represents a set of m virtual machines;
and a CCR value, where CCR is an empirical value representing the ratio of the average contact (communication) time to the average computation time of the workflow; the value of CCR determines whether the workflow type is computation-intensive or data-intensive, and may be determined from practitioner experience or by testing.
step B: calculate the rank value of each task, and select the task with the largest rank value in each layer to add to the CP; the rank value is calculated as:

rank(ti) = avgET(ti) + max_{tj ∈ succ(ti)} { avgTtran(ti, tj) + rank(tj) }, with rank(texit) = avgET(texit)

where succ(ti) is the set of successor nodes of task ti and texit is an exit task without successor nodes; avgET(ti) represents the average computation time of task ti and avgTtran(ti, tj) represents the average contact (transmission) time between task ti and task tj:

avgET(ti) = (1/m) Σ_{k=0}^{m-1} ET(ti, VMk)

avgTtran(ti, tj) = TTij / avgB

ET(ti, VMk) = si / pk

Ttran(ti, tj) = TTij / B(VMk, VMl)

where si represents the size of task ti, pk represents the processing capacity of virtual machine VMk, ET(ti, VMk) represents the computation time of task ti on virtual machine VMk, TTij represents the size of the data transmitted from predecessor node ti to successor node tj, avgB is the average bandwidth between virtual machines, and B(VMk, VMl) represents the bandwidth for transmitting data from virtual machine VMk to virtual machine VMl, task tj being processed on virtual machine VMl; when l = k, B(VMk, VMl) = 0 and Ttran(ti, tj) = 0.
The DAG layering method is:

ti.level = 0, if ti = tentry; otherwise ti.level = max_{tj ∈ pre(ti)} { tj.level } + 1

where ti.level represents the layer of task ti, pre(ti) represents the set of predecessor nodes of task ti, and tentry denotes an entry task without predecessor nodes.
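The rank recursion and the layering rule can be sketched on a toy DAG as follows; the graph, task sizes, capacities and precomputed average contact times are illustrative assumptions, not data from the embodiment:

```python
# Toy sketch of the upward-rank recursion and the layering rule.
# All names, the graph, and the numbers below are illustrative.

def avg_et(size, capacities):
    # average computation time of a task over all virtual machines
    return sum(size / p for p in capacities) / len(capacities)

def rank(t, succ, sizes, capacities, avg_tt):
    # rank(ti) = avgET(ti) + max over successors of (avgTtran(ti,tj) + rank(tj))
    base = avg_et(sizes[t], capacities)
    if not succ[t]:
        return base                          # exit task: rank = avgET
    return base + max(avg_tt[(t, j)] + rank(j, succ, sizes, capacities, avg_tt)
                      for j in succ[t])

def level(t, pre, memo=None):
    # entry tasks are at layer 0; otherwise 1 + the deepest predecessor layer
    if memo is None:
        memo = {}
    if t not in memo:
        memo[t] = 0 if not pre[t] else 1 + max(level(j, pre, memo) for j in pre[t])
    return memo[t]

succ = {0: [1, 2], 1: [3], 2: [3], 3: []}
pre = {0: [], 1: [0], 2: [0], 3: [1, 2]}
sizes = {0: 10, 1: 20, 2: 30, 3: 10}
caps = [5, 10]
avg_tt = {(0, 1): 1.0, (0, 2): 1.0, (1, 3): 2.0, (2, 3): 2.0}

print([level(t, pre) for t in range(4)])     # [0, 1, 1, 2]
print(rank(3, succ, sizes, caps, avg_tt))    # exit task: (2 + 1)/2 = 1.5
print(rank(0, succ, sizes, caps, avg_tt))    # 10.5
```

Within each layer, the task with the largest rank value would then be added to the CP, as step B prescribes.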
step C: allocate tasks to the virtual machines based on the stable matching game theory to obtain a scheduling scheme; the method specifically comprises the following steps:
step i: let l = 0;
step ii: if no unassigned layer l remains (all tasks are assigned), go to step x; otherwise add the tasks at layer l to the set TASK(l) = {ti | ti.level = l};
step iii: obtain the critical task tx of layer l, tx ∈ (TASK(l) ∩ CP); compute its completion time on each virtual machine VMk and sort the virtual machines from earliest to latest completion time to obtain the preference queue taskPreference(x) of task tx;
wherein the completion time FT of task ti on virtual machine VMk is:
FT(ti, VMk) = ST(ti, VMk) + ET(ti, VMk)
ST(ti, VMk) = max{ avail(VMk), max_{tj ∈ pre(ti)} [ FT(tj) + Ttran(tj, ti) ] }
where ST(ti, VMk) is the time at which task ti starts processing on virtual machine VMk and avail(VMk) is the time at which VMk becomes available;
step iv: assign the critical task tx of layer l to the virtual machine VMk with the earliest completion time, update the start processing time ST, execution time ET and completion time FT of task tx, and delete task tx from the set TASK(l);
step v: if TASK(l) = ∅, let l = l + 1 and return to step ii; otherwise, let j = 0 and go to step vi;
step vi: obtain the first task TASK.get(0) in the set TASK(l) and generate the preference queue taskPreference(0) of task TASK.get(0);
step vii: take the j-th virtual machine VMu in the preference queue taskPreference(0); if u.waiting.size < threshold(u, l), assign task TASK.get(0) to virtual machine VMu and delete TASK.get(0) from the set TASK(l); if u.waiting.size = threshold(u, l), execute step viii, where u.waiting.size is the number of tasks waiting to be executed on virtual machine VMu;
The threshold value of virtual machine VMk is calculated as:

threshold(k, l) = ⌈ nl × pk / Σ_{j=0}^{m-1} pj ⌉

where nv denotes the number of tasks at layer v. The threshold balances the load of the virtual machines and prevents too many tasks from being allocated to the same virtual machine: a threshold is set for each virtual machine according to the number of tasks in the current layer and the processing capacity of the virtual machine, so that virtual machines with stronger processing capacity can execute more tasks.
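Under the assumption that the threshold distributes the nl layer tasks in proportion to processing capacity, rounded up, the embodiment's five capacities (5, 8, 7, 9, 6) reproduce the all-ones thresholds shown in table 3 for the two-task first layer:

```python
import math

# Capacity-proportional threshold sketch: each VM's share of the n_l tasks
# in the current layer is weighted by its processing capacity, rounded up.
# The proportional form is an assumption consistent with the description.

def threshold(k, n_l, capacities):
    total = sum(capacities)
    return math.ceil(n_l * capacities[k] / total)

caps = [5, 8, 7, 9, 6]                          # the embodiment's 5 VMs

# First layer of the CyberShake example has 2 tasks (t2 and t13):
print([threshold(k, 2, caps) for k in range(5)])  # [1, 1, 1, 1, 1]

# A hypothetical 5-task layer gives the faster VMs (VM1, VM3) more slots:
print([threshold(k, 5, caps) for k in range(5)])  # [1, 2, 1, 2, 1]
```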
step viii: for each task in the set TASK(l), sort by completion time on virtual machine VMu from earliest to latest to obtain the preference queue VMuPreference(l) of VMu for the layer-l tasks; obtain the position p of task TASK.get(0) in the preference queue VMuPreference(l); find the task b with the largest preference value on virtual machine VMu that belongs to layer l, i.e. b.level = l, and obtain the position q of task b in the preference queue VMuPreference(l), where all tasks on the virtual machine are numbered with preference values from 0 to u.waiting.size - 1 in processing order;
step ix: if p < q, replace task b with task TASK.get(0): update the ST, ET and FT of TASK.get(0), delete TASK.get(0) from the set TASK(l), and add task b back into TASK(l); return to step v; otherwise, let j = j + 1 and return to step vii;
step x: output the scheduling scheme S.
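The proposal/rejection mechanics of steps v to ix can be sketched with a much-simplified model in which a VM prefers tasks it executes fastest and accumulated load stands in for the full ST/ET/FT bookkeeping. Everything here (names, capacities, sizes, thresholds) is an illustrative assumption, not the patented procedure itself:

```python
# Much-simplified sketch of the stable-matching allocation of one layer.
# Task side: VMs ordered by tentative finish time. VM side: at its threshold,
# a VM evicts the task it executes slowest if the proposer is faster;
# the evicted task re-enters the proposal queue (Gale-Shapley style).

def allocate_layer(tasks, sizes, capacities, thresholds):
    m = len(capacities)
    load = [0.0] * m                        # current busy time per VM
    assigned = {k: [] for k in range(m)}
    queue = list(tasks)
    while queue:
        t = queue.pop(0)
        # task-side preference queue: VMs sorted by tentative finish time of t
        prefs = sorted(range(m), key=lambda k: load[k] + sizes[t] / capacities[k])
        for k in prefs:
            if len(assigned[k]) < thresholds[k]:        # below threshold: accept
                assigned[k].append(t)
                load[k] += sizes[t] / capacities[k]
                break
            # at threshold: keep the proposer only if it executes faster than
            # the slowest currently assigned task, which is then rejected
            worst = max(assigned[k], key=lambda b: sizes[b] / capacities[k])
            if sizes[t] / capacities[k] < sizes[worst] / capacities[k]:
                assigned[k].remove(worst)
                assigned[k].append(t)
                load[k] += (sizes[t] - sizes[worst]) / capacities[k]
                queue.append(worst)         # rejected task proposes again
                break
    return assigned

caps = [5, 8]                               # processing capacities of two VMs
sizes = {0: 40, 1: 16, 2: 8}                # task sizes of one layer
out = allocate_layer([0, 1, 2], sizes, caps, thresholds=[1, 2])
print(out)                                  # {0: [2], 1: [0, 1]}
```

In the trace, task 1 first wins the slower VM0, is later displaced by the smaller task 2, and then re-proposes and settles on VM1, which still has a free slot; this displacement-and-reproposal loop is the stable-matching core of steps vii to ix.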
step D: optimize the scheduling scheme: traverse all tasks, and copy the predecessor node that delays the start time of the current task onto the virtual machine where the current task is located;
in the scheduling of a workflow, the current task may not be executed on the same virtual machine as its predecessor node; before the current task can execute, it must wait for the predecessor's data to be transmitted to its virtual machine. Copying an appropriate predecessor node onto the task's virtual machine therefore reduces the data transmission time between tasks and advances the start time of the current task.
Based on the above principle, the principle of the optimization performed by the present embodiment is as follows:
1) Task ti can only be copied to a virtual machine where one of its successor nodes is located.
2) Copying task ti to a virtual machine must not increase the completion time of the other tasks on that virtual machine.
The method specifically comprises the following steps:
step 1: let k = 0;
step 2: if k ≤ m-1, obtain the first task t in virtual machine VM_k's waiting queue VM_k.waiting; otherwise jump to step 7;
step 3: if the start time ST(t, VM_k) of task t cannot be advanced (for example, t has no predecessor nodes), let k = k+1 and return to step 2; otherwise let p = 0, minST = +∞, minPredecessor = null, and go to step 4;
step 4: if p ≤ |pre(t)|-1, tentatively copy the p-th predecessor of t to virtual machine VM_k, compute the resulting start time ST'(t, VM_k) of task t, and go to step 5; otherwise go to step 6;
step 5: if ST'(t, VM_k) < minST, let minST = ST'(t, VM_k) and minPredecessor = p; let p = p+1 and return to step 4;
step 6: if minST < ST(t, VM_k), copy the task minPredecessor to virtual machine VM_k; then let k = k+1 and return to step 2;
step 7: output the optimized scheduling scheme S'.
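The duplication pass above can be sketched compactly: for the first waiting task on each machine, tentatively copy each predecessor, keep the copy that minimizes the start time, and commit it only if it actually improves on the current start time. The callbacks (`first_task`, `start_time_with_copy`, etc.) are assumptions standing in for the patent's ST/ST' bookkeeping, not its exact data structures.

```python
def duplicate_predecessors(vms, first_task, predecessors, start_time,
                           start_time_with_copy, commit_copy):
    """One duplication pass over all virtual machines (steps 1-7).

    vms                       : iterable of VM ids
    first_task(k)             : first task in VM_k's waiting queue, or None
    predecessors(t)           : list of predecessor tasks of t
    start_time(t, k)          : current ST(t, VM_k)
    start_time_with_copy(t, p, k): ST'(t, VM_k) if predecessor p were copied
    commit_copy(p, k)         : actually copy predecessor p onto VM_k
    """
    for k in vms:                                  # steps 1-2
        t = first_task(k)
        if t is None or not predecessors(t):       # step 3: nothing to improve
            continue
        min_st, min_pred = float("inf"), None
        for p in predecessors(t):                  # steps 4-5: try each copy
            st = start_time_with_copy(t, p, k)
            if st < min_st:
                min_st, min_pred = st, p
        if min_st < start_time(t, k):              # step 6: commit only if better
            commit_copy(min_pred, k)
```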
The workflow scheduling method is described below using the CyberShake workflow shown in fig. 1 as an example. The CyberShake workflow contains 30 tasks and 52 edges; the task sizes and the calculated rank values are shown in Table 1:
table 1: task size and rank value of CyberShake workflow
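The rank values listed in Table 1 follow the upward-rank recursion defined in claim 2: a task's average execution time plus the maximum, over its successors, of average communication time plus the successor's rank (an exit task's rank is just its average execution time). A minimal memoized sketch; the two-task chain in the usage below is illustrative, not taken from Table 1:

```python
def rank(t, succ, avg_et, avg_tran, memo=None):
    """Upward rank: a task's average execution time plus the most
    expensive (communication + rank) path over its successors."""
    if memo is None:
        memo = {}
    if t not in memo:
        memo[t] = avg_et[t] + max(
            (avg_tran[(t, s)] + rank(s, succ, avg_et, avg_tran, memo)
             for s in succ.get(t, [])),
            default=0.0,   # exit task: rank = average execution time
        )
    return memo[t]
```

For a chain a → b with average execution times 2 and 3 and average transfer time 1, rank(b) = 3 and rank(a) = 2 + 1 + 3 = 6.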
The data transfer matrix TT between tasks is calculated according to:
TT(t_i, t_j) = s_i × CCR
In this embodiment CCR = 0.4, and the computing capacities of the 5 virtual machines are 5, 8, 7, 9 and 6; the bandwidth B between the virtual machines is shown in Table 2:
       VM_0  VM_1  VM_2  VM_3  VM_4
VM_0    0     7     8     9     6
VM_1    5     0     8     7     5
VM_2    7     6     0     8     4
VM_3    7     8     6     0     5
VM_4    9     7     6     4     0
table 2: bandwidth between virtual machines for the CyberShake workflow
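The quantities above combine into small helpers: TT(t_i, t_j) = s_i × CCR gives the transfer volume, execution time is size over speed, and transfer time is volume over bandwidth (zero on the same machine). A sketch using the embodiment's CCR = 0.4, the stated capacities and the Table 2 bandwidths; the helper names and the sample task size 14 are illustrative assumptions:

```python
CCR = 0.4
speeds = [5, 8, 7, 9, 6]                  # p_k of VM_0..VM_4 (from the text)
B = [[0, 7, 8, 9, 6],                     # Table 2: bandwidth B[k][l]
     [5, 0, 8, 7, 5],
     [7, 6, 0, 8, 4],
     [7, 8, 6, 0, 5],
     [9, 7, 6, 4, 0]]

def transfer_data(s_i):
    """TT(t_i, t_j) = s_i * CCR: data volume a task sends to a successor."""
    return s_i * CCR

def exec_time(s_i, k):
    """ET(t_i, VM_k) = s_i / p_k."""
    return s_i / speeds[k]

def tran_time(s_i, k, l):
    """Transfer time = TT_ij / B[k][l]; zero when both tasks share a VM."""
    return 0.0 if k == l else transfer_data(s_i) / B[k][l]
```

For a hypothetical task of size 14, the transfer volume is 14 × 0.4 = 5.6, execution on the fastest machine VM_3 takes 14/9 ≈ 1.56, and shipping its output from VM_0 to VM_3 takes 5.6/9 ≈ 0.62 time units.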
As can be seen from Table 1, the critical path of the CyberShake workflow is {t_2, t_5, t_6, t_0}; the task allocation of the first and second layers is used below as an example.
       VM.preference  VM.threshold
VM_0   2,13           1
VM_1   2,13           1
VM_2   2,13           1
VM_3   2,13           1
VM_4   2,13           1
table 3: preference queues of the first-layer virtual machines for the CyberShake workflow
In the first layer, each virtual machine's preference queue is obtained from the completion times of the tasks on that virtual machine. Since first-layer tasks have no predecessor nodes, the start time of every task is 0 and, considering only task completion time, all virtual machines generate the same preference queue. The first critical path task to be allocated is assigned directly to the virtual machine with the highest processing speed, so its preference queue need not be computed. After task t_2 has been allocated, the preference queue of task t_13, obtained from its completion time on each virtual machine, is shown in Table 4; task t_13 is then assigned to virtual machine VM_1.
Unassigned tasks   Task   Task.preference   VM   VM.waiting   Task.ST   Task.FT
2,13               2      —                 2    2            0         5.11
13                 13     2,3,5,1,4         1    13           0         5.25
Table 4: preference queue for cybersheke workflow first layer tasks
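Because every first-layer task starts at time 0, a task's preference queue over virtual machines reduces to ordering the machines by its finish time there (start plus size over speed), i.e. by descending processing speed when nothing is queued. A minimal sketch with the embodiment's speeds; 0-based VM indices and the task size 14 are illustrative assumptions:

```python
speeds = {0: 5, 1: 8, 2: 7, 3: 9, 4: 6}    # VM index -> processing capacity

def vm_preference(size, ready=None):
    """Preference queue of one task: VMs sorted by the task's finish time.

    ready maps VM index -> earliest start there (all 0 for first-layer tasks).
    """
    ready = ready if ready is not None else {k: 0.0 for k in speeds}
    return sorted(speeds, key=lambda k: ready[k] + size / speeds[k])

# All start times are 0, so the queue is just descending speed:
print(vm_preference(14.0))                  # [3, 1, 2, 4, 0]
```

Once earlier assignments make some machines busy (nonzero `ready` times), the ordering changes accordingly, which is why second-layer preference queues differ per task.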
Because the second-layer tasks have predecessor nodes and the completion-time order of the tasks differs across virtual machines, the virtual machines' preference queues differ. First, the critical path task t_5 is assigned to VM_3, the virtual machine on which it finishes earliest, and the remaining tasks are then allocated in sequence. When task t_22 is being assigned, VM_3 is already full; by comparing task t_22 with the tasks already on VM_3, the task with the lower preference value, t_9, is removed and reallocated.
       VM.preference                            VM.threshold
VM_0   22,16,14,26,18,20,11,24,7,5,3,9,28       3
VM_1   11,7,5,3,9,22,14,16,26,18,20,24,28       4
VM_2   11,22,7,16,14,5,3,26,18,20,9,24,28       3
VM_3   22,16,14,26,18,20,24,11,7,5,3,9,28       4
VM_4   22,16,14,11,26,7,18,5,20,3,24,9,28       3
Table 5: preference queues of the second-layer virtual machines for the CyberShake workflow
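The per-VM thresholds in Tables 3 and 5 are consistent with a capacity-proportional rule, threshold(u, l) = ⌈(Σ_{v≤l} n_v) · p_u / Σ_k p_k⌉, where n_v is the number of tasks at layer v; this closed form is reconstructed from the two tables (the formula itself appears only as an image in the claims), so treat it as an inference. The sketch below reproduces both rows of thresholds:

```python
import math

speeds = [5, 8, 7, 9, 6]     # p_k for VM_0..VM_4
layer_sizes = [2, 13]        # n_v: 2 tasks in the first layer, 13 in the second

def threshold(u, l):
    """VM_u's capacity share of all tasks up to layer l, rounded up."""
    cum = sum(layer_sizes[: l + 1])
    return math.ceil(cum * speeds[u] / sum(speeds))

print([threshold(u, 0) for u in range(5)])   # [1, 1, 1, 1, 1]   (Table 3)
print([threshold(u, 1) for u in range(5)])   # [3, 4, 3, 4, 3]   (Table 5)
```

Faster machines (VM_1 and VM_3) receive the larger thresholds, which matches the intent of assigning more tasks to machines with more processing capacity.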
Table 6: preference queues of the second-layer tasks of the CyberShake workflow
In this way, the scheduling result of the workflow scheduling method of this embodiment on the DAG is obtained, as shown in Table 7, where "+" and "-" represent the idle time of a virtual machine and the execution time of a task respectively; replicated tasks are shown in bold.
Table 7: Gantt chart of the CyberShake workflow
This embodiment also compares the proposed workflow scheduling method (SM-CPTD) with the existing TDA, GSS, NMMWS and Min-Min algorithms on the four benchmark workflow structures CyberShake, Epigenomics, LIGO and Montage shown in fig. 2. TDA is an algorithm that minimizes the total completion time of a workflow based on task replication and task grouping. GSS minimizes the completion time and maximizes the average resource utilization of the virtual machines based on the granularity of the tasks in the workflow. NMMWS computes a dynamic threshold for each task by min-max normalization to bound the makespan of each workflow and the utilization of cloud resources. Min-Min is a commonly used workflow scheduling algorithm, generally considered one of the most efficient benchmark scheduling algorithms, and directly reflects the relative performance of the different algorithms. The time complexities of the four algorithms are O(n³), O(n²m), O(n³ml) and O(n²m) respectively, while the time complexity of the proposed SM-CPTD algorithm is O(n²l), similar to Min-Min. In addition, to verify the two local optimization strategies proposed in this application, the original stable matching algorithm is also included as a comparison algorithm, denoted SM.
The parameters used in this application and their value ranges are shown in Table 8. The task sizes, the processing speeds of the virtual machines and the bandwidths between the virtual machines are all randomly generated from uniform distributions. Each instance is run 10 times and the average is taken. All algorithms are implemented in Java on Eclipse; the running environment is an Intel Core i7-9750H CPU @ 2.60GHz with 8GB RAM and the Microsoft Windows 10 Professional 64-bit operating system.
Table 8: parameters and values
To study the influence of CCR and of the number m of virtual machines on the algorithms, the numbers of virtual machines for small, medium and large workflows are set to 5, 10 and 50 respectively when the influence of CCR is considered; similarly, when the influence of m is considered, CCR is set to 1. To study the fairness of the different algorithms across workflows, CCR is set to 1 and m to 50.
As is clear from FIGS. 3-5, as CCR increases from 0.4 to 2 the data transfer time between tasks grows, so the SLR values of all algorithms increase and the workflow completion times increase with them. However, for workflows of all sizes, SM-CPTD obtains the smallest SLR among the five algorithms, i.e. the smallest workflow completion time. GSS performs similarly to NMMWS, and in most cases both of them, together with Min-Min, outperform TDA. Furthermore, the results of SM-CPTD versus SM show that the two local optimization strategies effectively reduce the workflow completion time, especially for data-intensive workflows.
In TDA, a large number of tasks must be duplicated to reduce the transmission time between tasks, which causes data redundancy and extra task execution time; it therefore performs better on data-intensive workflows than on compute-intensive ones. The performance of NMMWS depends on the task sizes and the processing power of the virtual machines; however, NMMWS performs poorly on small-scale workflows, since good batch-processing results are difficult to obtain there.
As can be seen from FIGS. 6-8, as the CCR value increases a task must wait longer for its data to be transmitted, so the AVU values of all algorithms decrease. Unlike the SLR results, TDA performs best on AVU, because its large number of replicated tasks fully exploits the idle time of the virtual machines and thus raises their utilization. SM-CPTD is inferior to TDA on AVU but superior to GSS, NMMWS and Min-Min. Compared with SM, SM-CPTD, like TDA, exploits the idle time of the virtual machines and improves their resource utilization.
As can be seen from FIGS. 9-10, for large workflows SM-CPTD obtains the smallest SLR regardless of the number of virtual machines. As the number of virtual machines increases, their parallel processing capacity grows; the workflow completion time falls, but the utilization of the virtual machines falls as well. While SM-CPTD performs worse than TDA on AVU, it outperforms GSS, NMMWS and Min-Min on all instance groups.
In addition, fig. 11 shows that SM has the smallest VF across the different large-scale workflows, meaning that the stable matching algorithm balances the fairness of the individual tasks more effectively than the other four algorithms. However, the two added local optimization strategies affect the fairness of some tasks, so the VF value of SM-CPTD is slightly larger than that of SM.
In summary, the proposed SM-CPTD algorithm outperforms all the other compared algorithms, and the two local optimization strategies based on the critical path and task duplication effectively reduce the makespan of the workflow. When CCR = 1, the average running times of SM-CPTD for small, medium and large-scale workflows of the four structures lie in the ranges 10ms-20ms, 30ms-40ms and 1200ms-1400ms respectively. SM-CPTD therefore has good allocation efficiency and can be applied in online workflow scheduling scenarios.
This embodiment also provides a workflow scheduling system based on stable matching game theory, comprising:
an input module: the DAG graph of the input workflow is expressed as DAG = (T, E), where T = {t_0, t_1, ..., t_{n-1}} represents the set of n tasks in the workflow and E represents the set of dependency relationships among the n tasks; if (t_i, t_j) ∈ E, then task t_j can be executed only after task t_i finishes execution and transmits its data to t_j; task t_j is a successor node of task t_i, and task t_i is a predecessor node of task t_j;
a virtual machine pool V = {VM_0, VM_1, ..., VM_{m-1}} represents the set of m virtual machines;
and CCR values;
a critical path extraction module: calculates the rank value of each task and selects the task with the largest rank value at each layer to add to the CP;
a scheduling module: allocates tasks to the virtual machines based on the stable matching game theory to obtain a scheduling scheme;
an optimization module: optimizes the scheduling scheme by traversing all tasks and copying the predecessor node that determines the current task's start time to the virtual machine where the current task is located.
This embodiment also provides an electronic processing device comprising at least one processor and a storage device storing at least one executable program; when the at least one executable program is executed by the at least one processor, the at least one processor implements the following method:
step A: the DAG graph of the input workflow is expressed as DAG = (T, E), where T = {t_0, t_1, ..., t_{n-1}} represents the set of n tasks in the workflow and E represents the set of dependency relationships among the n tasks; if (t_i, t_j) ∈ E, then task t_j can be executed only after task t_i finishes execution and transmits its data to t_j; task t_j is a successor node of task t_i, and task t_i is a predecessor node of task t_j;
a virtual machine pool V = {VM_0, VM_1, ..., VM_{m-1}} represents the set of m virtual machines;
and CCR values;
step B: calculate the rank value of each task, and select the task with the largest rank value at each layer to add to the CP;
step C: allocate tasks to the virtual machines based on the stable matching game theory to obtain a scheduling scheme;
step D: optimize the scheduling scheme by traversing all tasks and copying the predecessor node that determines the current task's start time to the virtual machine where the current task is located.
This embodiment also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the following method:
step A: the DAG graph of the input workflow is expressed as DAG = (T, E), where T = {t_0, t_1, ..., t_{n-1}} represents the set of n tasks in the workflow and E represents the set of dependency relationships among the n tasks; if (t_i, t_j) ∈ E, then task t_j can be executed only after task t_i finishes execution and transmits its data to t_j; task t_j is a successor node of task t_i, and task t_i is a predecessor node of task t_j;
a virtual machine pool V = {VM_0, VM_1, ..., VM_{m-1}} represents the set of m virtual machines;
and CCR values;
step B: calculate the rank value of each task, and select the task with the largest rank value at each layer to add to the CP;
step C: allocate tasks to the virtual machines based on the stable matching game theory to obtain a scheduling scheme;
step D: optimize the scheduling scheme by traversing all tasks and copying the predecessor node that determines the current task's start time to the virtual machine where the current task is located.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A workflow scheduling method based on a stable matching game theory, characterized by comprising the following steps:
step A: the DAG graph of the input workflow is expressed as DAG = (T, E), where T = {t_0, t_1, ..., t_{n-1}} represents the set of n tasks in the workflow and E represents the set of dependency relationships among the n tasks; if (t_i, t_j) ∈ E, then task t_j can be executed only after task t_i finishes execution and transmits its data to t_j; task t_j is a successor node of task t_i, and task t_i is a predecessor node of task t_j;
a virtual machine pool V = {VM_0, VM_1, ..., VM_{m-1}} represents the set of m virtual machines;
and CCR values;
step B: calculate the rank value of each task, and select the task with the largest rank value at each layer to add to the CP;
step C: allocate tasks to the virtual machines based on the stable matching game theory to obtain a scheduling scheme;
step D: optimize the scheduling scheme by traversing all tasks and copying the predecessor node that determines the current task's start time to the virtual machine where the current task is located.
2. The workflow scheduling method based on the stable matching game theory as claimed in claim 1, wherein the rank value of a task in step B is calculated as:

rank(t_i) = avgET(t_i) + max_{t_j ∈ succ(t_i)} ( avgT_tran(t_i, t_j) + rank(t_j) ),  with rank(t_exit) = avgET(t_exit)

wherein succ(t_i) is the set of successor nodes of task t_i, t_exit is the exit task without successor nodes, avgET(t_i) represents the average computation time of task t_i, and avgT_tran(t_i, t_j) represents the average communication time between task t_i and task t_j;
ET(t_i, VM_k) = s_i / p_k

avgET(t_i) = (1/m) × Σ_{k=0}^{m-1} ET(t_i, VM_k)

T_tran(t_i, t_j) = TT_ij / B_{VM_k, VM_l}

avgT_tran(t_i, t_j) = the average of T_tran(t_i, t_j) over all pairs of virtual machines

wherein s_i represents the size of task t_i, p_k represents the processing capacity of virtual machine VM_k, ET(t_i, VM_k) represents the computation time of task t_i on virtual machine VM_k, TT_ij represents the amount of data transferred from predecessor node t_i to successor node t_j, and B_{VM_k, VM_l} represents the bandwidth for transferring data from virtual machine VM_k to virtual machine VM_l, task t_j being processed on virtual machine VM_l; when l = k, B_{VM_k, VM_l} = 0 and T_tran(t_i, t_j) = 0.
3. The workflow scheduling method based on the stable matching game theory as claimed in claim 2, wherein the DAG layering method in step B is:

t_i.level = 0 if t_i = t_entry; otherwise t_i.level = 1 + max_{t_j ∈ pre(t_i)} t_j.level

wherein t_i.level denotes the layer at which task t_i is located, pre(t_i) represents the set of predecessor nodes of task t_i, and t_entry denotes the entry task without predecessor nodes.
4. The workflow scheduling method based on the stable matching game theory as claimed in claim 3, wherein the method for allocating tasks in step C comprises the following steps:
step i: let l = 0;
step ii: if the set of tasks at layer l is empty, go to step x; otherwise add the tasks at layer l to the set task(l) = {t_i | t_i.level = l};
step iii: obtain the critical task t_x of layer l, t_x ∈ (task(l) ∩ CP); sort the completion times of t_x on each virtual machine VM_k from earliest to latest to obtain the preference queue task.preference(x) of task t_x;
step iv: allocate the critical task t_x of layer l to the virtual machine VM_k with the earliest completion time, update the start processing time ST, the execution time ET and the completion time FT of task t_x, and delete task t_x from the set task(l);
step v: if task(l) = ∅, let l = l+1 and return to step ii; otherwise let j = 0 and go to step vi;
step vi: obtain the first task task.get(0) in the set task(l) and generate its preference queue task.preference(0);
step vii: take the j-th virtual machine VM_u in the preference queue task.preference(0); if u.waiting.size < threshold(u, l), allocate task.get(0) to virtual machine VM_u and delete task.get(0) from the set task(l); if u.waiting.size = threshold(u, l), execute step viii, where u.waiting.size is the number of tasks waiting to be executed on virtual machine VM_u;
step viii: for each task in the set task(l), sort the tasks by their completion time on virtual machine VM_u from earliest to latest to obtain VM_u's preference queue VM_u.preference(l) for the layer-l tasks; obtain the position p of task task.get(0) in VM_u.preference(l); find the task b with the largest preference value q among the tasks on VM_u that are at layer l, i.e. b.level = l, where all tasks on the virtual machine are numbered from 0 to u.waiting.size-1 in processing order as their preference values;
step ix: if p < q, replace task b with task.get(0), update the ST, ET and FT of task.get(0), delete task.get(0) from the set task(l), add task b back into task(l), and return to step v; otherwise let j = j+1 and return to step vii;
step x: output the scheduling scheme S.
5. The workflow scheduling method based on the stable matching game theory as claimed in claim 4, wherein the start processing time ST and the completion time FT of task t_i on virtual machine VM_k are respectively:

ST(t_i, VM_k) = max{ avail(VM_k), max_{t_j ∈ pre(t_i)} ( FT(t_j) + T_tran(t_j, t_i) ) }

FT(t_i, VM_k) = ST(t_i, VM_k) + ET(t_i, VM_k)

wherein avail(VM_k) denotes the earliest time at which virtual machine VM_k is available.
6. The workflow scheduling method based on the stable matching game theory as claimed in claim 5, wherein the threshold of virtual machine VM_u is calculated as:

threshold(u, l) = ⌈ ( Σ_{v=0}^{l} n_v ) × p_u / Σ_{k=0}^{m-1} p_k ⌉

wherein n_v denotes the number of tasks at layer v.
7. The workflow scheduling method based on the stable matching game theory as claimed in claim 6, wherein the method for optimizing the scheduling scheme in step D comprises the following steps:
step 1: let k = 0;
step 2: if k ≤ m-1, obtain the first task t in virtual machine VM_k's waiting queue VM_k.waiting; otherwise jump to step 7;
step 3: if the start time ST(t, VM_k) of task t cannot be advanced (for example, t has no predecessor nodes), let k = k+1 and return to step 2; otherwise let p = 0, minST = +∞, minPredecessor = null, and go to step 4;
step 4: if p ≤ |pre(t)|-1, tentatively copy the p-th predecessor of t to virtual machine VM_k, compute the resulting start time ST'(t, VM_k) of task t, and go to step 5; otherwise go to step 6;
step 5: if ST'(t, VM_k) < minST, let minST = ST'(t, VM_k) and minPredecessor = p; let p = p+1 and return to step 4;
step 6: if minST < ST(t, VM_k), copy the task minPredecessor to virtual machine VM_k; then let k = k+1 and return to step 2;
step 7: output the optimized scheduling scheme S'.
8. A workflow scheduling system based on stable matching game theory, characterized by comprising:
an input module: the DAG graph of the input workflow is expressed as DAG = (T, E), where T = {t_0, t_1, ..., t_{n-1}} represents the set of n tasks in the workflow and E represents the set of dependency relationships among the n tasks; if (t_i, t_j) ∈ E, then task t_j can be executed only after task t_i finishes execution and transmits its data to t_j; task t_j is a successor node of task t_i, and task t_i is a predecessor node of task t_j;
a virtual machine pool V = {VM_0, VM_1, ..., VM_{m-1}} represents the set of m virtual machines;
and CCR values;
a critical path extraction module: calculates the rank value of each task and selects the task with the largest rank value at each layer to add to the CP;
a scheduling module: allocates tasks to the virtual machines based on the stable matching game theory to obtain a scheduling scheme;
an optimization module: optimizes the scheduling scheme by traversing all tasks and copying the predecessor node that determines the current task's start time to the virtual machine where the current task is located.
9. An electronic processing device, characterized by comprising at least one processor and a storage device storing at least one executable program; when the at least one executable program is executed by the at least one processor, the at least one processor implements the method according to any one of claims 1-7.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1-7.
CN202011329163.6A 2020-11-24 2020-11-24 Workflow scheduling method based on stable matching game theory Active CN112306642B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011329163.6A CN112306642B (en) 2020-11-24 2020-11-24 Workflow scheduling method based on stable matching game theory


Publications (2)

Publication Number Publication Date
CN112306642A true CN112306642A (en) 2021-02-02
CN112306642B CN112306642B (en) 2022-10-14

Family

ID=74335639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011329163.6A Active CN112306642B (en) 2020-11-24 2020-11-24 Workflow scheduling method based on stable matching game theory

Country Status (1)

Country Link
CN (1) CN112306642B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678000A (en) * 2013-09-11 2014-03-26 北京工业大学 Computational grid balance task scheduling method based on reliability and cooperative game
CN107193658A (en) * 2017-05-25 2017-09-22 重庆工程学院 Cloud computing resource scheduling method based on game theory
CN107301500A (en) * 2017-06-02 2017-10-27 北京工业大学 A kind of workflow schedule method looked forward to the prospect based on critical path task
US20180121311A1 (en) * 2016-10-28 2018-05-03 Linkedin Corporation Identifying request-level critical paths in multi-phase parallel tasks
CN108108225A (en) * 2017-12-14 2018-06-01 长春工程学院 A kind of method for scheduling task towards cloud computing platform
US20190347603A1 (en) * 2018-05-14 2019-11-14 Msd International Gmbh Optimizing turnaround based on combined critical paths
CN110609736A (en) * 2019-07-30 2019-12-24 中国人民解放军国防科技大学 Deadline constraint scientific workflow scheduling method in cloud environment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YUANDOU WANG, JIAJIA JIANG, YUNNI XIA, QUANWANG WU, XIN LUO: "A multi-stage dynamic game-theoretic approach for multi-workflow scheduling on heterogeneous virtual machines from multiple infrastructure-as-a-service clouds", 《SPRINGER》 *
祝家钰等: "云计算环境下基于路径优先级的任务调度算法", 《计算机工程与设计》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113609170A (en) * 2021-07-21 2021-11-05 上海德衡数据科技有限公司 Online office work data processing method and system based on neural network
CN114385337A (en) * 2022-01-10 2022-04-22 杭州电子科技大学 Task grouping scheduling method for distributed workflow system
CN114385337B (en) * 2022-01-10 2023-10-20 杭州电子科技大学 Task grouping scheduling method for distributed workflow system

Also Published As

Publication number Publication date
CN112306642B (en) 2022-10-14

Similar Documents

Publication Publication Date Title
Karthick et al. An efficient multi queue job scheduling for cloud computing
Selvarani et al. Improved cost-based algorithm for task scheduling in cloud computing
Chunlin et al. Hybrid cloud adaptive scheduling strategy for heterogeneous workloads
Liu et al. Resource preprocessing and optimal task scheduling in cloud computing environments
CN109582448B (en) Criticality and timeliness oriented edge calculation task scheduling method
WO2019179250A1 (en) Scheduling method, scheduler, storage medium, and system
CN111431961B (en) Energy-saving task allocation method for cloud data center
CN104657221A (en) Multi-queue peak-alternation scheduling model and multi-queue peak-alteration scheduling method based on task classification in cloud computing
Tantalaki et al. Pipeline-based linear scheduling of big data streams in the cloud
Thaman et al. Green cloud environment by using robust planning algorithm
CN112306642B (en) Workflow scheduling method based on stable matching game theory
CN109815009B (en) Resource scheduling and optimizing method under CSP
Soni et al. A bee colony based multi-objective load balancing technique for cloud computing environment
CN114610474A (en) Multi-strategy job scheduling method and system in heterogeneous supercomputing environment
Li et al. Endpoint-flexible coflow scheduling across geo-distributed datacenters
Singh et al. A comparative study of various scheduling algorithms in cloud computing
Maurya Resource and task clustering based scheduling algorithm for workflow applications in cloud computing environment
Dubey et al. QoS driven task scheduling in cloud computing
Chatterjee et al. A multi-objective deadline-constrained task scheduling algorithm with guaranteed performance in load balancing on heterogeneous networks
Hicham et al. Deadline and energy aware task scheduling in cloud computing
Alatawi et al. Hybrid load balancing approach based on the integration of QoS and power consumption in cloud computing
Edavalath et al. MARCR: Method of allocating resources based on cost of the resources in a heterogeneous cloud environment
Khanli et al. Grid_JQA: a QoS guided scheduling algorithm for grid computing
Rahman et al. Group based resource management and pricing model in cloud computing
Rajeshwari et al. Efficient task scheduling and fair load distribution among federated clouds

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant