CN112306642B - Workflow scheduling method based on stable matching game theory

Workflow scheduling method based on stable matching game theory

Info

Publication number
CN112306642B
CN202011329163.6A CN112306642B
Authority
CN
China
Prior art keywords
task
virtual machine
workflow
representing
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011329163.6A
Other languages
Chinese (zh)
Other versions
CN112306642A (en)
Inventor
贾兆红
潘磊
唐俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN202011329163.6A priority Critical patent/CN112306642B/en
Publication of CN112306642A publication Critical patent/CN112306642A/en
Application granted granted Critical
Publication of CN112306642B publication Critical patent/CN112306642B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45562 Creating, deleting, cloning virtual machine instances
    • G06F 2009/4557 Distribution of virtual machine instances; Migration and load balancing

Abstract

The invention provides a workflow scheduling method based on stable matching game theory, which comprises the following steps. Step A: input the DAG graph of the workflow, the virtual machine pool V = {VM_0, VM_1, ..., VM_{m-1}} and the CCR value. Step B: calculate the rank value of each task and add the task with the largest rank value in each layer to the critical path CP. Step C: allocate tasks to the virtual machines based on the stable matching game theory to obtain a scheduling scheme. Step D: optimize the scheduling scheme by traversing all tasks and copying the predecessor node that delays the start time of the current task onto the virtual machine where the current task is located. The invention has the advantages that the two local optimization strategies based on the critical path and task duplication effectively reduce the maximum completion time of the workflow, while the task fairness problem is taken into account, improving customer satisfaction.

Description

Workflow scheduling method based on stable matching game theory
Technical Field
The invention relates to the technical field of workflow scheduling, in particular to a workflow scheduling method based on a stable matching game theory.
Background
Cloud computing provides a new resource delivery and service provisioning model that can offer a wide variety of computing and resource services, such as servers, storage capacity and CPUs, as well as application services running over the network, such as e-commerce and social networks. To take advantage of the resources offered by the cloud computing service model, greatly reduce investment cost and remove the geographical and temporal limitations on resources, almost any work task can be executed in a cloud computing environment, for example the widely studied workflow tasks. A workflow is a series of interconnected, automatically executed business activities or tasks; workflows placed in a cloud computing environment are referred to as cloud workflows. Tasks in scientific applications such as high-energy physics, gravitational-wave analysis, geography, bioinformatics and astronomy are based on centralized control, with strong data interdependencies among them. Because the Quality of Service (QoS) requirements of users need to be met to the greatest extent in a cloud computing environment, research on workflow task scheduling algorithms in the cloud is of great significance. The choice of workflow task scheduling strategy has an important influence on the efficiency and performance of cloud computing; an improper scheduling strategy not only wastes resources but also fails to meet users' QoS requirements, so that neither cloud resource providers nor cloud service users can achieve their goals.
Currently, most cloud workflow scheduling algorithms focus on the common goal of minimizing the total cost or the maximum completion time of the entire workflow. In reality, however, in workflows such as video surveillance, object tracking and face recognition, each subtask has its own objective, such as the minimum response time or the fastest processing speed. Some scheduling algorithms always allocate the currently best resources (e.g. the largest bandwidth or the fastest processing speed) to tasks according to priority; for example, the workflow scheduling method based on workflow throughput maximization disclosed in the invention patent with publication number CN103838627A, and the workflow scheduling method, multi-workflow scheduling method and system disclosed in the invention patent with publication number CN103914754A, both complete the scheduling of workflows based on priorities and pay more attention to the processing order of multiple workflows. As a result, some tasks may not meet the clients' requirements, causing unfair allocation. Unfair allocation of resources can significantly reduce the satisfaction of some task objectives and thus affect customer satisfaction with the cloud service. Therefore, while the global objective of the workflow is considered, the fairness among the tasks within the workflow should also be considered; minimizing the completion time of the workflow under the premise of guaranteeing task fairness is of great significance.
Game Theory (GT) mainly studies strategic interaction among rational decision makers and is widely applied in fields such as logistics and systems science. Taking the reliability of tasks into account, Yang et al. proposed a task scheduling algorithm based on a cooperative game model that guarantees efficiency while reducing algorithm complexity. To solve the task scheduling problem in grid computing, Gao et al. regarded the grid load balancing problem as a non-cooperative game model and proposed a GT-based grid cost minimization algorithm. Experimental results show that game-based algorithms are well suited to solving task scheduling problems. Wang et al. proposed a multi-objective workflow scheduling algorithm based on a dynamic game model to minimize the maximum completion time and the total cost and to maximize the system fairness of workload distribution among heterogeneous cloud virtual machines. Sujana et al. defined the multi-objective workflow scheduling problem as a two-objective sequential cooperative game model that minimizes execution time and economic cost under two constraints. Although GT has certain advantages in solving the workflow scheduling problem, few existing studies consider the task fairness problem, and the effectiveness and processing speed of the existing algorithms still cannot meet the requirements of all users.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a workflow scheduling method based on the stable matching model in game theory (GT), so as to solve the problem that the prior art does not consider task fairness, and to minimize the completion time of the workflow.
The invention solves the technical problems through the following technical scheme: a workflow scheduling method based on a stable matching game theory comprises the following steps:
Step A: input the DAG graph of the workflow, denoted DAG = (T, E), where T = {t_0, t_1, ..., t_{n-1}} represents the set of n tasks in the workflow and E represents the set of dependencies among the n tasks; if e(t_i, t_j) ∈ E, then task t_j can be executed only after task t_i has completed and passed its data to t_j; task t_j is a successor node of task t_i, and task t_i is a predecessor node of task t_j;
the virtual machine pool V = {VM_0, VM_1, ..., VM_{m-1}}, representing the set of m virtual machines;
and the CCR value;
Step B: calculate the rank value of each task and add the task with the largest rank value in each layer to the critical path CP;
Step C: allocate tasks to the virtual machines based on the stable matching game theory to obtain a scheduling scheme;
Step D: optimize the scheduling scheme by traversing all tasks and copying the predecessor node that delays the start time of the current task onto the virtual machine where the current task is located.
The scheduling scheme is optimized based on the stable matching game theory and the task start times, which solves the problem that the prior art does not consider task fairness while minimizing the overall completion time of the workflow.
Preferably, the task rank value in step B is calculated as:

rank(t_i) = w̄_i, if t_i = t_exit
rank(t_i) = w̄_i + max_{t_j ∈ succ(t_i)} ( c̄_{i,j} + rank(t_j) ), otherwise

where succ(t_i) is the set of successor nodes of task t_i, t_exit is an exit task without successor nodes, w̄_i represents the average computation time of task t_i, and c̄_{i,j} represents the average communication time between task t_i and task t_j:

w̄_i = (1/m) Σ_{k=0}^{m-1} ET(t_i, VM_k)
ET(t_i, VM_k) = s_i / p_k
c̄_{i,j} = TT_{i,j} / B̄
T_tran(t_i, t_j) = TT_{i,j} / B(VM_k, VM_l)

where s_i represents the size of task t_i, p_k represents the processing capacity of virtual machine VM_k, ET(t_i, VM_k) represents the computation time of task t_i on virtual machine VM_k, TT_{i,j} represents the amount of data transmitted from predecessor node t_i to successor node t_j, B̄ denotes the average bandwidth between virtual machines, B(VM_k, VM_l) represents the bandwidth for transmitting data from virtual machine VM_k to virtual machine VM_l, and task t_j is processed on virtual machine VM_l; when l = k, B(VM_k, VM_l) = 0 and T_tran(t_i, t_j) = 0.
Preferably, the method for layering the DAG in step B is:

t_i.level = 0, if t_i = t_entry
t_i.level = max_{t_j ∈ pre(t_i)} ( t_j.level ) + 1, otherwise

where t_i.level represents the layer at which task t_i is located, pre(t_i) represents the set of predecessor nodes of task t_i, and t_entry denotes an entry task without predecessor nodes.
Preferably, the method for allocating tasks in step C comprises the following steps:
Step i: let l = 0;
Step ii: if there is no task at the l-th layer, go to step x; otherwise add the tasks of the l-th layer to the set task(l) = {t_i | t_i.level = l};
Step iii: acquire the critical task t_x of the l-th layer, t_x ∈ (task(l) ∩ CP), compute its completion time on each virtual machine VM_k, and sort the virtual machines by completion time from earliest to latest to obtain the preference queue taskPreference(x) of task t_x;
Step iv: assign the critical task t_x of the l-th layer to the virtual machine VM_k with the earliest completion time, update the start processing time ST, execution time ET and completion time FT of task t_x, and delete task t_x from the set task(l);
Step v: if task(l) = ∅, let l = l + 1 and return to step ii; otherwise let j = 0 and go to step vi;
Step vi: acquire the first task task.get(0) in the set task(l) and generate its preference queue taskPreference(0);
Step vii: consider the j-th virtual machine VM_u in the preference queue taskPreference(0); if u.waiting.size < threshold(u, l), assign task task.get(0) to virtual machine VM_u and delete task.get(0) from the set task(l); if u.waiting.size = threshold(u, l), perform step viii, where u.waiting.size is the number of tasks waiting to be executed on virtual machine VM_u;
Step viii: for each task in the set task(l), sort by its completion time on virtual machine VM_u from earliest to latest to obtain the preference queue VM_uPreference(l) of VM_u for the layer-l tasks; obtain the position p of task.get(0) in VM_uPreference(l); find the task b with the largest preference value on virtual machine VM_u that is located at the l-th layer, and obtain its position q in the preference queue VM_uPreference(l), where all tasks on the virtual machine are numbered with preference values from 0 to u.waiting.size - 1 in processing order;
Step ix: if p < q, replace task b with task task.get(0), update ST, ET and FT of task.get(0), delete task.get(0) from the set task(l), add task b back into the set task(l), and return to step v; otherwise let j = j + 1 and return to step vii;
Step x: output the scheduling scheme S.
Preferably, the start processing time ST and the completion time FT of task t_i on virtual machine VM_k are, respectively,

ST(t_i, VM_k) = max{ avail(VM_k), max_{t_p ∈ pre(t_i)} ( FT(t_p) + T_tran(t_p, t_i) ) }
FT(t_i, VM_k) = ST(t_i, VM_k) + ET(t_i, VM_k)

where avail(VM_k) is the time at which virtual machine VM_k becomes available.
Preferably, the threshold of virtual machine VM_k is calculated as

threshold(k, l) = ⌈ ( p_k / Σ_{j=0}^{m-1} p_j ) × Σ_{v=0}^{l} n_v ⌉

where n_v denotes the number of tasks at layer v.
Preferably, the method for optimizing the scheduling scheme in step D comprises:
Step 1: let k = 0;
Step 2: if k ≤ m - 1, acquire the first task t in the wait queue VM_k.waiting of virtual machine VM_k; otherwise jump to step 7;
Step 3: if the start time ST(t, VM_k) of task t is 0, let k = k + 1 and return to step 2; otherwise let p = 0, minST = ∞, minPredecessor = minST, and go to step 4;
Step 4: if p ≤ |pre(t)| - 1, copy the p-th predecessor of task t onto virtual machine VM_k, calculate the resulting start time ST'(t, VM_k) of task t, and go to step 5; otherwise go to step 6;
Step 5: if ST'(t, VM_k) < minST, let minST = ST'(t, VM_k), minPredecessor = p and p = p + 1, and return to step 4; otherwise let p = p + 1 and return to step 4;
Step 6: if minST < ST(t, VM_k), copy task minPredecessor onto virtual machine VM_k, let k = k + 1 and return to step 2; otherwise let k = k + 1 and return to step 2;
Step 7: output the optimized scheduling scheme S'.
The invention also provides a workflow scheduling system based on the stable matching game theory, which comprises the following components:
An input module: input the DAG graph of the workflow, denoted DAG = (T, E), where T = {t_0, t_1, ..., t_{n-1}} represents the set of n tasks in the workflow and E represents the set of dependencies among the n tasks; if e(t_i, t_j) ∈ E, task t_j can be executed only after task t_i has completed and passed its data to t_j; task t_j is a successor node of task t_i, and task t_i is a predecessor node of task t_j;
the virtual machine pool V = {VM_0, VM_1, ..., VM_{m-1}}, representing the set of m virtual machines;
and the CCR value;
A critical path extraction module: calculate the rank value of each task and add the task with the largest rank value in each layer to the CP;
A scheduling module: allocate tasks to the virtual machines based on the stable matching game theory to obtain a scheduling scheme;
An optimization module: optimize the scheduling scheme by traversing all tasks and copying the predecessor node that delays the start time of the current task onto the virtual machine where the current task is located.
The invention also provides an electronic processing device, which comprises at least one processor and a storage device for storing at least one execution program, wherein when the at least one execution program is executed by the at least one processor, the at least one processor realizes the workflow scheduling method.
The invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, is capable of implementing the workflow scheduling method.
The workflow scheduling method based on the stable matching game theory has the following advantages: the two local optimization strategies based on the critical path and task duplication effectively reduce the maximum completion time of the workflow, and the task fairness problem is taken into account, improving customer satisfaction.
Drawings
FIG. 1 is a DAG diagram for a CyberShake workflow provided by an embodiment of the present invention;
FIG. 2 is a DAG diagram of a workflow used in a comparative experiment provided by an embodiment of the present invention;
FIG. 3 is a comparison of SLR under different CCR values for small-scale workflows according to an embodiment of the present invention;
FIG. 4 is a comparison of SLR under different CCR values for medium-scale workflows according to an embodiment of the present invention;
FIG. 5 is a comparison of SLR under different CCR values for large-scale workflows according to an embodiment of the present invention;
FIG. 6 is a comparison of AVU under different CCR values for small-scale workflows according to an embodiment of the present invention;
FIG. 7 is a comparison of AVU under different CCR values for medium-scale workflows according to an embodiment of the present invention;
FIG. 8 is a comparison of AVU under different CCR values for large-scale workflows according to an embodiment of the present invention;
FIG. 9 is a comparative illustration of SLRs in different VM numbers and large-scale workflows provided by an embodiment of the invention;
FIG. 10 is a schematic diagram comparing AVUs provided by embodiments of the present invention in different VM numbers and large-scale workflows;
fig. 11 is a diagram illustrating comparison of VF provided by an embodiment of the present invention in different large-scale workflows.
Detailed Description
To make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail and completely with reference to the accompanying drawings, and it is to be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
First, we introduce the workflow model. A workflow is usually represented by a Directed Acyclic Graph (DAG), written as the tuple DAG = (T, E), where T = {t_0, t_1, ..., t_{n-1}} is the set of n tasks in the workflow and E is the set of dependencies among the n tasks. If e(t_i, t_j) ∈ E, there is a dependency between task t_i and task t_j: task t_j can be executed only after task t_i has completed and passed its data to t_j. Task t_j is called a successor node of task t_i, and task t_i a predecessor node of task t_j. The predecessor and successor node sets of task t_i are denoted pre(t_i) and succ(t_i), respectively. A task without predecessor nodes is called an entry task t_entry, and a task without successor nodes an exit task t_exit. There may be multiple entry or exit tasks in a DAG. The amount of data transferred between tasks is represented by the TT matrix:
TT = [ TT_{i,j} ]_{n×n}
where TT_{i,j} represents the amount of data transferred from task t_i to task t_j. When there is no dependency between the tasks, TT_{i,j} = 0.
Suppose the cloud model has a Virtual Machine (VM) pool consisting of m VMs, denoted by the set V = {VM_0, VM_1, ..., VM_{m-1}}. The processing capacities of the virtual machines are different and independent of each other. The bandwidth between virtual machines is represented by a matrix B, which is not necessarily symmetric; that is, B(VM_k, VM_l) is not necessarily equal to B(VM_l, VM_k), and B(VM_k, VM_l) = 0 when k = l. The time for task t_i to transmit data to task t_j is calculated from the amount of data transferred and the bandwidth between the two virtual machines:
T_tran(t_i, t_j) = TT_{i,j} / B(VM_k, VM_l)
where VM_k and VM_l are the virtual machines on which tasks t_i and t_j are located, respectively. When the two tasks are executed on the same virtual machine, the data transmission time between them is negligible, i.e. T_tran(t_i, t_j) = 0.
The start time ST(t_i, VM_k) of task t_i on VM_k can be calculated by

ST(t_i, VM_k) = max{ avail(VM_k), max_{t_p ∈ pre(t_i)} ( FT(t_p) + T_tran(t_p, t_i) ) }

where avail(VM_k) is the time at which VM_k becomes available. The execution time ET(t_i, VM_k) of task t_i on VM_k can be calculated by

ET(t_i, VM_k) = s_i / p_k

where s_i represents the size of task t_i and p_k represents the processing capacity of VM_k. The completion time FT(t_i, VM_k) of task t_i on VM_k can be calculated by

FT(t_i, VM_k) = ST(t_i, VM_k) + ET(t_i, VM_k)
The optimization objective of this embodiment is as follows: the completion time makespan of the workflow is the maximum completion time over all exit tasks, i.e. the final time at which all virtual machines have completed all of their tasks, and is calculated as

makespan = max{ MS(VM_k) }, 0 ≤ k < m

where MS(VM_k) represents the time at which VM_k completes all of its tasks.
The quality of the solutions obtained by the algorithms is evaluated with three indices: the Scheduling Length Ratio (SLR), the average virtual machine resource utilization (AVU) and the fairness variance (VF), calculated as follows.
1) SLR. To avoid parameter differences causing excessive differences in makespan, the makespan is normalized to a lower bound. This embodiment therefore normalizes the makespan obtained by an algorithm as

SLR = makespan / |CP|, |CP| = Σ_{t_i ∈ CP} min_{VM_k ∈ V} ET(t_i, VM_k)

where |CP| represents the length of the critical path. The smaller the SLR, the shorter the completion time of the workflow.
2) AVU. This index evaluates the average resource utilization of all virtual machines, i.e. the ratio of the busy time of each virtual machine to the makespan:

AVU = (1/m) Σ_{k=0}^{m-1} ( Σ_{t_i ∈ VM_k.waiting} ET(t_i, VM_k) ) / makespan

where VM_k.waiting is the queue of tasks executed on VM_k.
3) VF. This index evaluates the fairness gap between tasks. The satisfaction value S_i of each task is the ratio of its expected execution time EET to its actual execution time AET,

S_i = EET_i / AET_i
VF = (1/n) Σ_{i=0}^{n-1} ( S_i - M )²

where EET is the execution time of a task on the virtual machine that executes it fastest and M is the average satisfaction over all tasks. The larger S_i, the higher the satisfaction of the task; the smaller the VF, the better the algorithm balances fairness between tasks.
Based on the above model, this embodiment provides a workflow scheduling method based on the stable matching game theory, which comprises the following steps:
Step A: input the DAG graph of the workflow, denoted DAG = (T, E), where T = {t_0, t_1, ..., t_{n-1}} represents the set of n tasks in the workflow and E represents the set of dependencies among the n tasks; if e(t_i, t_j) ∈ E, task t_j can be executed only after task t_i has completed and passed its data to t_j; task t_j is a successor node of task t_i, and task t_i is a predecessor node of task t_j;
the virtual machine pool V = {VM_0, VM_1, ..., VM_{m-1}}, representing the set of m virtual machines;
and a CCR value, where CCR is an empirical value representing the ratio of the average computation time to the average communication time of the workflow; the value of CCR determines whether the workflow is computation-intensive or data-intensive, and it may be determined from the experience of an operator or from tests.
Step B: calculate the rank value of each task and add the task with the largest rank value in each layer to the CP. The rank value is calculated as:

rank(t_i) = w̄_i, if t_i = t_exit
rank(t_i) = w̄_i + max_{t_j ∈ succ(t_i)} ( c̄_{i,j} + rank(t_j) ), otherwise

where succ(t_i) is the set of successor nodes of task t_i, t_exit is an exit task without successor nodes, w̄_i represents the average computation time of task t_i, and c̄_{i,j} represents the average communication time between task t_i and task t_j:

w̄_i = (1/m) Σ_{k=0}^{m-1} ET(t_i, VM_k)
ET(t_i, VM_k) = s_i / p_k
c̄_{i,j} = TT_{i,j} / B̄
T_tran(t_i, t_j) = TT_{i,j} / B(VM_k, VM_l)

where s_i represents the size of task t_i, p_k represents the processing capacity of virtual machine VM_k, ET(t_i, VM_k) represents the computation time of task t_i on virtual machine VM_k, TT_{i,j} represents the amount of data transmitted from predecessor node t_i to successor node t_j, B̄ denotes the average bandwidth between virtual machines, B(VM_k, VM_l) represents the bandwidth for transmitting data from virtual machine VM_k to virtual machine VM_l, and task t_j is processed on virtual machine VM_l; when l = k, B(VM_k, VM_l) = 0 and T_tran(t_i, t_j) = 0.
The DAG layering method is:

t_i.level = 0, if t_i = t_entry
t_i.level = max_{t_j ∈ pre(t_i)} ( t_j.level ) + 1, otherwise

where t_i.level represents the layer at which task t_i is located, pre(t_i) represents the set of predecessor nodes of task t_i, and t_entry denotes an entry task without predecessor nodes.
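Step B can be illustrated with the following sketch, which computes the upward rank values, layers a small diamond-shaped DAG and extracts the CP (the task with the largest rank value in each layer). The concrete sizes, bandwidths and the assumed average bandwidth are invented for the example; it is a reading aid, not the patented implementation.

```python
# Illustrative sketch of step B: rank computation, DAG layering and CP extraction.
s    = {0: 10.0, 1: 20.0, 2: 8.0, 3: 5.0}                  # task sizes
p    = [5.0, 8.0, 7.0]                                      # VM processing capacities
TT   = {(0, 1): 4.0, (0, 2): 3.0, (1, 3): 2.0, (2, 3): 6.0} # data amounts on edges
succ = {0: [1, 2], 1: [3], 2: [3], 3: []}
pre  = {0: [], 1: [0], 2: [0], 3: [1, 2]}
avg_B = 6.0                                                 # assumed average bandwidth

w_bar = {i: sum(s[i] / pk for pk in p) / len(p) for i in s} # average computation time
c_bar = {e: d / avg_B for e, d in TT.items()}               # average communication time

rank = {}
def compute_rank(i):
    """rank(t_i) = w_bar_i + max over successors of (c_bar_ij + rank(t_j))."""
    if i not in rank:
        tail = max((c_bar[(i, j)] + compute_rank(j) for j in succ[i]), default=0.0)
        rank[i] = w_bar[i] + tail
    return rank[i]
for t in s:
    compute_rank(t)

# Layering: entry tasks are at layer 0, every other task one layer below its
# deepest predecessor.  Decreasing rank order is a valid topological order.
level = {}
for t in sorted(s, key=rank.get, reverse=True):
    level[t] = 0 if not pre[t] else max(level[q] for q in pre[t]) + 1

# CP: the task with the largest rank value in each layer.
CP = {max((t for t in s if level[t] == l), key=rank.get) for l in set(level.values())}
print("rank =", rank, "\nlevel =", level, "\nCP =", CP)
```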
And C: allocating tasks to the virtual machines based on a stable matching game theory to obtain a scheduling scheme; the method specifically comprises the following steps:
Step i: let l = 0;
Step ii: if there is no task at the l-th layer, go to step x; otherwise add the tasks of the l-th layer to the set task(l) = {t_i | t_i.level = l};
Step iii: acquire the critical task t_x of the l-th layer, t_x ∈ (task(l) ∩ CP), compute its completion time on each virtual machine VM_k, and sort the virtual machines by completion time from earliest to latest to obtain the preference queue taskPreference(x) of task t_x;
here, the completion time FT of task t_i on virtual machine VM_k is

FT(t_i, VM_k) = ST(t_i, VM_k) + ET(t_i, VM_k)
ST(t_i, VM_k) = max{ avail(VM_k), max_{t_p ∈ pre(t_i)} ( FT(t_p) + T_tran(t_p, t_i) ) }

where ST(t_i, VM_k) is the time at which task t_i starts processing on virtual machine VM_k and avail(VM_k) is the time at which VM_k becomes available;
Step iv: assign the critical task t_x of the l-th layer to the virtual machine VM_k with the earliest completion time, update the start processing time ST, execution time ET and completion time FT of task t_x, and delete task t_x from the set task(l);
Step v: if task(l) = ∅, let l = l + 1 and return to step ii; otherwise let j = 0 and go to step vi;
Step vi: acquire the first task task.get(0) in the set task(l) and generate its preference queue taskPreference(0);
Step vii: consider the j-th virtual machine VM_u in the preference queue taskPreference(0); if u.waiting.size < threshold(u, l), assign task task.get(0) to virtual machine VM_u and delete task.get(0) from the set task(l); if u.waiting.size = threshold(u, l), perform step viii, where u.waiting.size is the number of tasks waiting to be executed on virtual machine VM_u;
the threshold of virtual machine VM_k is calculated as

threshold(k, l) = ⌈ ( p_k / Σ_{j=0}^{m-1} p_j ) × Σ_{v=0}^{l} n_v ⌉

where n_v denotes the number of tasks at layer v; the threshold balances the load of the virtual machines and prevents too many tasks from being allocated to the same virtual machine: a threshold is set for each virtual machine according to the number of tasks up to the current layer and the processing capacity of the virtual machine, so that a virtual machine with stronger processing capacity executes more tasks;
Step viii: for each task in the set task(l), sort by its completion time on virtual machine VM_u from earliest to latest to obtain the preference queue VM_uPreference(l) of VM_u for the layer-l tasks; obtain the position p of task.get(0) in VM_uPreference(l); find the task b with the largest preference value on virtual machine VM_u that is located at the l-th layer, i.e. b.level = l, and obtain its position q in the preference queue VM_uPreference(l), where all tasks on the virtual machine are numbered with preference values from 0 to u.waiting.size - 1 in processing order;
Step ix: if p < q, replace task b with task task.get(0), update ST, ET and FT of task.get(0), delete task.get(0) from the set task(l), add task b back into the set task(l), and return to step v; otherwise let j = j + 1 and return to step vii;
Step x: output the scheduling scheme S.
Step D: optimizing a scheduling scheme, traversing all tasks, and copying a precursor node which leads the start time of the current task to a virtual machine where the current task is located;
in the scheduling of the workflow, it is possible that the current task is not executed on the same virtual machine as its predecessor node, and the current task needs to be executed, and the predecessor node must wait for the data to be transmitted to its virtual machine. Therefore, the appropriate predecessor node is selected to be copied to the virtual machine of the task, so that the data transmission time between tasks can be reduced, and the start time of the current task is advanced.
Based on the above, the optimization of this embodiment follows the principles below:
1) Task t_i can only be copied to a virtual machine on which one of its successor nodes is located.
2) Copying task t_i to a virtual machine must not increase the completion time of the other tasks on that virtual machine.
The optimization specifically comprises the following steps:
Step 1: let k = 0;
Step 2: if k ≤ m - 1, acquire the first task t in the wait queue VM_k.waiting of virtual machine VM_k; otherwise jump to step 7;
Step 3: if the start time ST(t, VM_k) of task t is 0, let k = k + 1 and return to step 2; otherwise let p = 0, minST = ∞, minPredecessor = minST, and go to step 4;
Step 4: if p ≤ |pre(t)| - 1, copy the p-th predecessor of task t onto virtual machine VM_k, calculate the resulting start time ST'(t, VM_k) of task t, and go to step 5; otherwise go to step 6;
Step 5: if ST'(t, VM_k) < minST, let minST = ST'(t, VM_k), minPredecessor = p and p = p + 1, and return to step 4; otherwise let p = p + 1 and return to step 4;
Step 6: if minST < ST(t, VM_k), copy task minPredecessor onto virtual machine VM_k, let k = k + 1 and return to step 2; otherwise let k = k + 1 and return to step 2;
Step 7: output the optimized scheduling scheme S'.
The workflow scheduling method is described below using the CyberShake workflow shown in FIG. 1 as an example. The CyberShake workflow contains 30 tasks and 52 edges, and the task sizes and the calculated rank values are shown in Table 1:
table 1: task size and rank value of CyberShake workflow
The data transfer matrix TT between tasks is calculated according to:
TT(t_i, t_j) = s_i × CCR
In this embodiment, CCR = 0.4, the processing capacities of the 5 virtual machines are 5, 8, 7, 9 and 6, respectively, and the bandwidth matrix B between the virtual machines is shown in Table 2:

        VM_0  VM_1  VM_2  VM_3  VM_4
VM_0     0     7     8     9     6
VM_1     5     0     8     7     5
VM_2     7     6     0     8     4
VM_3     7     8     6     0     5
VM_4     9     7     6     4     0

Table 2: Bandwidth between the virtual machines for the CyberShake workflow example
As can be seen from Table 1, the critical path of the CyberShake workflow is t_2, t_5, t_6, t_0. The task allocation of the first and second layers is taken as an example.
VM      VM.preference   VM.threshold
VM_0    2,13            1
VM_1    2,13            1
VM_2    2,13            1
VM_3    2,13            1
VM_4    2,13            1

Table 3: Preference queues of the virtual machines for the first layer of the CyberShake workflow
For the first layer, the preference queue of each virtual machine is obtained from the completion times of the tasks on that virtual machine. Since the first-layer tasks have no predecessor nodes, the start time of all tasks is 0 and, considering only the task completion time, all virtual machines generate the same preference queue. The first critical path task to be assigned is preferentially allocated to the virtual machine with the highest processing speed, so its preference queue does not need to be calculated. After task t_2 has been assigned, the completion time of task t_13 on each virtual machine is calculated to obtain its preference queue, shown in Table 4; task t_13 is then assigned to virtual machine VM_1.
Unassigned tasks   Task   Task.preference   VM    VM.waiting   Task.ST   Task.FT
2,13               2      –                 2     2            0         5.11
13                 13     2,3,5,1,4         1     13           0         5.25

Table 4: Preference queues of the first-layer tasks of the CyberShake workflow
Because the second-layer tasks have predecessor nodes and the order of their completion times differs between virtual machines, the preference queues of the virtual machines are different. First, the critical path task t_5 is assigned to VM_3, which allows it to complete earliest, and the remaining tasks are then assigned one by one. When task t_22 is assigned, VM_3 is already full; by comparing task t_22 with the tasks already on VM_3, the task t_9 with the lower preference value is removed and reallocated.
VM      VM.preference                               VM.threshold
VM_0    22,16,14,26,18,20,11,24,7,5,3,9,28          3
VM_1    11,7,5,3,9,22,14,16,26,18,20,24,28          4
VM_2    11,22,7,16,14,5,3,26,18,20,9,24,28          3
VM_3    22,16,14,26,18,20,24,11,7,5,3,9,28          4
VM_4    22,16,14,11,26,7,18,5,20,3,24,9,28          3

Table 5: Preference queues of the virtual machines for the second layer of the CyberShake workflow
Table 6: Preference queues of the second-layer tasks of the CyberShake workflow
In this way, the scheduling result of the workflow scheduling method provided by this embodiment for the DAG is obtained, as shown in Table 7, where "+" and "-" represent the idle time of a virtual machine and the execution time of a task, respectively; duplicated tasks are shown in bold.
Table 7: Gantt chart of the CyberShake workflow
This embodiment also compares the proposed workflow scheduling method (SM-CPTD) with the existing TDA, GSS, NMMWS and Min-Min algorithms on the four benchmark workflow structures CyberShake, Epigenomics, LIGO and Montage shown in FIG. 2. TDA is an algorithm that minimizes the total completion time of a workflow based on task duplication and task grouping. GSS minimizes the completion time and maximizes the average virtual machine resource utilization based on the granularity of the tasks in the workflow. NMMWS calculates a dynamic threshold for each task through min-max normalization to guarantee the maximum completion time of each workflow and the utilization of cloud resources. Min-Min is a commonly used workflow scheduling algorithm, generally considered one of the most effective benchmark scheduling algorithms, and can directly reflect the performance of the different algorithms. The time complexities of these four algorithms are O(n³), O(n²m), O(n³ml) and O(n²m), respectively, while the time complexity of the proposed SM-CPTD algorithm is O(n²l), similar to Min-Min. In addition, to verify the two local optimization strategies proposed in the present application, the original stable matching algorithm is also used as a comparison algorithm and is denoted SM.
The parameters and their value ranges used in the present application are shown in Table 8. The task sizes, the processing speeds of the virtual machines and the bandwidths between the virtual machines are all generated randomly according to a uniform distribution. Each instance is run 10 times and the average is taken. All algorithms are implemented in Java on Eclipse, and the running environment is an Intel Core i7-9750H CPU @ 2.60 GHz with 8 GB RAM under the Microsoft Windows 10 Professional 64-bit operating system.
Table 8: parameters and values
In order to study the influence of CCR and of the number of virtual machines m on the algorithms, the number of virtual machines for small-, medium- and large-scale workflows is set to 5, 10 and 50, respectively, when the influence of CCR is considered. Similarly, when the effect of m is considered, the value of CCR is set to 1. To study the fairness of the different algorithms on different workflows, CCR is set to 1 and m to 50.
As is clear from fig. 3 to 5, as CCR increases from 0.4 to 2, the data transmission time between tasks increases, so that the SLR values of all algorithms increase, and therefore the workflow completion time increases. However, in the case of workflows of different sizes, the SM-CPTD can obtain the minimum SLR compared with the other four comparison algorithms, i.e., the SM-CPTD can obtain the minimum workflow completion time. In addition, GSS performs similarly to NMMWS and in most cases both algorithms and Min-Min outperform TDA. Furthermore, from the results of SM-CPTD and SM, two local optimization strategies can effectively reduce the completion time of a workflow, especially for data intensive workflows.
In TDA, in order to reduce the transmission time between tasks, a large number of tasks need to be duplicated, resulting in data redundancy and additional execution time for the tasks. Thus, it performs better in data-intensive workflows than in compute-intensive workflows. The performance of an NMMWS depends on the size of the task and the processing power of the virtual machine. However, NMMWS has poor performance in the case of small-scale workflows, since it is difficult to obtain good batch processing results.
As can be seen from FIGS. 6 to 8, as the CCR value increases, the tasks need to wait longer for data to be transmitted, so the AVU values of all algorithms decrease. Furthermore, unlike the SLR results, TDA performs best in terms of AVU, because the large number of duplicated tasks in TDA makes full use of the idle time of the virtual machines and increases their utilization. Although the AVU of SM-CPTD is inferior to that of TDA, it is superior to GSS, NMMWS and Min-Min. Compared with SM, SM-CPTD, like TDA, exploits the idle time of the virtual machines and improves their resource utilization.
As can be seen from fig. 9-10, for large workflows, SM-CPTD can achieve the smallest SLR regardless of the number of virtual machines. In addition, as the number of virtual machines increases, the parallel processing capacity of the virtual machines increases, and although the completion time of the workflow is reduced, the utilization rate of the virtual machines is also reduced. In addition, while SM-CPTD performed less well than TDA in terms of AVU, it outperformed GSS, NMMWS and Min-Min on all example groups.
In addition, it can be seen from FIG. 11 that the VF of SM is the smallest on the different large-scale workflows, which means that the stable matching algorithm balances the fairness of the tasks more effectively than the other four algorithms. However, the addition of the two local optimization strategies affects the fairness of some tasks, so the VF value of SM-CPTD is slightly larger than that of SM.
In summary, the performance of the algorithm SM-CPTD proposed in this embodiment is better than that of all the other algorithms, and the two local optimization strategies based on the critical path and task duplication effectively reduce the maximum completion time of the workflow. In addition, when CCR is 1, the average running time of SM-CPTD for the four benchmark workflow structures at small, medium and large scales is in the ranges 10–20 ms, 30–40 ms and 1200–1400 ms, respectively. SM-CPTD therefore has good allocation efficiency and can be applied to online workflow scheduling scenarios.
This embodiment also provides a workflow scheduling system based on the stable matching game theory, comprising:
An input module: input the DAG graph of the workflow, denoted DAG = (T, E), where T = {t_0, t_1, ..., t_{n-1}} represents the set of n tasks in the workflow and E represents the set of dependencies among the n tasks; if e(t_i, t_j) ∈ E, task t_j can be executed only after task t_i has completed and passed its data to t_j; task t_j is a successor node of task t_i, and task t_i is a predecessor node of task t_j;
the virtual machine pool V = {VM_0, VM_1, ..., VM_{m-1}}, representing the set of m virtual machines;
and the CCR value;
A critical path extraction module: calculate the rank value of each task and add the task with the largest rank value in each layer to the CP;
A scheduling module: allocate tasks to the virtual machines based on the stable matching game theory to obtain a scheduling scheme;
An optimization module: optimize the scheduling scheme by traversing all tasks and copying the predecessor node that delays the start time of the current task onto the virtual machine where the current task is located.
The present embodiment further provides an electronic processing device, including at least one processor and a storage device storing at least one executable program, where when the at least one executable program is executed by the at least one processor, the at least one processor implements the following method:
Step A: input the DAG graph of the workflow, denoted DAG = (T, E), where T = {t_0, t_1, ..., t_{n-1}} represents the set of n tasks in the workflow and E represents the set of dependencies among the n tasks; if e(t_i, t_j) ∈ E, task t_j can be executed only after task t_i has completed and passed its data to t_j; task t_j is a successor node of task t_i, and task t_i is a predecessor node of task t_j;
the virtual machine pool V = {VM_0, VM_1, ..., VM_{m-1}}, representing the set of m virtual machines;
and the CCR value;
Step B: calculate the rank value of each task and add the task with the largest rank value in each layer to the CP;
Step C: allocate tasks to the virtual machines based on the stable matching game theory to obtain a scheduling scheme;
Step D: optimize the scheduling scheme by traversing all tasks and copying the predecessor node that delays the start time of the current task onto the virtual machine where the current task is located.
The present embodiments also provide a computer-readable storage medium storing a computer program which, when executed by a processor, is capable of implementing the method of:
Step A: input the DAG graph of the workflow, denoted DAG = (T, E), where T = {t_0, t_1, ..., t_{n-1}} represents the set of n tasks in the workflow and E represents the set of dependencies among the n tasks; if e(t_i, t_j) ∈ E, task t_j can be executed only after task t_i has completed and passed its data to t_j; task t_j is a successor node of task t_i, and task t_i is a predecessor node of task t_j;
the virtual machine pool V = {VM_0, VM_1, ..., VM_{m-1}}, representing the set of m virtual machines;
and the CCR value;
Step B: calculate the rank value of each task and add the task with the largest rank value in each layer to the CP;
Step C: allocate tasks to the virtual machines based on the stable matching game theory to obtain a scheduling scheme;
Step D: optimize the scheduling scheme by traversing all tasks and copying the predecessor node that delays the start time of the current task onto the virtual machine where the current task is located.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (9)

1. A workflow scheduling method based on a stable matching game theory, characterized by comprising the following steps:
Step A: inputting the DAG graph of the workflow, denoted DAG = (T, E), where T = {t_0, t_1, ..., t_{n-1}} represents the set of n tasks in the workflow and E represents the set of dependencies among the n tasks; if e(t_i, t_j) ∈ E, task t_j can be executed only after task t_i has completed and passed its data to t_j; task t_j is a successor node of task t_i, and task t_i is a predecessor node of task t_j;
the virtual machine pool V = {VM_0, VM_1, ..., VM_{m-1}}, representing the set of m virtual machines;
and a CCR value, the CCR value being an empirical value representing the ratio of the average computation time to the average communication time of the workflow;
Step B: calculating the rank value of each task and adding the task with the largest rank value in each layer to the CP, wherein the task rank value in step B is calculated as:

rank(t_i) = w̄_i, if t_i = t_exit
rank(t_i) = w̄_i + max_{t_j ∈ succ(t_i)} ( c̄_{i,j} + rank(t_j) ), otherwise

wherein succ(t_i) is the set of successor nodes of task t_i, t_exit is an exit task without successor nodes, w̄_i represents the average computation time of task t_i, and c̄_{i,j} represents the average communication time between task t_i and task t_j:

w̄_i = (1/m) Σ_{k=0}^{m-1} ET(t_i, VM_k)
ET(t_i, VM_k) = s_i / p_k
c̄_{i,j} = TT_{i,j} / B̄
T_tran(t_i, t_j) = TT_{i,j} / B(VM_k, VM_l)

wherein s_i represents the size of task t_i, p_k represents the processing capacity of virtual machine VM_k, ET(t_i, VM_k) represents the computation time of task t_i on virtual machine VM_k, TT_{i,j} represents the amount of data transmitted from predecessor node t_i to successor node t_j, B̄ denotes the average bandwidth between virtual machines, B(VM_k, VM_l) represents the bandwidth for transmitting data from virtual machine VM_k to virtual machine VM_l, and task t_j is processed on virtual machine VM_l; when l = k, B(VM_k, VM_l) = 0 and T_tran(t_i, t_j) = 0;
Step C: allocating tasks to the virtual machines based on the stable matching game theory to obtain a scheduling scheme;
Step D: optimizing the scheduling scheme by traversing all tasks and copying the predecessor node that delays the start time of the current task onto the virtual machine where the current task is located.
2. The workflow scheduling method based on the stable matching game theory as claimed in claim 1, wherein: the DAG layering method in the step B comprises the following steps:
t_i.level = 0, if t_i = t_entry
t_i.level = max_{t_j ∈ pre(t_i)} ( t_j.level ) + 1, otherwise

wherein t_i.level represents the layer at which task t_i is located, pre(t_i) represents the set of predecessor nodes of task t_i, and t_entry denotes an entry task without predecessor nodes.
3. The workflow scheduling method based on the stable matching game theory as claimed in claim 2, wherein: the method for distributing tasks in the step C comprises the following steps:
step i: let l = 0;
step ii: if there is no task at the l-th layer, go to step x; otherwise add the tasks of the l-th layer to the set task(l) = {t_i | t_i.level = l};
step iii: acquire the critical task t_x of the l-th layer, t_x ∈ (task(l) ∩ CP), compute its completion time on each virtual machine VM_k, and sort the virtual machines by completion time from earliest to latest to obtain the preference queue taskPreference(x) of task t_x;
step iv: assign the critical task t_x of the l-th layer to the virtual machine VM_k with the earliest completion time, update the start processing time ST, execution time ET and completion time FT of task t_x, and delete task t_x from the set task(l);
step v: if task(l) = ∅, let l = l + 1 and return to step ii; otherwise let j = 0 and go to step vi;
step vi: acquire the first task task.get(0) in the set task(l) and generate its preference queue taskPreference(0);
step vii: consider the j-th virtual machine VM_u in the preference queue taskPreference(0); if u.waiting.size < threshold(u, l), assign task task.get(0) to virtual machine VM_u and delete task.get(0) from the set task(l); if u.waiting.size = threshold(u, l), perform step viii, wherein u.waiting.size is the number of tasks waiting to be executed on virtual machine VM_u;
step viii: for each task in the set task(l), sort by its completion time on virtual machine VM_u from earliest to latest to obtain the preference queue VM_uPreference(l) of VM_u for the layer-l tasks; obtain the position p of task.get(0) in VM_uPreference(l); find the task b with the largest preference value on virtual machine VM_u that is located at the l-th layer, and obtain its position q in the preference queue VM_uPreference(l), wherein all tasks on the virtual machine are numbered with preference values from 0 to u.waiting.size - 1 in processing order;
step ix: if p < q, replace task b with task task.get(0), update ST, ET and FT of task.get(0), delete task.get(0) from the set task(l), add task b back into the set task(l), and return to step v; otherwise let j = j + 1 and return to step vii;
step x: output the scheduling scheme S.
4. The workflow scheduling method based on the stable matching game theory as claimed in claim 3, wherein the start processing time ST and the completion time FT of task t_i on virtual machine VM_k are, respectively,

ST(t_i, VM_k) = max{ avail(VM_k), max_{t_p ∈ pre(t_i)} ( FT(t_p) + T_tran(t_p, t_i) ) }
FT(t_i, VM_k) = ST(t_i, VM_k) + ET(t_i, VM_k)

wherein avail(VM_k) is the time at which virtual machine VM_k becomes available.
5. The workflow scheduling method based on the stable matching game theory as claimed in claim 4, wherein the threshold of virtual machine VM_k is calculated as

threshold(k, l) = ⌈ ( p_k / Σ_{j=0}^{m-1} p_j ) × Σ_{v=0}^{l} n_v ⌉

wherein n_v denotes the number of tasks at layer v.
6. The workflow scheduling method based on the stable matching game theory as claimed in claim 5, wherein: the method for optimizing the scheduling scheme in the step D comprises the following steps:
step 1: let k = 0;
step 2: if k ≤ m - 1, acquire the first task t in the wait queue VM_k.waiting of virtual machine VM_k; otherwise jump to step 7;
step 3: if the start time ST(t, VM_k) of task t is 0, let k = k + 1 and return to step 2; otherwise let p = 0, minST = ∞, minPredecessor = minST, and go to step 4;
step 4: if p ≤ |pre(t)| - 1, copy the p-th predecessor of task t onto virtual machine VM_k, calculate the resulting start time ST'(t, VM_k) of task t, and go to step 5; otherwise go to step 6;
step 5: if ST'(t, VM_k) < minST, let minST = ST'(t, VM_k), minPredecessor = p and p = p + 1, and return to step 4; otherwise let p = p + 1 and return to step 4;
step 6: if minST < ST(t, VM_k), copy task minPredecessor onto virtual machine VM_k, let k = k + 1 and return to step 2; otherwise let k = k + 1 and return to step 2;
step 7: output the optimized scheduling scheme S'.
7. A workflow scheduling system based on a stable matching game theory, characterized by comprising:
an input module: inputting the DAG graph of the workflow, denoted DAG = (T, E), where T = {t_0, t_1, ..., t_{n-1}} represents the set of n tasks in the workflow and E represents the set of dependencies among the n tasks; if e(t_i, t_j) ∈ E, task t_j can be executed only after task t_i has completed and passed its data to t_j; task t_j is a successor node of task t_i, and task t_i is a predecessor node of task t_j;
the virtual machine pool V = {VM_0, VM_1, ..., VM_{m-1}}, representing the set of m virtual machines;
and a CCR value, the CCR value being an empirical value representing the ratio of the average computation time to the average communication time of the workflow;
a critical path extraction module: calculating the rank value of each task and adding the task with the largest rank value in each layer to the CP, wherein the task rank value is calculated as:

rank(t_i) = w̄_i, if t_i = t_exit
rank(t_i) = w̄_i + max_{t_j ∈ succ(t_i)} ( c̄_{i,j} + rank(t_j) ), otherwise

wherein succ(t_i) is the set of successor nodes of task t_i, t_exit is an exit task without successor nodes, w̄_i represents the average computation time of task t_i, and c̄_{i,j} represents the average communication time between task t_i and task t_j:

w̄_i = (1/m) Σ_{k=0}^{m-1} ET(t_i, VM_k)
ET(t_i, VM_k) = s_i / p_k
c̄_{i,j} = TT_{i,j} / B̄
T_tran(t_i, t_j) = TT_{i,j} / B(VM_k, VM_l)

wherein s_i represents the size of task t_i, p_k represents the processing capacity of virtual machine VM_k, ET(t_i, VM_k) represents the computation time of task t_i on virtual machine VM_k, TT_{i,j} represents the amount of data transmitted from predecessor node t_i to successor node t_j, B̄ denotes the average bandwidth between virtual machines, B(VM_k, VM_l) represents the bandwidth for transmitting data from virtual machine VM_k to virtual machine VM_l, and task t_j is processed on virtual machine VM_l; when l = k, B(VM_k, VM_l) = 0 and T_tran(t_i, t_j) = 0;
a scheduling module: allocating tasks to the virtual machines based on the stable matching game theory to obtain a scheduling scheme;
an optimization module: optimizing the scheduling scheme by traversing all tasks and copying the predecessor node that delays the start time of the current task onto the virtual machine where the current task is located.
8. An electronic processing device, characterized by: comprising at least one processor and a storage device having at least one executable program stored thereon, the at least one processor implementing the method according to any one of claims 1-6 when the at least one executable program is executed by the at least one processor.
9. A computer-readable storage medium storing a computer program, characterized in that: the computer program is capable of implementing the method of any one of claims 1-6 when executed by a processor.
CN202011329163.6A 2020-11-24 2020-11-24 Workflow scheduling method based on stable matching game theory Active CN112306642B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011329163.6A CN112306642B (en) 2020-11-24 2020-11-24 Workflow scheduling method based on stable matching game theory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011329163.6A CN112306642B (en) 2020-11-24 2020-11-24 Workflow scheduling method based on stable matching game theory

Publications (2)

Publication Number Publication Date
CN112306642A CN112306642A (en) 2021-02-02
CN112306642B true CN112306642B (en) 2022-10-14

Family

ID=74335639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011329163.6A Active CN112306642B (en) 2020-11-24 2020-11-24 Workflow scheduling method based on stable matching game theory

Country Status (1)

Country Link
CN (1) CN112306642B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114385337B (en) * 2022-01-10 2023-10-20 杭州电子科技大学 Task grouping scheduling method for distributed workflow system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678000A (en) * 2013-09-11 2014-03-26 北京工业大学 Computational grid balance task scheduling method based on reliability and cooperative game
CN107193658A (en) * 2017-05-25 2017-09-22 重庆工程学院 Cloud computing resource scheduling method based on game theory
CN107301500A (en) * 2017-06-02 2017-10-27 北京工业大学 A kind of workflow schedule method looked forward to the prospect based on critical path task
CN108108225A (en) * 2017-12-14 2018-06-01 长春工程学院 A kind of method for scheduling task towards cloud computing platform
CN110609736A (en) * 2019-07-30 2019-12-24 中国人民解放军国防科技大学 Deadline constraint scientific workflow scheduling method in cloud environment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180121311A1 (en) * 2016-10-28 2018-05-03 Linkedin Corporation Identifying request-level critical paths in multi-phase parallel tasks
US20190347603A1 (en) * 2018-05-14 2019-11-14 Msd International Gmbh Optimizing turnaround based on combined critical paths

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678000A (en) * 2013-09-11 2014-03-26 北京工业大学 Computational grid balance task scheduling method based on reliability and cooperative game
CN107193658A (en) * 2017-05-25 2017-09-22 重庆工程学院 Cloud computing resource scheduling method based on game theory
CN107301500A (en) * 2017-06-02 2017-10-27 北京工业大学 A kind of workflow schedule method looked forward to the prospect based on critical path task
CN108108225A (en) * 2017-12-14 2018-06-01 长春工程学院 A kind of method for scheduling task towards cloud computing platform
CN110609736A (en) * 2019-07-30 2019-12-24 中国人民解放军国防科技大学 Deadline constraint scientific workflow scheduling method in cloud environment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A multi-stage dynamic game-theoretic approach for multi-workflow scheduling on heterogeneous virtual machines from multiple infrastructure-as-a-service clouds; Yuandou Wang, Jiajia Jiang, Yunni Xia, Quanwang Wu, Xin Luo; Springer; 2018-12-31; full text *
Task scheduling algorithm based on path priority in a cloud computing environment; Zhu Jiayu et al.; Computer Engineering and Design; 2013-10-16; Vol. 34, No. 10; full text *

Also Published As

Publication number Publication date
CN112306642A (en) 2021-02-02

Similar Documents

Publication Publication Date Title
Karthick et al. An efficient multi queue job scheduling for cloud computing
Awad et al. Enhanced particle swarm optimization for task scheduling in cloud computing environments
Selvarani et al. Improved cost-based algorithm for task scheduling in cloud computing
Chunlin et al. Hybrid cloud adaptive scheduling strategy for heterogeneous workloads
WO2019179250A1 (en) Scheduling method, scheduler, storage medium, and system
CN114610474B (en) Multi-strategy job scheduling method and system under heterogeneous supercomputing environment
Tantalaki et al. Pipeline-based linear scheduling of big data streams in the cloud
Ashouraei et al. A new SLA-aware load balancing method in the cloud using an improved parallel task scheduling algorithm
Thaman et al. Green cloud environment by using robust planning algorithm
CN109815009B (en) Resource scheduling and optimizing method under CSP
CN112306642B (en) Workflow scheduling method based on stable matching game theory
Singh et al. A comparative study of various scheduling algorithms in cloud computing
Dubey et al. QoS driven task scheduling in cloud computing
Maurya Resource and task clustering based scheduling algorithm for workflow applications in cloud computing environment
Hicham et al. Deadline and energy aware task scheduling in cloud computing
Edavalath et al. MARCR: Method of allocating resources based on cost of the resources in a heterogeneous cloud environment
Khanli et al. Grid_JQA: a QoS guided scheduling algorithm for grid computing
Panwar et al. Analysis of various task scheduling algorithms in cloud environment
Wang et al. Cost-effective scheduling precedence constrained tasks in cloud computing
Zhang et al. Multi-user multi-provider resource allocation in cloud computing
Rahman et al. Group based resource management and pricing model in cloud computing
Arif A Hybrid MinMin & Round Robin Approach for task scheduling in cloud computing
Rajeshwari et al. Efficient task scheduling and fair load distribution among federated clouds
CN113722076B (en) Real-time workflow scheduling method based on QoS and energy consumption collaborative optimization
Silpa et al. A comparative analysis of scheduling policies in cloud computing environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant