CN113176933A - Dynamic cloud network interconnection method for massive workflow tasks - Google Patents
Dynamic cloud network interconnection method for massive workflow tasks Download PDFInfo
- Publication number
- CN113176933A (application CN202110375737.1A)
- Authority
- CN
- China
- Prior art keywords
- node
- scheduling
- executable
- tasks
- host
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer And Data Communications (AREA)
Abstract
The invention discloses a dynamic cloud network interconnection method for massive workflow tasks, comprising the following steps. S1: constructing an executable node set S, maintained by a scheduling center. S2: ranking all nodes in S by importance using a reinforcement learning algorithm, and selecting the highest-ranked node as the node to be scheduled at the next moment. S3: traversing all hosts in the current network that satisfy the resource constraints, selecting, according to the evaluation function, the host on which the current node is to execute, sending a notification to the scheduling center as soon as the node finishes executing on the host, and updating the executable node set S. S4: repeating S2-S3 until all nodes of all DAG tasks have executed, obtaining the total time spent by the parallel distributed job scheduling. The invention achieves better scheduling performance with lower computing resource consumption, and solves the scheduling uncertainty problem in high-concurrency scenarios.
Description
Technical Field
The invention relates to the technical field of parallel distributed systems, in particular to a dynamic cloud network interconnection method for massive workflow tasks.
Background
A distributed system is a logically centralized, physically distributed system in which multiple independent computers are connected through a communication network. Methods for scheduling DAG tasks of various types on a parallel distributed system mainly include: (1) Markov decision process methods, (2) heuristic methods, and (3) random search methods. The Markov decision process method divides the whole DAG into several task intervals that are scheduled in parallel on the distributed system, obtaining a task-resource mapping scheme while guaranteeing that the whole DAG completes within the period. The heuristic method computes a priority for each task in the DAG from the task's attributes and an evaluation function, and schedules tasks in priority order: the highest-priority task is scheduled first by traversing all available computers in the distributed system and assigning the task to the computer that can complete it earliest. The random search method borrows the biological notion of "survival of the fittest"; it includes evolutionary algorithms, ant colony algorithms, particle swarm algorithms and the like, and is commonly used to solve optimal-combination problems in parallel distributed job scheduling.
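The heuristic list-scheduling idea described above can be sketched as follows. This is an illustrative HEFT-style sketch under assumed inputs (task costs, host speeds, and the upward-rank priority are hypothetical), not the patent's own method.

```python
# Heuristic list scheduling: rank tasks by priority, then greedily assign
# each task to the host that can finish it earliest.
def schedule_heuristic(tasks, succ, cost, hosts, speed):
    prio = {}

    def rank(t):
        # upward rank: own cost plus the highest-ranked successor
        if t not in prio:
            prio[t] = cost[t] + max((rank(s) for s in succ.get(t, [])), default=0.0)
        return prio[t]

    order = sorted(tasks, key=rank, reverse=True)  # high priority first
    finish = {h: 0.0 for h in hosts}               # per-host ready time
    plan = {}
    for t in order:
        # pick the host with the earliest completion time for this task
        h = min(hosts, key=lambda h: finish[h] + cost[t] / speed[h])
        finish[h] += cost[t] / speed[h]
        plan[t] = h
    return plan
```

A faster host attracts more tasks until its queue makes a slower host competitive, which is exactly the "assign to the computer that can complete the task earliest" rule.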
For multi-DAG hybrid scheduling in a parallel distributed system, the scheduling order among multiple DAG tasks must also be considered. Existing multi-DAG scheduling methods mainly include: (1) the sequential method, in which all DAGs are scheduled in a fixed order, e.g., First Come First Served (FCFS); (2) the round-robin method, which selects one task from each of the DAGs in turn until every task in every DAG has executed; (3) the DAG merging method, which composes multiple DAGs into one new task by adding dummy entry and exit nodes and then completes it using a single-DAG scheduling method.
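The DAG merging idea in (3) can be sketched as below, assuming each DAG is given as an adjacency mapping {node: [children]}; the dummy ENTRY/EXIT node names and the per-DAG prefixes are illustrative.

```python
# Merge several DAGs into one by adding a dummy entry and a dummy exit node,
# so a single-DAG scheduler can process them all at once.
def merge_dags(dags):
    edges = {"ENTRY": [], "EXIT": []}
    for i, dag in enumerate(dags):
        nodes = set(dag) | {c for cs in dag.values() for c in cs}
        has_parent = {c for cs in dag.values() for c in cs}
        for n in nodes:
            name = f"d{i}:{n}"                       # prefix avoids name clashes
            edges.setdefault(name, [f"d{i}:{c}" for c in dag.get(n, [])])
            if n not in has_parent:                  # DAG entry node
                edges["ENTRY"].append(name)
            if not dag.get(n):                       # DAG exit node
                edges[name].append("EXIT")
    return edges
```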
For parallel distributed job scheduling methods, the prior art has the following problems: (1) when scheduling a DAG, the parallel distributed system is treated as an ideal environment and the delay of transmitting tasks between different computers is ignored; (2) when scheduling multiple DAGs, only the attributes of the individual DAG tasks are considered in deciding the execution order, ignoring the real-time state of the distributed system at a given moment; (3) when scheduling a single DAG, the scheduling performance of existing methods depends on the choice of an evaluation function or of certain thresholds, which is obtained only after much time and repeated work, at high cost.
In the prior art, Chinese patent publication No. CN106293893A (published January 4, 2017) discloses a job scheduling method, a job scheduling device, and a distributed system. The distributed system comprises at least a central node, a plurality of control nodes connected to the central node, and a plurality of computing nodes connected to each control node. The central node distributes the tasks of a job to the control nodes, and each control node schedules its task slices to run on the computing nodes connected to it. When at least one task slice of a first task finishes running, the first control node scheduling the first task notifies the second control node scheduling a second task to acquire the run data generated by that task slice; the second control node acquires the run data, distributes it to the task slices of the second task, and schedules at least one task slice of the second task to run and process the data. This scheme considers neither transmission delay nor scheduling order during scheduling.
Disclosure of Invention
The invention provides a dynamic cloud network interconnection method for massive workflow tasks, aiming to overcome the defect that existing parallel distributed job scheduling methods consider neither the delay of transmitting tasks between different computers nor the real-time state of the topological network.
The primary objective of the present invention is to solve the above technical problems, and the technical solution of the present invention is as follows:
The first aspect of the invention provides a dynamic cloud network interconnection method for massive workflow tasks, comprising the following steps:
S1: constructing an executable node set S, wherein the executable node set S is maintained by a scheduling center;
S2: ranking all nodes in S by importance using a reinforcement learning algorithm, and selecting the highest-ranked node as the node to be scheduled at the next moment;
S3: traversing all hosts in the current network that satisfy the resource constraints, selecting, according to the evaluation function, the host on which the current node is to execute, sending a notification to the scheduling center as soon as the node finishes executing on the host, and updating the executable node set S;
S4: repeating S2-S3 until all nodes of all DAG tasks have executed, obtaining the total time spent by the parallel distributed job scheduling.
Further, the set of tasks to be scheduled is denoted J = {j_1, j_2, ..., j_n}; the task set comprises n DAG tasks, and the ith DAG task j_i contains m_i nodes, where m_i represents the number of nodes in the ith DAG task and each node consumes a given amount of computing resources. The topological network is denoted G = (V, E, w(E)), where V = {D_1, D_2, ..., D_w} is the set of hosts and w(e) represents the bandwidth of the edge e between two connected hosts in the network.
Further, the specific process of step S2 is:
inputting the current topological network information and the executable node set S into the Agent of the sequencing node, obtaining the output probability that each node in S is selected, selecting the node with the highest probability, scheduling it in the network, and updating the topological network information after the scheduling action is issued.
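A minimal sketch of this selection step follows, with a hypothetical score function standing in for the trained Agent; the softmax turns raw scores into the per-node selection probabilities described above.

```python
import math

# Turn the Agent's raw scores into selection probabilities (softmax) and
# pick the most probable node from the executable set S.
def pick_node(S, score):
    scores = {n: score(n) for n in S}
    z = sum(math.exp(v) for v in scores.values())     # softmax normalization
    probs = {n: math.exp(v) / z for n, v in scores.items()}
    return max(probs, key=probs.get)                  # highest-probability node
```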
Further, the evaluation function in step S3 is built from a transmission term and a computation term:
trans(n_i, D_k) = max{ trans(a, D_k) : a ∈ parent(n_i) }, comp(n_i, D_k) = vol(n_i) / compute(D_k)
where trans(n_i, D_k) represents the transmission time taken for node n_i to be transmitted to host D_k, parent(n_i) represents the set of all parent nodes of n_i, and trans(a, D_k) denotes the time taken from the host on which node a resides to D_k; comp(n_i, D_k) represents the computation time of node n_i when executed on host D_k, vol(n_i) is the computing resource node n_i needs to consume, and compute(D_k) represents the computing frequency of host D_k.
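Under the symbol definitions above, the two terms can be computed as in this sketch. Treating their sum as the host score is an assumption, since the exact combined expression appears only as a figure in the published patent and is not reproduced here.

```python
# Evaluation of a (node, host) pair from the patent's symbol definitions:
#   transmission term: wait for the slowest parent's data to reach the host
#   computation term:  vol(n_i) / compute(D_k)
def evaluate(node, host, parents, trans, vol, compute):
    t_trans = max((trans[(p, host)] for p in parents[node]), default=0.0)
    t_comp = vol[node] / compute[host]
    return t_trans + t_comp  # assumed combination: sum of both terms
```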
Furthermore, dual priority queues are adopted to update the executable node set S and to select nodes from it respectively, wherein the first queue updates the executable node set S, the second queue selects nodes from the executable node set, and the first queue has higher priority than the second.
A second aspect of the invention provides a computer system comprising a memory, a processor, and a program for the dynamic cloud network interconnection method for massive workflow tasks, the program being stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the following steps:
S1: constructing an executable node set S, wherein the executable node set S is maintained by a scheduling center;
S2: ranking all nodes in S by importance using a reinforcement learning algorithm, and selecting the highest-ranked node as the node to be scheduled at the next moment;
S3: traversing all hosts in the current network that satisfy the resource constraints, selecting, according to the evaluation function, the host on which the current node is to execute, sending a notification to the scheduling center as soon as the node finishes executing on the host, and updating the executable node set S;
S4: repeating S2-S3 until all nodes of all DAG tasks have executed, obtaining the total time spent by the parallel distributed job scheduling.
Further, the set of tasks to be scheduled is denoted J = {j_1, j_2, ..., j_n}; the task set comprises n DAG tasks, and the ith DAG task j_i contains m_i nodes, where m_i represents the number of nodes in the ith DAG task and each node consumes a given amount of computing resources. The topological network is denoted G = (V, E, w(E)), where V = {D_1, D_2, ..., D_w} is the set of hosts and w(e) represents the bandwidth of the edge e between two connected hosts in the network.
Further, the specific process of step S2 is:
inputting the current topological network information and the executable node set S into the Agent of the sequencing node, obtaining the output probability that each node in S is selected, selecting the node with the highest probability, scheduling it in the network, and updating the topological network information after the scheduling action is issued.
Further, the evaluation function in step S3 is built from a transmission term and a computation term:
trans(n_i, D_k) = max{ trans(a, D_k) : a ∈ parent(n_i) }, comp(n_i, D_k) = vol(n_i) / compute(D_k)
where trans(n_i, D_k) represents the transmission time taken for node n_i to be transmitted to host D_k, parent(n_i) represents the set of all parent nodes of n_i, and trans(a, D_k) denotes the time taken from the host on which node a resides to D_k; comp(n_i, D_k) represents the computation time of node n_i when executed on host D_k, vol(n_i) is the computing resource node n_i needs to consume, and compute(D_k) represents the computing frequency of host D_k.
A third aspect of the invention provides a computer-readable storage medium having a computer program stored thereon; when executed by a processor, the computer program implements the steps of the dynamic cloud network interconnection method for massive workflow tasks described above.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention fully considers the real-time situation of the topological network to obtain the scheduling sequence through the reinforcement learning algorithm, the scheduling performance is better, the consumed computing resources are less, the transmission time delay among different computers in the DAG scheduling process is reduced through the double-message queue, and the scheduling uncertainty problem under the high-concurrency scene is solved.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a diagram of the system architecture in which the method of the present invention operates.
FIG. 3 is a workflow diagram of the sequencing-node Agent of the present invention.
FIG. 4 is a diagram of a dual priority queue according to the present invention.
Fig. 5 is a comparison graph of the scheduling results of the method of the present invention and the fair scheduling method.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
Example 1
As shown in fig. 1, a first aspect of the present invention provides a dynamic cloud network interconnection method for massive workflow tasks, including the following steps:
S1: constructing an executable node set S, wherein the executable node set S is maintained by a scheduling center;
it should be noted that, before scheduling, the executable node set S contains the entry nodes of all DAG tasks, as shown in fig. 2;
S2: ranking all nodes in S by importance using a reinforcement learning algorithm, and selecting the highest-ranked node as the node to be scheduled at the next moment;
S3: traversing all hosts in the current network that satisfy the resource constraints, selecting, according to the evaluation function, the host on which the current node is to execute, sending a notification to the scheduling center as soon as the node finishes executing on the host, and updating the executable node set S;
S4: repeating S2-S3 until all nodes of all DAG tasks have executed, obtaining the total time spent by the parallel distributed job scheduling.
Further, it should be noted that each node in the executable node set belongs to one DAG task, and all DAG tasks form the task set to be scheduled, denoted J = {j_1, j_2, ..., j_n}; the task set comprises n DAG tasks, and the ith DAG task j_i contains m_i nodes, where m_i represents the number of nodes in the ith DAG task and each node consumes a given amount of computing resources. The topological network is denoted G = (V, E, w(E)), where V = {D_1, D_2, ..., D_w} is the set of hosts and w(e) represents the bandwidth of the edge e between two connected hosts in the network.
Further, the specific process of step S2 is:
inputting the current topological network information and the executable node set S into the Agent of the sequencing node. It should be noted that the Agent is essentially a classification model: given the executable node set S and the current topological network information, the Agent outputs the probability that each node in S is selected; the node with the highest probability is selected and scheduled in the network. After the scheduling action is issued, the topological network information is updated; for example, some channel bandwidths in the network become occupied, some host resources are consumed, and so on. The process is shown in fig. 3.
Further, the evaluation function in step S3 is built from a transmission term and a computation term:
trans(n_i, D_k) = max{ trans(a, D_k) : a ∈ parent(n_i) }, comp(n_i, D_k) = vol(n_i) / compute(D_k)
where trans(n_i, D_k) represents the transmission time taken for node n_i to be transmitted to host D_k, parent(n_i) represents the set of all parent nodes of n_i, and trans(a, D_k) denotes the time taken from the host on which node a resides to D_k; comp(n_i, D_k) represents the computation time of node n_i when executed on host D_k, vol(n_i) is the computing resource node n_i needs to consume, and compute(D_k) represents the computing frequency of host D_k.
Further, in the present invention, after a node finishes executing on a host, a completion notification is sent to the scheduling center; upon receiving the notification, the scheduling center updates the executable node set, adding the newly generated executable nodes. In a high-concurrency scenario the scheduling center performs two operations simultaneously, updating the executable node set and selecting nodes from it for scheduling, which creates a concurrency-conflict problem. To solve this problem, as shown in fig. 4, dual priority queues are adopted: the first queue updates the executable node set S, the second queue selects nodes from the executable node set, and the first queue has higher priority than the second, which resolves the concurrency conflict.
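The dual-priority-queue mechanism can be sketched with a single heap whose priority level separates the two message kinds, so update messages always drain before scheduling picks; the message shapes and the FIFO tie-break are illustrative.

```python
import heapq

# Two logical queues in one heap: UPDATE messages (priority 0) are always
# processed before SCHEDULE messages (priority 1), serializing the two
# concurrent operations on the executable node set.
UPDATE, SCHEDULE = 0, 1  # lower value = higher priority

class SchedulingCenter:
    def __init__(self):
        self.heap = []
        self.seq = 0              # FIFO tie-break within a priority level
        self.executable = set()   # the executable node set S

    def post(self, kind, node):
        heapq.heappush(self.heap, (kind, self.seq, node))
        self.seq += 1

    def step(self):
        kind, _, node = heapq.heappop(self.heap)
        if kind == UPDATE:        # completion notice: grow the executable set
            self.executable.add(node)
        else:                     # scheduling pick: consume a node
            self.executable.discard(node)
        return kind, node
```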
The second aspect of the invention provides a computer system comprising a memory, a processor, and a program for the dynamic cloud network interconnection method for massive workflow tasks, the program being stored on the memory and executable on the processor; when executed by the processor, the program implements the following steps:
S1: constructing an executable node set S, wherein the executable node set S is maintained by a scheduling center;
S2: ranking all nodes in S by importance using a reinforcement learning algorithm, and selecting the highest-ranked node as the node to be scheduled at the next moment;
S3: traversing all hosts in the current network that satisfy the resource constraints, selecting, according to the evaluation function, the host on which the current node is to execute, sending a notification to the scheduling center as soon as the node finishes executing on the host, and updating the executable node set S;
S4: repeating S2-S3 until all nodes of all DAG tasks have executed, obtaining the total time spent by the parallel distributed job scheduling.
Further, the set of tasks to be scheduled is denoted J = {j_1, j_2, ..., j_n}; the task set comprises n DAG tasks, and the ith DAG task j_i contains m_i nodes, where m_i represents the number of nodes in the ith DAG task and each node consumes a given amount of computing resources. The topological network is denoted G = (V, E, w(E)), where V = {D_1, D_2, ..., D_w} is the set of hosts and w(e) represents the bandwidth of the edge e between two connected hosts in the network.
Further, the specific process of step S2 is:
inputting the current topological network information and the executable node set S into the Agent of the sequencing node, the Agent being essentially a classification model; given the executable node set S and the topological network information, the Agent outputs the probability that each node in S is selected; the node with the highest probability is selected and scheduled in the network, and the topological network information is updated after the scheduling action is issued.
Further, the evaluation function in step S3 is built from a transmission term and a computation term:
trans(n_i, D_k) = max{ trans(a, D_k) : a ∈ parent(n_i) }, comp(n_i, D_k) = vol(n_i) / compute(D_k)
where trans(n_i, D_k) represents the transmission time taken for node n_i to be transmitted to host D_k, parent(n_i) represents the set of all parent nodes of n_i, and trans(a, D_k) denotes the time taken from the host on which node a resides to D_k; comp(n_i, D_k) represents the computation time of node n_i when executed on host D_k, vol(n_i) is the computing resource node n_i needs to consume, and compute(D_k) represents the computing frequency of host D_k.
A third aspect of the invention provides a computer-readable storage medium having a computer program stored thereon; when executed by a processor, the computer program implements the steps of the dynamic cloud network interconnection method for massive workflow tasks described above.
Example 2
The system architecture on which the method runs is shown in fig. 2: a scheduling center is constructed between the parallel distributed system and the multi-DAG tasks, responsible for maintaining the scheduling information of the multi-DAG tasks and the current executable node set. In this embodiment, as a preferred example, multi-DAG task scheduling with sizes of 20, 40, 60, 80, and 100 is performed on a parallel distributed system composed of 5 computers; the scheduling results are shown in fig. 5.
It should be understood that the above embodiments are merely examples for clearly illustrating the invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments exhaustively here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the invention shall fall within the protection scope of its claims.
Claims (10)
1. A dynamic cloud network interconnection method for massive workflow tasks, characterized by comprising the following steps:
S1: constructing an executable node set S, wherein the executable node set S is maintained by a scheduling center;
S2: ranking all nodes in S by importance using a reinforcement learning algorithm, and selecting the highest-ranked node as the node to be scheduled at the next moment;
S3: traversing all hosts in the current network that satisfy the resource constraints, selecting, according to the evaluation function, the host on which the current node is to execute, sending a notification to the scheduling center as soon as the node finishes executing on the host, and updating the executable node set S;
S4: repeating S2-S3 until all nodes of all DAG tasks have executed, obtaining the total time spent by the parallel distributed job scheduling.
2. The dynamic cloud network interconnection method for massive workflow tasks according to claim 1, wherein the set of tasks to be scheduled is denoted J = {j_1, j_2, ..., j_n}; the task set comprises n DAG tasks, the ith DAG task j_i containing m_i nodes, where m_i represents the number of nodes in the ith DAG task and each node consumes a given amount of computing resources; the topological network is denoted G = (V, E, w(E)), where V = {D_1, D_2, ..., D_w} is the set of hosts and w(e) represents the bandwidth of the edge e between two connected hosts in the network.
3. The dynamic cloud network interconnection method for massive workflow tasks according to claim 1, wherein the specific process of step S2 is as follows:
inputting the current topological network information and the executable node set S into the Agent of the sequencing node, obtaining the output probability that each node in S is selected, selecting the node with the highest probability, scheduling it in the network, and updating the topological network information after the scheduling action is issued.
4. The dynamic cloud network interconnection method for massive workflow tasks according to claim 1, wherein the evaluation function in step S3 is built from a transmission term trans(n_i, D_k) = max{ trans(a, D_k) : a ∈ parent(n_i) } and a computation term comp(n_i, D_k) = vol(n_i) / compute(D_k), where trans(n_i, D_k) represents the transmission time taken for node n_i to be transmitted to host D_k, parent(n_i) represents the set of all parent nodes of n_i, and trans(a, D_k) denotes the time taken from the host on which node a resides to D_k; comp(n_i, D_k) represents the computation time of node n_i when executed on host D_k, vol(n_i) is the computing resource node n_i needs to consume, and compute(D_k) represents the computing frequency of host D_k.
5. The dynamic cloud network interconnection method for massive workflow tasks according to claim 1, wherein dual priority queues are used to update the executable node set S and to select nodes from it respectively, the first queue updating the executable node set S, the second queue selecting nodes from the executable node set, and the first queue having higher priority than the second.
6. A computer system comprising a memory, a processor, and a program for the dynamic cloud network interconnection method for massive workflow tasks, the program being stored on the memory and executable on the processor, characterized in that, when the program is executed by the processor, the following steps are implemented:
S1: constructing an executable node set S, wherein the executable node set S is maintained by a scheduling center;
S2: ranking all nodes in S by importance using a reinforcement learning algorithm, and selecting the highest-ranked node as the node to be scheduled at the next moment;
S3: traversing all hosts in the current network that satisfy the resource constraints, selecting, according to the evaluation function, the host on which the current node is to execute, sending a notification to the scheduling center as soon as the node finishes executing on the host, and updating the executable node set S;
S4: repeating S2-S3 until all nodes of all DAG tasks have executed, obtaining the total time spent by the parallel distributed job scheduling.
7. The computer system of claim 6, wherein the set of tasks to be scheduled is denoted J = {j_1, j_2, ..., j_n}; the task set comprises n DAG tasks, the ith DAG task j_i containing m_i nodes, where m_i represents the number of nodes in the ith DAG task and each node consumes a given amount of computing resources; the topological network is denoted G = (V, E, w(E)), where V = {D_1, D_2, ..., D_w} is the set of hosts and w(e) represents the bandwidth of the edge e between two connected hosts in the network.
8. The computer system as claimed in claim 6, wherein the specific process of step S2 is:
inputting the current topological network information and the executable node set S into the Agent of the sequencing node, obtaining the output probability that each node in S is selected, selecting the node with the highest probability, scheduling it in the network, and updating the topological network information after the scheduling action is issued.
9. The computer system as claimed in claim 6, wherein the evaluation function in step S3 is built from a transmission term trans(n_i, D_k) = max{ trans(a, D_k) : a ∈ parent(n_i) } and a computation term comp(n_i, D_k) = vol(n_i) / compute(D_k), where trans(n_i, D_k) represents the transmission time taken for node n_i to be transmitted to host D_k, parent(n_i) represents the set of all parent nodes of n_i, and trans(a, D_k) denotes the time taken from the host on which node a resides to D_k; comp(n_i, D_k) represents the computation time of node n_i when executed on host D_k, vol(n_i) is the computing resource node n_i needs to consume, and compute(D_k) represents the computing frequency of host D_k.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program, when executed by a processor, performs the steps of the method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110375737.1A CN113176933B (en) | 2021-04-08 | 2021-04-08 | Dynamic cloud network interconnection method for massive workflow tasks |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113176933A true CN113176933A (en) | 2021-07-27 |
CN113176933B CN113176933B (en) | 2023-05-02 |
Family
ID=76923919
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110375737.1A Active CN113176933B (en) | 2021-04-08 | 2021-04-08 | Dynamic cloud network interconnection method for massive workflow tasks |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113176933B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050071843A1 (en) * | 2001-12-20 | 2005-03-31 | Hong Guo | Topology aware scheduling for a multiprocessor system |
CN107301500A (en) * | 2017-06-02 | 2017-10-27 | 北京工业大学 | A kind of workflow schedule method looked forward to the prospect based on critical path task |
CN108984284A (en) * | 2018-06-26 | 2018-12-11 | 杭州比智科技有限公司 | DAG method for scheduling task and device based on off-line calculation platform |
CN109918182A (en) * | 2019-01-23 | 2019-06-21 | 中国人民解放军战略支援部队信息工程大学 | More GPU task dispatching methods under virtualization technology |
CN110825527A (en) * | 2019-11-08 | 2020-02-21 | 北京理工大学 | Deadline-budget driven scientific workflow scheduling method in cloud environment |
CN112486641A (en) * | 2020-11-18 | 2021-03-12 | 鹏城实验室 | Task scheduling method based on graph neural network |
- 2021-04-08: application CN202110375737.1A granted as patent CN113176933B (status: Active)
Non-Patent Citations (1)
Title |
---|
Zhan Wenhan, "Research on Optimization of Computation Offloading Scheduling and Resource Management Strategies in Mobile Edge Networks", China Doctoral Dissertations Full-text Database (Information Science and Technology) * |
Also Published As
Publication number | Publication date |
---|---|
CN113176933B (en) | 2023-05-02 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |