CN117348987A - Task scheduling method, device, computer equipment, storage medium and program product
- Publication number: CN117348987A
- Application number: CN202311163750.6A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
Abstract
The application relates to a task scheduling method, a task scheduling device, computer equipment, a storage medium and a program product. The method comprises the following steps: acquiring a task to be executed from a task queue; acquiring a plurality of task scheduling rules corresponding to the task to be executed according to the task to be executed; determining a target computing node corresponding to a task to be executed from a plurality of computing nodes according to a plurality of task scheduling rules by adopting a multithreading parallel mode; the target computing node corresponds to a plurality of task scheduling rules. By adopting the method, the task scheduling efficiency can be improved.
Description
Technical Field
The present disclosure relates to the field of cloud computing technologies, and in particular, to a task scheduling method, a task scheduling device, a computer device, a storage medium, and a program product.
Background
With the development of cloud computing technology, technicians may employ nodes in a cloud platform to perform cloud computing tasks. Because the cloud platform comprises various cloud computing tasks and various nodes, and different cloud computing tasks comprise at least one scheduling rule, selecting appropriate nodes for different cloud computing tasks according to different scheduling rules is crucial to the cloud platform.
However, in the conventional method, in the process of selecting appropriate nodes for different cloud computing tasks according to different scheduling rules, the scheduling efficiency is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a task scheduling method, apparatus, computer device, storage medium, and program product that can improve task scheduling efficiency.
In a first aspect, the present application provides a task scheduling method, including:
acquiring a task to be executed from a task queue;
acquiring a plurality of task scheduling rules corresponding to the task to be executed according to the task to be executed;
determining a target computing node corresponding to the task to be executed from a plurality of computing nodes according to the task scheduling rules by adopting a multithreading parallel mode; the target computing node corresponds to the plurality of task scheduling rules.
The method and the device acquire a plurality of task scheduling rules corresponding to the task to be executed, and determine, in a multithreading parallel manner and according to those rules, the target computing node corresponding to the task to be executed from a plurality of computing nodes. Because the target computing node corresponds to the plurality of task scheduling rules, that is, it can satisfy the plurality of task scheduling rules simultaneously, the method comprehensively considers the plurality of task scheduling rules corresponding to the task to be executed and determines the corresponding target computing node in parallel through multiple threads, so that task scheduling can be realized more quickly and accurately.
In one embodiment, the determining, by using a multithreading parallel manner, a target computing node corresponding to the task to be executed from a plurality of computing nodes according to the plurality of task scheduling rules includes:
acquiring a plurality of computing nodes from a computing node list, and determining target threads corresponding to the computing nodes from a thread pool according to the computing nodes; the number of the target threads is less than or equal to the number of the computing nodes;
and for each target thread, determining a target computing node corresponding to the task to be executed from computing nodes corresponding to the target threads according to the task scheduling rules in a parallel mode.
In this embodiment, a plurality of computing nodes are obtained from a computing node list, and target threads corresponding to the plurality of computing nodes can be accurately determined from a thread pool according to the plurality of computing nodes. The number of the target threads is smaller than or equal to the number of the computing nodes, so that the target computing nodes can be determined in parallel by adopting a plurality of threads, and the task scheduling efficiency can be improved. For each target thread, a parallel mode can be adopted, and the target computing node corresponding to the task to be executed can be rapidly and accurately determined from the computing nodes corresponding to the target threads according to a plurality of task scheduling rules.
In one embodiment, the determining, according to the task scheduling rules, a target computing node corresponding to the task to be executed from computing nodes corresponding to the target threads includes:
acquiring node attributes of the computing nodes corresponding to the target threads;
judging whether the node attribute meets each task scheduling rule in the plurality of task scheduling rules or not;
and if the node attribute meets each task scheduling rule in the task scheduling rules, taking the computing node corresponding to the target thread as the target computing node corresponding to the task to be executed.
In this embodiment, node attributes of a computing node corresponding to a target thread are obtained; judging whether the node attribute meets each task scheduling rule in a plurality of task scheduling rules; and if the node attribute meets each task scheduling rule in the task scheduling rules, taking the computing node corresponding to the target thread as the target computing node corresponding to the task to be executed. According to the method and the device, whether the computing node corresponding to the target thread is the target computing node corresponding to the task to be executed can be accurately determined according to whether the node attribute of the computing node corresponding to the target thread meets each task scheduling rule in a plurality of task scheduling rules.
In one embodiment, the determining, according to the plurality of computing nodes, a target thread corresponding to the plurality of computing nodes from a thread pool includes:
for each computing node in the computing node list, determining a target thread corresponding to the computing node from a thread pool according to the computing node; the target threads are in one-to-one correspondence with the computing nodes.
In this embodiment, for each computing node in the computing node list, a target thread corresponding to the computing node can be accurately determined from the thread pool according to the computing node. Because the target threads are in one-to-one correspondence with the computing nodes, whether the computing nodes corresponding to the target threads are the target computing nodes or not can be determined in parallel by adopting the target threads, and the task scheduling efficiency can be improved.
In one embodiment, the obtaining, according to the task to be executed, a plurality of task scheduling rules corresponding to the task to be executed includes:
acquiring a plurality of task attributes of the task to be executed; the task attributes comprise a first task attribute and a second task attribute;
selecting a first task scheduling rule corresponding to the first task attribute from a general task scheduling rule list according to the first task attribute;
According to the second task attribute, a second task scheduling rule corresponding to the second task attribute is newly added;
and generating a plurality of task scheduling rules corresponding to the task to be executed according to the first task scheduling rule and the second task scheduling rule.
In this embodiment, a plurality of task attributes of a task to be executed are obtained; the task attributes include a first task attribute and a second task attribute. The general first task scheduling rule corresponding to the first task attribute can be accurately selected from the general task scheduling rule list according to the first task attribute. According to the second task attribute, a special second task scheduling rule corresponding to the second task attribute can be more accurately added. Therefore, a plurality of task scheduling rules corresponding to the task to be executed can be accurately generated according to the accurate first task scheduling rule and the accurate second task scheduling rule.
In one embodiment, the method further comprises:
determining a task execution node from target computing nodes corresponding to the tasks to be executed;
and executing the task to be executed through the task execution node.
In this embodiment, because the target computing nodes have been determined accurately, an appropriate task execution node can be determined from the target computing nodes corresponding to the task to be executed, so that the task to be executed is executed by a suitable node.
In a second aspect, the present application further provides a task scheduling device, including:
the task to be executed acquisition module is used for acquiring a task to be executed from the task queue;
the task scheduling rule acquisition module is used for acquiring a plurality of task scheduling rules corresponding to the task to be executed according to the task to be executed;
the target computing node determining module is used for determining a target computing node corresponding to the task to be executed from a plurality of computing nodes according to the task scheduling rules in a multithreading parallel mode; the target computing node corresponds to the plurality of task scheduling rules.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of the method in any of the embodiments of the first aspect described above when the processor executes the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method in any of the embodiments of the first aspect described above.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprising a computer program which, when executed by a processor, implements the steps of the method in any of the embodiments of the first aspect described above.
The task scheduling method, the device, the computer equipment, the storage medium and the program product acquire a task to be executed from a task queue; acquire a plurality of task scheduling rules corresponding to the task to be executed according to the task to be executed; and determine, in a multithreading parallel manner and according to the plurality of task scheduling rules, a target computing node corresponding to the task to be executed from a plurality of computing nodes, where the target computing node corresponds to the plurality of task scheduling rules. Because the target computing node corresponds to the plurality of task scheduling rules, that is, it can satisfy the plurality of task scheduling rules simultaneously, the method comprehensively considers the plurality of task scheduling rules corresponding to the task to be executed and determines the corresponding target computing node in parallel through multiple threads, so that task scheduling can be realized more quickly and accurately.
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings required for describing the embodiments or the related art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these drawings without inventive effort.
FIG. 1 is an application environment diagram of a task scheduling method in one embodiment;
FIG. 2 is a flow diagram of a task scheduling method in one embodiment;
FIG. 3 is a flow diagram of a target computing node parallel determination step in one embodiment;
FIG. 4 is a flow diagram of the target computing node determination step in one embodiment;
FIG. 5 is a flow chart illustrating steps for generating a plurality of task scheduling rules in one embodiment;
FIG. 6 is a flow chart of a task scheduling method in an alternative embodiment;
FIG. 7 is a schematic overall flow diagram of task scheduling in one embodiment;
FIG. 8 is a block diagram of a task scheduler in one embodiment;
fig. 9 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
With the development of cloud computing technology, technicians may employ nodes in a cloud platform to perform cloud computing tasks. Because the cloud platform comprises various cloud computing tasks and various nodes, and different cloud computing tasks comprise at least one scheduling rule, selecting appropriate nodes for different cloud computing tasks according to different scheduling rules is crucial to the cloud platform.
However, the conventional method can select a suitable node for a cloud computing task only according to a single scheduling rule, so it suffers from low scheduling efficiency when suitable nodes are selected for different cloud computing tasks according to different scheduling rules (that is, during task scheduling).
The task scheduling method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. The cloud platform 100 includes a plurality of computing nodes 102. A data storage system may store data that the plurality of computing nodes 102 need to process; the data storage system may be integrated on the plurality of computing nodes 102, or may be located on a cloud or other network server. The cloud platform 100 acquires a task to be executed from a task queue; obtains a plurality of task scheduling rules corresponding to the task to be executed according to the task to be executed; and determines, in a multithreading parallel manner and according to the plurality of task scheduling rules, a target computing node corresponding to the task to be executed from the plurality of computing nodes 102, where the target computing node corresponds to the plurality of task scheduling rules. The plurality of computing nodes 102 may be implemented as separate servers or as a server cluster composed of multiple servers.
In an exemplary embodiment, as shown in fig. 2, a task scheduling method is provided, and an example of application of the method to the cloud platform 100 in fig. 1 is described, which includes the following steps 220 to 260.
Wherein:
step 220, obtain the task to be executed from the task queue.
The task queue is a queue stored in the cloud platform in advance, and a plurality of tasks are stored in the task queue. The task to be executed refers to a task which needs to be executed by adopting a computing node on the cloud platform. Alternatively, the cloud platform 100 may preset a task queue, and add a plurality of tasks to the task queue. Thus, the cloud platform 100 may perform task queue management on the plurality of tasks in a first-in first-out manner, and may acquire a task to be executed from the plurality of tasks in the task queue by using the heartbeat program. It should be noted that the task to be executed may be any one of a plurality of tasks in the task queue. When a plurality of tasks to be executed need to be acquired, the plurality of tasks to be executed can be sequentially taken out according to the first-in first-out order.
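As a non-limiting illustration of the queue handling described above, the following Python sketch models a first-in first-out task queue polled by a heartbeat loop. The Task fields, queue contents and poll interval are assumptions introduced for illustration and are not taken from the embodiments.

```python
# Illustrative sketch only: a FIFO task queue polled by a heartbeat loop.
import queue
import time
from dataclasses import dataclass, field


@dataclass
class Task:
    task_id: str
    attributes: dict = field(default_factory=dict)  # e.g. required memory, GPU count (assumed)


task_queue: "queue.Queue[Task]" = queue.Queue()  # first-in, first-out


def heartbeat_fetch(poll_interval: float = 1.0):
    """Poll the task queue on each heartbeat and yield tasks in FIFO order."""
    while True:
        try:
            yield task_queue.get(timeout=poll_interval)
        except queue.Empty:
            time.sleep(poll_interval)  # nothing pending; wait for the next heartbeat


# Usage: the scheduler enqueues tasks elsewhere and consumes them here, e.g.
# task_queue.put(Task("task-1", {"memory_gb": 8, "gpus": 3}))
```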
Step 240, obtaining a plurality of task scheduling rules corresponding to the task to be executed according to the task to be executed.
Alternatively, the cloud platform 100 may preset a task scheduling rule list, and add a plurality of scheduling rules to the task scheduling rule list; thus, the cloud platform 100 may determine, from among the plurality of scheduling rules of the task scheduling rule list, a plurality of task scheduling rules corresponding to the task to be performed according to the task to be performed. Alternatively, the cloud platform 100 may configure, for the task to be executed, a plurality of task scheduling rules corresponding to the task to be executed directly according to the task to be executed; or, the cloud platform 100 may select, according to the task to be executed, a task scheduling rule corresponding to the task to be executed from a plurality of scheduling rules in the task scheduling rule list; and configuring task scheduling rules corresponding to the tasks to be executed for the tasks to be executed according to the tasks to be executed aiming at task scheduling rules not included in the plurality of scheduling rules. Of course, the embodiment of the present application does not limit the manner of acquiring the plurality of task scheduling rules. The task scheduling rule refers to a policy or condition that needs to be met in the process of executing a task to be executed.
Step 260, determining a target computing node corresponding to the task to be executed from a plurality of computing nodes according to a plurality of task scheduling rules by adopting a multithreading parallel mode; the target computing node corresponds to a plurality of task scheduling rules.
Alternatively, the cloud platform 100 may determine, in a multithreaded parallel manner, a target computing node corresponding to a task to be executed from a plurality of computing nodes according to a plurality of task scheduling rules. The computing nodes refer to nodes which can be used for executing tasks in the cloud platform, and the target computing nodes refer to nodes used for executing tasks to be executed. The target computing node corresponds to a plurality of task scheduling rules, that is, it can be understood that one target computing node can simultaneously satisfy a plurality of task scheduling rules.
Optionally, assuming that different threads correspond to different computing nodes, for each thread, the cloud platform 100 may determine, according to a plurality of task scheduling rules, whether the computing node corresponding to the thread is a target computing node corresponding to a task to be executed; thus, at least one target computing node may be determined in parallel. Or, assuming that different threads correspond to different task scheduling rules, for each thread, the cloud platform 100 may determine, from a plurality of computing nodes, a computing node that meets the task scheduling rule corresponding to the thread according to the task scheduling rule corresponding to the thread; therefore, the computing nodes meeting the task scheduling rules corresponding to the threads can be determined in parallel, and at least one target computing node is determined according to the computing nodes meeting the task scheduling rules corresponding to the threads. Of course, the embodiment of the present application does not limit the manner of determining the target computing node.
In the task scheduling method, a task to be executed is obtained from a task queue; a plurality of task scheduling rules corresponding to the task to be executed are obtained according to the task to be executed; and a target computing node corresponding to the task to be executed is determined from a plurality of computing nodes according to the plurality of task scheduling rules in a multithreading parallel manner, where the target computing node corresponds to the plurality of task scheduling rules. Because the target computing node corresponds to the plurality of task scheduling rules, that is, it can satisfy the plurality of task scheduling rules simultaneously, the method comprehensively considers the plurality of task scheduling rules corresponding to the task to be executed and determines the corresponding target computing node in parallel through multiple threads, so that task scheduling can be realized more quickly and accurately.
In the above embodiment, the method of determining the target computing node corresponding to the task to be executed from the plurality of computing nodes according to the plurality of task scheduling rules by adopting a multithreading parallel mode is referred to, and a specific method thereof is described below. In one exemplary embodiment, as shown in fig. 3, S260 includes:
Step 320, obtaining a plurality of computing nodes from the computing node list, and determining target threads corresponding to the plurality of computing nodes from the thread pool according to the plurality of computing nodes; the number of target threads is less than or equal to the number of compute nodes.
The computing node list is a preset node list in the cloud platform, and a plurality of computing nodes are stored in the computing node list. The thread pool comprises a plurality of threads, and the plurality of threads are used for determining the target computing node in parallel. By way of example, 16 threads may be included in the thread pool. The target thread is a thread corresponding to a plurality of computing nodes in the thread pool, and the target thread is used for determining whether the plurality of computing nodes conform to a plurality of task scheduling rules.
Alternatively, the cloud platform 100 may preset a computing node list, and add a plurality of computing nodes to the computing node list, so that the cloud platform 100 may acquire the plurality of computing nodes from the computing node list. For each computing node of the plurality of computing nodes, the cloud platform 100 may determine, from the thread pool, a target thread corresponding to the computing node according to the computing node, and thus may determine the target thread corresponding to each computing node. It should be noted that, the number of the target threads is smaller than or equal to the number of the computing nodes, that is, the target threads and the computing nodes are in a one-to-one or one-to-many relationship, that is, one target thread may correspond to at least one computing node, and the target threads corresponding to each computing node may be repeated.
In one alternative embodiment, determining, from a thread pool, a target thread corresponding to a plurality of computing nodes from the plurality of computing nodes includes:
for each computing node in the computing node list, determining a target thread corresponding to the computing node from a thread pool according to the computing node; the target threads are in one-to-one correspondence with the computing nodes.
Alternatively, for each computing node in the list of computing nodes, cloud platform 100 may determine a target thread corresponding to the computing node from the thread pool according to the computing node. In this embodiment, the target threads are in one-to-one correspondence with the computing nodes, that is, one target thread may correspond to one computing node, and the target threads corresponding to the computing nodes may not be repeated. For example, assuming that the plurality of computing nodes include a computing node 1, a computing node 2, and a computing node 3, the thread pool includes a thread 1, a thread 2, and a thread 3, and it is preset that whether the computing node 1 meets a plurality of task scheduling rules is determined in the thread 1, whether the computing node 2 meets a plurality of task scheduling rules is determined in the thread 2, and whether the computing node 3 meets a plurality of task scheduling rules is determined in the thread 3, then it may be determined that a target thread corresponding to the computing node 1 is the thread 1, a target thread corresponding to the computing node 2 is the thread 2, and a target thread corresponding to the computing node 3 is the thread 3.
Step 340, for each target thread, determining a target computing node corresponding to the task to be executed from computing nodes corresponding to the target threads according to a plurality of task scheduling rules in a parallel manner.
Optionally, for each target thread, a parallel mode is adopted, and a target computing node corresponding to the task to be executed is determined from computing nodes corresponding to the target threads according to a plurality of task scheduling rules. For example, for each target thread, the cloud platform 100 may determine, in the target thread, according to a plurality of task scheduling rules, whether node information of a computing node corresponding to the target thread satisfies the plurality of task scheduling rules. Therefore, the cloud platform 100 may determine whether the computing node corresponding to the target thread is the target computing node corresponding to the task to be executed according to whether the node information of the computing node corresponding to the target thread satisfies the plurality of task scheduling rules. Further, the cloud platform 100 may determine, in parallel, at least one target computing node corresponding to the task to be performed from the computing nodes corresponding to the target threads.
In this embodiment, a plurality of computing nodes are obtained from a computing node list, and target threads corresponding to the plurality of computing nodes can be accurately determined from a thread pool according to the plurality of computing nodes. The number of the target threads is smaller than or equal to the number of the computing nodes, so that the target computing nodes can be determined in parallel by adopting a plurality of threads, and the task scheduling efficiency can be improved. For each target thread, a parallel mode can be adopted, and the target computing node corresponding to the task to be executed can be rapidly and accurately determined from the computing nodes corresponding to the target threads according to a plurality of task scheduling rules.
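The following Python sketch shows one possible realization of the parallel determination described above, under the assumption that each scheduling rule is a predicate over the task attributes and the node attributes. A thread pool (16 threads, following the example given earlier) evaluates the computing nodes in parallel, and a node becomes a target computing node only if every rule is satisfied; the data structures and function names are illustrative assumptions, not the patented implementation.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Dict, List

Rule = Callable[[dict, dict], bool]  # (task_attributes, node_attributes) -> bool


def node_matches(task_attrs: dict, node_attrs: dict, rules: List[Rule]) -> bool:
    """A computing node is a candidate only if it satisfies every scheduling rule."""
    return all(rule(task_attrs, node_attrs) for rule in rules)


def find_target_nodes(task_attrs: dict,
                      node_list: Dict[str, dict],
                      rules: List[Rule],
                      pool_size: int = 16) -> List[str]:
    # One future per computing node; with fewer threads than nodes, a thread simply
    # handles more than one node, matching the "less than or equal" relation between
    # the number of target threads and the number of computing nodes.
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        futures = {
            name: pool.submit(node_matches, task_attrs, attrs, rules)
            for name, attrs in node_list.items()
        }
    return [name for name, fut in futures.items() if fut.result()]
```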
In the above embodiments, the determination of the target computing node corresponding to the task to be executed from the computing nodes corresponding to the target threads according to the plurality of task scheduling rules is involved, and a specific method thereof is described below. In one exemplary embodiment, as shown in fig. 4, S340 includes:
step 420, obtaining node attributes of the computing nodes corresponding to the target threads.
Optionally, the cloud platform 100 may monitor the working states of the plurality of computing nodes in the computing node list in real time, and record node attributes of the plurality of computing nodes. Thus, for each target thread, the cloud platform 100 may obtain the node attributes of the computing node corresponding to that target thread. The node attributes may include, but are not limited to, the working state of the computing node (normal or abnormal operation), the amount of memory in the computing node, whether the computing node contains a GPU (graphics processing unit), and the like.
Step 440, determining whether the node attribute satisfies each task scheduling rule of the plurality of task scheduling rules.
Step 460, if the node attribute satisfies each task scheduling rule of the plurality of task scheduling rules, the computing node corresponding to the target thread is used as the target computing node corresponding to the task to be executed.
Alternatively, the cloud platform 100 may determine whether the node attributes satisfy each of the plurality of task scheduling rules. If the node attributes satisfy each of the plurality of task scheduling rules, the computing node corresponding to the target thread is taken as a target computing node corresponding to the task to be executed. For example, for target thread 1, assume that task scheduling rule 1 corresponding to the task to be executed is that executing the task requires 3 GPUs. The cloud platform 100 may then determine, according to the node attributes of the computing node corresponding to target thread 1, the number of GPUs included in that computing node, and judge whether the computing node contains 3 available GPUs. If the computing node is judged to include 3 available GPUs, the node attributes of the computing node satisfy task scheduling rule 1 corresponding to the task to be executed. Thus, if the cloud platform 100 determines that the node attributes of the computing node satisfy each of the plurality of task scheduling rules, the cloud platform 100 may take the computing node corresponding to target thread 1 as a target computing node corresponding to the task to be executed.
In this embodiment, node attributes of a computing node corresponding to a target thread are obtained; judging whether the node attribute meets each task scheduling rule in a plurality of task scheduling rules; and if the node attribute meets each task scheduling rule in the task scheduling rules, taking the computing node corresponding to the target thread as the target computing node corresponding to the task to be executed. According to the method and the device, whether the computing node corresponding to the target thread is the target computing node corresponding to the task to be executed can be accurately determined according to whether the node attribute of the computing node corresponding to the target thread meets each task scheduling rule in a plurality of task scheduling rules.
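A minimal sketch of the GPU rule used in the example above, assuming simple attribute keys such as "gpus", "available_gpus" and "state" (these key names are assumptions): the rule returns true only when the computing node is in a normal working state and offers at least as many available GPUs as the task requires.

```python
def gpu_rule(task_attrs: dict, node_attrs: dict) -> bool:
    required = task_attrs.get("gpus", 0)               # e.g. the task needs 3 GPUs
    available = node_attrs.get("available_gpus", 0)
    return node_attrs.get("state") == "normal" and available >= required


# Example: a node in a normal state with 4 free GPUs satisfies a task needing 3.
assert gpu_rule({"gpus": 3}, {"state": "normal", "available_gpus": 4})
```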
In the above embodiment, the task scheduling rules corresponding to the task to be executed are acquired according to the task to be executed, and a specific method thereof is described below. In one exemplary embodiment, as shown in fig. 5, S240 includes:
step 520, obtaining a plurality of task attributes of the task to be executed; the task attributes include a first task attribute and a second task attribute.
Optionally, the cloud platform 100 may acquire the plurality of task attributes of the task to be executed in real time; alternatively, the cloud platform 100 may acquire the plurality of task attributes of the task to be executed at regular intervals. The task attributes may include, but are not limited to, the amount of memory required for the task to be executed, the number of GPUs (graphics processing units) required for the task to be executed, and the like. The task attributes include a first task attribute and a second task attribute. The first task attribute refers to a general task attribute, which may exist not only in the task to be executed but also in other cloud computing tasks. The second task attribute refers to a task attribute specific to the task to be executed.
Step 540, according to the first task attribute, selecting a first task scheduling rule corresponding to the first task attribute from the general task scheduling rule list.
Alternatively, the cloud platform 100 may preset a general task scheduling rule list, and add a plurality of scheduling rules to the general task scheduling rule list, so that the general task scheduling rule list includes a plurality of scheduling rules. Thus, the cloud platform 100 may select, according to the first task attribute of the task to be executed, a first task scheduling rule corresponding to the first task attribute of the task to be executed from the universal task scheduling rule list. The first task scheduling rule refers to a general scheduling rule corresponding to a task to be executed.
Step 560, according to the second task attribute, a second task scheduling rule corresponding to the second task attribute is newly added.
Optionally, the cloud platform 100 may configure or add, for the second task attribute and according to the second task attribute of the task to be executed, a second task scheduling rule corresponding to the second task attribute. The second task scheduling rule refers to a dedicated scheduling rule corresponding to the task to be executed. It should be noted that, according to the actual situation of the task to be executed, the cloud platform 100 may add task scheduling rules to, modify them in, or delete them from the general task scheduling rule list. For example, in the process of actually using the general task scheduling rule list, the cloud platform 100 may further add the newly added second task scheduling rule to the general task scheduling rule list.
In step 580, a plurality of task scheduling rules corresponding to the task to be executed are generated according to the first task scheduling rule and the second task scheduling rule.
Alternatively, the cloud platform 100 may generate a plurality of task scheduling rules corresponding to the task to be executed according to the first task scheduling rule and the second task scheduling rule, that is, the plurality of task scheduling rules corresponding to the task to be executed may be understood to include a general first task scheduling rule and a specific second task scheduling rule.
In this embodiment, a plurality of task attributes of a task to be executed are obtained; the task attributes include a first task attribute and a second task attribute. The general first task scheduling rule corresponding to the first task attribute can be accurately selected from the general task scheduling rule list according to the first task attribute. According to the second task attribute, a special second task scheduling rule corresponding to the second task attribute can be more accurately added. Therefore, a plurality of task scheduling rules corresponding to the task to be executed can be accurately generated according to the accurate first task scheduling rule and the accurate second task scheduling rule.
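The following sketch illustrates, under assumed attribute and rule names, how general first task scheduling rules may be selected from a preset general rule list while a dedicated second task scheduling rule is newly added for the task-specific attribute, with the two then merged into the plurality of task scheduling rules. It is an illustrative sketch, not the embodiment's implementation.

```python
# Assumed general rule list: each entry maps a first (general) task attribute
# to a predicate over the task attributes t and the node attributes n.
GENERAL_RULE_LIST = {
    "memory_gb": lambda t, n: n.get("free_memory_gb", 0) >= t["memory_gb"],
    "gpus": lambda t, n: n.get("available_gpus", 0) >= t["gpus"],
}


def build_rules(first_attrs: dict, second_attrs: dict):
    # First task attributes -> select the matching general rules from the list.
    rules = [GENERAL_RULE_LIST[key] for key in first_attrs if key in GENERAL_RULE_LIST]
    # Second task attribute -> newly add a dedicated, task-specific rule.
    for key, expected in second_attrs.items():
        rules.append(lambda t, n, k=key, v=expected: n.get(k) == v)
    return rules


# Usage: a task needing 8 GB of memory that must also run on an SSD-backed node.
rules = build_rules({"memory_gb": 8}, {"disk_type": "ssd"})
```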
In the above embodiment, the method of determining the target computing node corresponding to the task to be executed from the multiple computing nodes according to the multiple task scheduling rules by adopting the multithreading parallel mode is referred to, and another implementation method is described below. In an exemplary embodiment, the task scheduling method further includes:
And determining a task execution node from target computing nodes corresponding to the tasks to be executed.
And executing the task to be executed through the task execution node.
Optionally, the cloud platform 100 may randomly select any one target computing node from the target computing nodes corresponding to the task to be executed, and determine the target computing node as the task executing node; alternatively, the cloud platform 100 may select a target computing node from target computing nodes corresponding to the task to be executed according to a preset priority order, and determine the target computing node as the task executing node. Of course, the embodiment of the present application does not limit the manner of determining the task execution node. The preset priority order may be specifically set according to a node attribute of the target computing node, which is not limited in the embodiment of the present application. Thus, the cloud platform 100 may employ the task execution node to execute the task to be executed.
In this embodiment, because the target computing nodes have been determined accurately, an appropriate task execution node can be determined from the target computing nodes corresponding to the task to be executed, so that the task to be executed is executed by a suitable node.
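As an illustration of the two selection strategies mentioned above (random selection, or selection according to a preset priority order), the following sketch picks a task execution node from the target computing nodes; the "priority" attribute is an assumed example of a node attribute used for ordering.

```python
import random
from typing import Dict, List


def pick_execution_node(target_nodes: List[str],
                        node_attrs: Dict[str, dict],
                        by_priority: bool = False) -> str:
    if by_priority:
        # Preset priority order: here a lower "priority" value means the node is preferred.
        return min(target_nodes, key=lambda name: node_attrs[name].get("priority", 0))
    return random.choice(target_nodes)  # otherwise pick any target node at random
```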
In an alternative embodiment, as shown in fig. 6, a task scheduling method is provided, which is applied to the cloud platform 100, and includes:
step 602, obtaining a task to be executed from a task queue;
step 604, obtaining a plurality of task attributes of a task to be executed; the task attributes comprise a first task attribute and a second task attribute;
step 606, selecting a first task scheduling rule corresponding to the first task attribute from the universal task scheduling rule list according to the first task attribute;
step 608, according to the second task attribute, a second task scheduling rule corresponding to the second task attribute is newly added;
step 610, generating a plurality of task scheduling rules corresponding to the task to be executed according to the first task scheduling rule and the second task scheduling rule;
step 612, obtaining a plurality of computing nodes from the computing node list, and determining a target thread corresponding to each computing node from the thread pool according to the computing node for each computing node in the computing node list; the target threads are in one-to-one correspondence with the computing nodes;
step 614, for each target thread, obtaining node attributes of the computing nodes corresponding to the target threads in a parallel manner;
step 616, determining whether the node attribute satisfies each task scheduling rule of the plurality of task scheduling rules;
Step 618, if the node attribute satisfies each task scheduling rule of the plurality of task scheduling rules, taking the computing node corresponding to the target thread as the target computing node corresponding to the task to be executed;
step 620, determining a task execution node from target computing nodes corresponding to the task to be executed;
at step 622, the task to be performed is performed by the task execution node.
As an example, as shown in fig. 7, fig. 7 is an overall flow diagram of task scheduling in one embodiment. In S702, the cloud platform 100 may obtain a task to be executed and at least one task attribute of the task to be executed from a plurality of tasks in the task queue. In S704, the cloud platform 100 may obtain a plurality of computing nodes and the node attributes of each computing node from the computing node list. In S706, the cloud platform 100 may determine, according to the at least one task attribute of the task to be executed, a plurality of task scheduling rules corresponding to the at least one task attribute, and add the plurality of task scheduling rules to a target task scheduling rule list. In S708, the cloud platform 100 may determine, in parallel, a target computing node corresponding to the task to be executed from the computing nodes by using a plurality of threads in the thread pool, according to the at least one task attribute of the task to be executed, the node attributes of each computing node, and the plurality of task scheduling rules. In S710, the cloud platform 100 may determine a task execution node from the target computing nodes corresponding to the task to be executed, and execute the task to be executed through the task execution node.
In one exemplary embodiment, S708 includes the following. Assume that the task attribute of the task to be executed is denoted as x, the node attribute of the computing node is denoted as y, and the plurality of task scheduling rules include scheduling rule A, scheduling rule B, scheduling rule C and scheduling rule D. First, the cloud platform 100 may input the task attribute x and the node attribute y into the functional relationship corresponding to scheduling rule A for computation, so as to obtain a computation result A(x, y) corresponding to scheduling rule A. If the computation result A(x, y) = true, the computing node corresponding to the node attribute y conforms to scheduling rule A; if A(x, y) = false, the computing node corresponding to the node attribute y does not conform to scheduling rule A. In parallel, the cloud platform 100 may further input the task attribute x and the node attribute y into the functional relationships corresponding to scheduling rule B, scheduling rule C and scheduling rule D, respectively, to obtain a computation result B(x, y) corresponding to scheduling rule B, a computation result C(x, y) corresponding to scheduling rule C, and a computation result D(x, y) corresponding to scheduling rule D.
Next, the cloud platform 100 may perform an AND logic operation on the computation results A(x, y), B(x, y), C(x, y) and D(x, y), that is, A(x, y) & B(x, y) & C(x, y) & D(x, y), to obtain an AND logic operation result. If the AND logic operation result is 1, the computing node corresponding to the node attribute y conforms to scheduling rule A, scheduling rule B, scheduling rule C and scheduling rule D at the same time, and the cloud platform 100 may take the computing node corresponding to the node attribute y as a target computing node corresponding to the task to be executed. Then, for the other computing nodes among the plurality of computing nodes, the same method can be used to judge whether each of them can serve as a target computing node corresponding to the task to be executed, so that every target computing node corresponding to the task to be executed can be determined.
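The AND operation described above can be illustrated with the following sketch, which keeps the same notation: A, B, C and D are rule functions of the task attribute x and the node attribute y, and the computing node corresponding to y is a target computing node only when A(x, y) & B(x, y) & C(x, y) & D(x, y) equals 1 (true). The concrete rule bodies and attribute keys are assumptions for illustration.

```python
def A(x, y): return y["free_memory_gb"] >= x["memory_gb"]   # memory rule (assumed)
def B(x, y): return y["available_gpus"] >= x["gpus"]        # GPU rule (assumed)
def C(x, y): return y["state"] == "normal"                  # node health rule (assumed)
def D(x, y): return y["region"] == x["region"]              # locality rule (assumed)


def is_target_node(x: dict, y: dict) -> bool:
    # AND logic operation on the four rule results; true means the node
    # satisfies scheduling rules A, B, C and D at the same time.
    return bool(A(x, y) & B(x, y) & C(x, y) & D(x, y))


x = {"memory_gb": 8, "gpus": 3, "region": "east"}
y = {"free_memory_gb": 16, "available_gpus": 4, "state": "normal", "region": "east"}
assert is_target_node(x, y)
```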
In the task scheduling method, a plurality of task scheduling rules corresponding to the task to be executed can be acquired, and a target computing node corresponding to the task to be executed can be determined from a plurality of computing nodes in a multithreading parallel manner according to the plurality of task scheduling rules. Because the target computing node corresponds to the plurality of task scheduling rules, that is, it can satisfy the plurality of task scheduling rules simultaneously, the method comprehensively considers the plurality of task scheduling rules corresponding to the task to be executed and determines the corresponding target computing node in parallel through multiple threads, so that task scheduling can be realized more quickly and accurately.
Based on the above embodiments, first, the task attribute of the task to be executed and the node attribute of the computing node may be both input into the functional relationship corresponding to each scheduling rule for calculation, so as to obtain the scheduling result corresponding to each scheduling rule, so that it may be determined whether the task to be executed may be executed by using the computing node according to the task attribute of the task to be executed and the node attribute of the computing node. Secondly, the cloud platform can perform AND logic operation on the scheduling results corresponding to the scheduling rules to obtain AND logic operation results, and determines each computing node corresponding to the logic operation result 1 as a target computing node, so that the cloud platform can determine the target computing node which can accord with all the scheduling rules, namely, the cloud platform can comprehensively consider a plurality of scheduling rules, and can select a more suitable node to execute a cloud computing task. Thirdly, because the thread pool comprises a plurality of threads, the cloud platform can execute a plurality of scheduling tasks in parallel, so that a plurality of scheduling results can be obtained simultaneously, and further, the task scheduling efficiency can be improved.
It should be understood that, although the steps in the flowcharts related to the above embodiments are shown sequentially as indicated by arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be performed at different moments, and the execution order of these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with at least some of the other steps, sub-steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a task scheduling device for realizing the task scheduling method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in the embodiments of one or more task scheduling devices provided below may refer to the limitation of the task scheduling method hereinabove, and will not be repeated herein.
In an exemplary embodiment, as shown in fig. 8, there is provided a task scheduling device 800 including: a task to be executed acquisition module 820, a task scheduling rule acquisition module 840, and a target computing node determination module 860, wherein:
the task to be executed acquisition module 820 is configured to acquire a task to be executed from the task queue.
The task scheduling rule obtaining module 840 is configured to obtain a plurality of task scheduling rules corresponding to the task to be executed according to the task to be executed.
The target computing node determining module 860 is configured to determine, according to a plurality of task scheduling rules, a target computing node corresponding to a task to be executed from a plurality of computing nodes in a multithreading parallel manner; the target computing node corresponds to a plurality of task scheduling rules.
In one embodiment, the target computing node determination module 860 includes:
the target thread determining unit is used for acquiring a plurality of computing nodes from the computing node list and determining target threads corresponding to the computing nodes from the thread pool according to the computing nodes; the number of target threads is less than or equal to the number of compute nodes;
the target computing node determining unit is used for determining target computing nodes corresponding to tasks to be executed from computing nodes corresponding to the target threads according to a plurality of task scheduling rules in a parallel mode for each target thread.
In one embodiment, the target computing node determining unit comprises:
the node attribute acquisition subunit is used for acquiring the node attribute of the computing node corresponding to the target thread;
the judging subunit is used for judging whether the node attribute meets each task scheduling rule in the plurality of task scheduling rules;
and the target computing node determining subunit is used for taking the computing node corresponding to the target thread as the target computing node corresponding to the task to be executed under the condition that the node attribute meets each task scheduling rule in the plurality of task scheduling rules.
In one embodiment, the target thread determining unit includes:
a target thread determining subunit, configured to determine, for each computing node in the computing node list, a target thread corresponding to the computing node from the thread pool according to the computing node; the target threads are in one-to-one correspondence with the computing nodes.
In one embodiment, the task scheduling rule acquisition module 840 includes:
the task attribute acquisition unit is used for acquiring a plurality of task attributes of a task to be executed; the task attributes comprise a first task attribute and a second task attribute;
the first task scheduling rule selection unit is used for selecting a first task scheduling rule corresponding to the first task attribute from the general task scheduling rule list according to the first task attribute;
A second task scheduling rule adding unit, configured to add a second task scheduling rule corresponding to a second task attribute according to the second task attribute;
and the task scheduling rule generating unit is used for generating a plurality of task scheduling rules corresponding to the task to be executed according to the first task scheduling rule and the second task scheduling rule.
In one embodiment, the task scheduling device 800 further includes:
the task execution node determining module is used for determining a task execution node from target computing nodes corresponding to the tasks to be executed;
and the task execution module is used for executing the task to be executed through the task execution node.
The respective modules in the task scheduling device described above may be implemented in whole or in part by software, hardware, or a combination thereof. Each module may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one exemplary embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 9. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing task scheduling data. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a task scheduling method.
It will be appreciated by those skilled in the art that the structure shown in fig. 9 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one exemplary embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring a task to be executed from a task queue;
acquiring a plurality of task scheduling rules corresponding to the task to be executed according to the task to be executed;
determining a target computing node corresponding to a task to be executed from a plurality of computing nodes according to a plurality of task scheduling rules by adopting a multithreading parallel mode; the target computing node corresponds to a plurality of task scheduling rules.
In one embodiment, for the step of determining, in a multithreaded parallel manner, the target computing node corresponding to the task to be executed from the plurality of computing nodes according to the plurality of task scheduling rules, the processor, when executing the computer program, further implements the following steps:
acquiring a plurality of computing nodes from a computing node list, and determining target threads corresponding to the computing nodes from a thread pool according to the computing nodes; the number of target threads is less than or equal to the number of computing nodes;
for each target thread, determining, in parallel, a target computing node corresponding to the task to be executed from the computing nodes corresponding to the target thread according to the plurality of task scheduling rules.
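By way of illustration only, a minimal sketch of this parallel step is given below, under the assumption that the thread pool is capped so that the number of target threads never exceeds the number of computing nodes; ThreadPoolExecutor is a standard Python facility, while the function and variable names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def match_nodes(compute_nodes, rules, pool_size=8):
    # the number of target threads never exceeds the number of computing nodes
    workers = min(pool_size, len(compute_nodes)) or 1
    targets = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # each submitted check corresponds to one computing node
        futures = {pool.submit(lambda n=n: all(rule(n) for rule in rules)): n
                   for n in compute_nodes}
        for future in as_completed(futures):
            if future.result():              # the node satisfies every rule
                targets.append(futures[future])
    return targets

nodes = [{"id": "n1", "free_cpu": 4}, {"id": "n2", "free_cpu": 1}]
print(match_nodes(nodes, [lambda n: n["free_cpu"] >= 2]))   # [{'id': 'n1', 'free_cpu': 4}]
```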
In one embodiment, for the step of determining the target computing node corresponding to the task to be executed from the computing nodes corresponding to the target threads according to the plurality of task scheduling rules, the processor, when executing the computer program, further implements the following steps:
acquiring node attributes of the computing nodes corresponding to the target threads;
judging whether the node attribute meets each task scheduling rule in a plurality of task scheduling rules;
and if the node attribute meets each task scheduling rule in the task scheduling rules, taking the computing node corresponding to the target thread as the target computing node corresponding to the task to be executed.
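By way of illustration only, the attribute check described above can be sketched as a single predicate over the node attributes; the attribute names used below are invented for the example and are not part of the embodiments.

```python
def satisfies_all(node_attributes, task_scheduling_rules):
    # a node qualifies only if its attributes meet every rule in the rule set
    return all(rule(node_attributes) for rule in task_scheduling_rules)

rules = [lambda a: a.get("free_mem_gb", 0) >= 8,      # resource rule
         lambda a: a.get("label") == "gpu-pool"]      # placement rule
print(satisfies_all({"free_mem_gb": 16, "label": "gpu-pool"}, rules))   # True
print(satisfies_all({"free_mem_gb": 16, "label": "cpu-pool"}, rules))   # False
```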
In one embodiment, for the step of determining, according to the plurality of computing nodes, the target threads corresponding to the plurality of computing nodes from the thread pool, the processor, when executing the computer program, further implements the following steps:
for each computing node in the computing node list, determining a target thread corresponding to the computing node from the thread pool according to the computing node; the target threads are in one-to-one correspondence with the computing nodes.
In one embodiment, for the step of acquiring, according to the task to be executed, the plurality of task scheduling rules corresponding to the task to be executed, the processor, when executing the computer program, further implements the following steps:
acquiring a plurality of task attributes of a task to be executed; the task attributes comprise a first task attribute and a second task attribute;
selecting a first task scheduling rule corresponding to the first task attribute from the universal task scheduling rule list according to the first task attribute;
according to the second task attribute, a second task scheduling rule corresponding to the second task attribute is newly added;
and generating a plurality of task scheduling rules corresponding to the task to be executed according to the first task scheduling rule and the second task scheduling rule.
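By way of illustration only, the following sketch models the universal task scheduling rule list as a dictionary keyed by the first task attribute and newly adds one rule derived from the second task attribute; the keys, the region rule, and the function name build_rules are assumptions of this sketch.

```python
UNIVERSAL_RULES = {
    "gpu_training": [lambda n: n.get("gpu_count", 0) > 0],
    "batch":        [lambda n: n.get("free_cpu", 0) >= 2],
}

def build_rules(first_task_attribute, second_task_attribute):
    # first task attribute: select matching rules from the universal rule list
    rules = list(UNIVERSAL_RULES.get(first_task_attribute, []))
    # second task attribute: newly add a task-specific rule on top of them
    rules.append(lambda n: n.get("region") == second_task_attribute.get("region"))
    return rules

rules = build_rules("batch", {"region": "eu"})
print(len(rules))   # 2: one rule reused from the list plus one newly added rule
```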
In one embodiment, the processor, when executing the computer program, further implements the following steps:
determining a task execution node from target computing nodes corresponding to tasks to be executed;
and executing the task to be executed through the task execution node.
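By way of illustration only, the final step can be sketched as selecting one task execution node from the target computing nodes and running the task on it; choosing the least-loaded node is an assumption of this sketch, as the embodiments above do not prescribe a particular selection strategy.

```python
def execute_task(task, target_nodes, run_on_node):
    if not target_nodes:
        raise RuntimeError("no computing node satisfies all task scheduling rules")
    # pick the least-loaded target computing node as the task execution node
    execution_node = min(target_nodes, key=lambda n: n.get("load", 0.0))
    return run_on_node(execution_node, task)

result = execute_task(
    {"name": "job-1"},
    [{"id": "n1", "load": 0.3}, {"id": "n2", "load": 0.7}],
    lambda node, task: f"{task['name']} dispatched to {node['id']}",
)
print(result)   # job-1 dispatched to n1
```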
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored; the computer program, when executed by a processor, implements the following steps:
acquiring a task to be executed from a task queue;
acquiring a plurality of task scheduling rules corresponding to the task to be executed according to the task to be executed;
determining a target computing node corresponding to a task to be executed from a plurality of computing nodes according to a plurality of task scheduling rules by adopting a multithreading parallel mode; the target computing node corresponds to a plurality of task scheduling rules.
In one embodiment, for the step of determining, in a multithreaded parallel manner, the target computing node corresponding to the task to be executed from the plurality of computing nodes according to the plurality of task scheduling rules, the computer program, when executed by the processor, further implements the following steps:
acquiring a plurality of computing nodes from a computing node list, and determining target threads corresponding to the computing nodes from a thread pool according to the computing nodes; the number of target threads is less than or equal to the number of computing nodes;
for each target thread, determining, in parallel, a target computing node corresponding to the task to be executed from the computing nodes corresponding to the target thread according to the plurality of task scheduling rules.
In one embodiment, for the step of determining the target computing node corresponding to the task to be executed from the computing nodes corresponding to the target threads according to the plurality of task scheduling rules, the computer program, when executed by the processor, further implements the following steps:
acquiring node attributes of the computing nodes corresponding to the target threads;
judging whether the node attribute meets each task scheduling rule in a plurality of task scheduling rules;
and if the node attribute meets each task scheduling rule in the task scheduling rules, taking the computing node corresponding to the target thread as the target computing node corresponding to the task to be executed.
In one embodiment, for the step of determining, according to the plurality of computing nodes, the target threads corresponding to the plurality of computing nodes from the thread pool, the computer program, when executed by the processor, further implements the following steps:
for each computing node in the computing node list, determining a target thread corresponding to the computing node from the thread pool according to the computing node; the target threads are in one-to-one correspondence with the computing nodes.
In one embodiment, for the step of acquiring, according to the task to be executed, the plurality of task scheduling rules corresponding to the task to be executed, the computer program, when executed by the processor, further implements the following steps:
acquiring a plurality of task attributes of a task to be executed; the task attributes comprise a first task attribute and a second task attribute;
selecting a first task scheduling rule corresponding to the first task attribute from the universal task scheduling rule list according to the first task attribute;
according to the second task attribute, a second task scheduling rule corresponding to the second task attribute is newly added;
and generating a plurality of task scheduling rules corresponding to the task to be executed according to the first task scheduling rule and the second task scheduling rule.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
determining a task execution node from target computing nodes corresponding to tasks to be executed;
and executing the task to be executed through the task execution node.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the following steps:
acquiring a task to be executed from a task queue;
acquiring a plurality of task scheduling rules corresponding to the task to be executed according to the task to be executed;
determining a target computing node corresponding to a task to be executed from a plurality of computing nodes according to a plurality of task scheduling rules by adopting a multithreading parallel mode; the target computing node corresponds to a plurality of task scheduling rules.
In one embodiment, for the step of determining, in a multithreaded parallel manner, the target computing node corresponding to the task to be executed from the plurality of computing nodes according to the plurality of task scheduling rules, the computer program, when executed by the processor, further implements the following steps:
acquiring a plurality of computing nodes from a computing node list, and determining target threads corresponding to the computing nodes from a thread pool according to the computing nodes; the number of target threads is less than or equal to the number of computing nodes;
for each target thread, determining, in parallel, a target computing node corresponding to the task to be executed from the computing nodes corresponding to the target thread according to the plurality of task scheduling rules.
In one embodiment, for the step of determining the target computing node corresponding to the task to be executed from the computing nodes corresponding to the target threads according to the plurality of task scheduling rules, the computer program, when executed by the processor, further implements the following steps:
acquiring node attributes of the computing nodes corresponding to the target threads;
judging whether the node attribute meets each task scheduling rule in a plurality of task scheduling rules;
and if the node attribute meets each task scheduling rule in the task scheduling rules, taking the computing node corresponding to the target thread as the target computing node corresponding to the task to be executed.
In one embodiment, for the step of determining, according to the plurality of computing nodes, the target threads corresponding to the plurality of computing nodes from the thread pool, the computer program, when executed by the processor, further implements the following steps:
for each computing node in the computing node list, determining a target thread corresponding to the computing node from the thread pool according to the computing node; the target threads are in one-to-one correspondence with the computing nodes.
In one embodiment, for the step of acquiring, according to the task to be executed, the plurality of task scheduling rules corresponding to the task to be executed, the computer program, when executed by the processor, further implements the following steps:
acquiring a plurality of task attributes of a task to be executed; the task attributes comprise a first task attribute and a second task attribute;
selecting a first task scheduling rule corresponding to the first task attribute from the universal task scheduling rule list according to the first task attribute;
according to the second task attribute, a second task scheduling rule corresponding to the second task attribute is newly added;
and generating a plurality of task scheduling rules corresponding to the task to be executed according to the first task scheduling rule and the second task scheduling rule.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
determining a task execution node from target computing nodes corresponding to tasks to be executed;
and executing the task to be executed through the task execution node.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program stored on a non-transitory computer readable storage medium, and that the computer program, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided in the present application may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration, and not limitation, the RAM may take a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided in the present application may include at least one of relational databases and non-relational databases. The non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided in the present application may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to be within the scope of this specification.
The foregoing embodiments represent only a few implementations of the present application, and although they are described in relative detail, they should not be construed as limiting the scope of the present application. It should be noted that those skilled in the art can make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.
Claims (10)
1. A method of task scheduling, the method comprising:
acquiring a task to be executed from a task queue;
acquiring a plurality of task scheduling rules corresponding to the task to be executed according to the task to be executed;
determining a target computing node corresponding to the task to be executed from a plurality of computing nodes according to the task scheduling rules by adopting a multithreading parallel mode; the target computing node corresponds to the plurality of task scheduling rules.
2. The method according to claim 1, wherein the determining, in a multithreaded parallel manner, a target computing node corresponding to the task to be executed from a plurality of computing nodes according to the plurality of task scheduling rules includes:
acquiring a plurality of computing nodes from a computing node list, and determining target threads corresponding to the computing nodes from a thread pool according to the computing nodes; the number of the target threads is less than or equal to the number of the computing nodes;
and for each target thread, determining a target computing node corresponding to the task to be executed from computing nodes corresponding to the target threads according to the task scheduling rules in a parallel mode.
3. The method according to claim 2, wherein the determining, from the computing nodes corresponding to the target threads according to the plurality of task scheduling rules, a target computing node corresponding to the task to be executed includes:
acquiring node attributes of the computing nodes corresponding to the target threads;
judging whether the node attribute meets each task scheduling rule in the plurality of task scheduling rules or not;
and if the node attribute meets each task scheduling rule in the task scheduling rules, taking the computing node corresponding to the target thread as the target computing node corresponding to the task to be executed.
4. A method according to claim 2 or 3, wherein the determining, according to the plurality of computing nodes, target threads corresponding to the plurality of computing nodes from the thread pool comprises:
for each computing node in the computing node list, determining a target thread corresponding to the computing node from a thread pool according to the computing node; the target threads are in one-to-one correspondence with the computing nodes.
5. A method according to any one of claims 1-3, wherein the acquiring, according to the task to be executed, a plurality of task scheduling rules corresponding to the task to be executed includes:
acquiring a plurality of task attributes of the task to be executed; the task attributes comprise a first task attribute and a second task attribute;
selecting a first task scheduling rule corresponding to the first task attribute from a general task scheduling rule list according to the first task attribute;
according to the second task attribute, a second task scheduling rule corresponding to the second task attribute is newly added;
and generating a plurality of task scheduling rules corresponding to the task to be executed according to the first task scheduling rule and the second task scheduling rule.
6. A method according to any one of claims 1-3, wherein the method further comprises:
determining a task execution node from target computing nodes corresponding to the tasks to be executed;
and executing the task to be executed through the task execution node.
7. A task scheduling device, the device comprising:
the task to be executed acquisition module is used for acquiring a task to be executed from the task queue;
the task scheduling rule acquisition module is used for acquiring a plurality of task scheduling rules corresponding to the task to be executed according to the task to be executed;
the target computing node determining module is used for determining a target computing node corresponding to the task to be executed from a plurality of computing nodes according to the task scheduling rules in a multithreading parallel mode; the target computing node corresponds to the plurality of task scheduling rules.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311163750.6A CN117348987A (en) | 2023-09-08 | 2023-09-08 | Task scheduling method, device, computer equipment, storage medium and program product |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117348987A true CN117348987A (en) | 2024-01-05 |
Family
ID=89368108
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||