CN115756773B - Task scheduling method, device, electronic equipment and storage medium - Google Patents

Task scheduling method, device, electronic equipment and storage medium

Info

Publication number
CN115756773B
CN115756773B (application CN202211355252.7A)
Authority
CN
China
Prior art keywords
tasks
task
information
running state
scheduling
Prior art date
Legal status
Active
Application number
CN202211355252.7A
Other languages
Chinese (zh)
Other versions
CN115756773A (en)
Inventor
姚晨
肖勃飞
况文川
贾栩杰
陈飞
付笳益
Current Assignee
Zhongdian Jinxin Software Co Ltd
Original Assignee
Zhongdian Jinxin Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhongdian Jinxin Software Co Ltd
Priority to CN202211355252.7A
Publication of CN115756773A
Application granted
Publication of CN115756773B
Status: Active

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Debugging And Monitoring (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a task scheduling method, a task scheduling device, an electronic device and a storage medium. The method includes: acquiring task information of a plurality of tasks and resource information of a target device used to run the plurality of tasks; determining candidate scheduling information with at least one optimization model according to the resource information and the task information; scheduling the resources of the target device to the plurality of tasks and scheduling the execution order of the plurality of tasks based on the candidate scheduling information determined by the at least one optimization model, so as to obtain running state parameters of the plurality of tasks; and selecting target scheduling information from the candidate scheduling information determined by the at least one optimization model based on the running state parameters, so that the resources of the target device are scheduled to the plurality of tasks and the execution order of the plurality of tasks is scheduled according to the target scheduling information. In this way, optimal scheduling information can be determined from the running state parameters of each task, and resource utilization is effectively improved.

Description

Task scheduling method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of resource allocation technologies, and in particular, to a task scheduling method, a task scheduling device, an electronic device, and a storage medium.
Background
With the rapid development of computer technology, more and more task systems need to allocate big data resources. In the related art, even when the number of tasks is large, a task system that configures big data resources can only be scheduled and configured manually according to task business logic; it cannot allocate resources more reasonably according to the current task load distribution, cannot feed back whether the configured parameters are actually suitable for task operation, and cannot adjust the resource configuration through an effective model when task resource requirements change after a fixed model is set. As a result, resources cannot be used effectively.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems in the related art to some extent.
Therefore, a first object of the present invention is to provide a task scheduling method, so as to select target scheduling information from candidate scheduling information determined by at least one optimization model based on running state parameters of a plurality of tasks, and improve resource utilization.
A second object of the present invention is to propose a task scheduling device.
A third object of the present invention is to propose an electronic device.
A fourth object of the present invention is to propose a computer readable storage medium.
A fifth object of the invention is to propose a computer program product.
To achieve the above object, an embodiment of a first aspect of the present invention provides a task scheduling method, including:
acquiring task information of a plurality of tasks and resource information of target equipment for running the plurality of tasks;
according to the resource information and the task information of the plurality of tasks, determining candidate scheduling information by adopting at least one optimization model;
scheduling the resources of the target device to the plurality of tasks based on the candidate scheduling information determined by at least one optimization model, and scheduling the execution sequence of the plurality of tasks to obtain the running state parameters of the plurality of tasks;
selecting target scheduling information from candidate scheduling information determined by at least one optimization model based on the running state parameters of the tasks, so as to schedule the resources of the target equipment to the tasks according to the target scheduling information, and schedule the execution sequence of the tasks.
Optionally, as a first possible implementation manner of the first aspect, the determining candidate scheduling information according to the resource information and task information of the plurality of tasks using at least one optimization model includes:
Inputting the resource information and task information of the plurality of tasks into at least one optimization model as input parameters so as to respectively adopt at least one optimization model to predict, and obtaining output of at least one optimization model to determine the candidate scheduling information;
wherein the at least one optimization model comprises at least one of: a maximum value estimation model, a minimum value estimation model, a linear programming model, a multi-element allocation model and a multi-period allocation model.
Optionally, as a second possible implementation manner of the first aspect, the obtaining task information of the plurality of tasks and resource information of a target device for running the plurality of tasks includes:
monitoring the running states of the tasks;
and under the condition that the running state is not in accordance with the set condition, acquiring task information of a plurality of tasks and resource information of target equipment for running the plurality of tasks.
Optionally, as a third possible implementation manner of the first aspect, the obtaining task information of the plurality of tasks and resource information of a target device for running the plurality of tasks when any one monitored running state does not meet a set condition includes:
Under the condition that the running state does not accord with the set condition, generating task information of the plurality of tasks according to the required standard running resources of the plurality of tasks and the dependency relationship among the plurality of tasks;
and generating the resource information according to at least one of the number of the operated tasks of the target equipment, the IO interface occupancy rate, the memory occupancy information, the available network transmission bandwidth and the CPU load.
Optionally, as a fourth possible implementation manner of the first aspect, the monitoring the running states of the plurality of tasks includes:
and monitoring at least one of the task queue congestion time, the resource suspension time, the task failure times, the task waiting time and the task retry times of the plurality of tasks.
Optionally, as a fifth possible implementation manner of the first aspect, the selecting, based on the running state parameters of the plurality of tasks, target scheduling information from candidate scheduling information determined by at least one optimization model includes:
aiming at any one of the optimization models, acquiring running state parameters of the plurality of tasks under the scheduling based on the corresponding candidate scheduling information;
Under the condition that there are a plurality of running state parameters, averaging, for each running state parameter, the values of that running state parameter over the plurality of tasks to obtain a mean value of each running state parameter;
determining an evaluation value of a corresponding optimization model according to the average value of each running state parameter;
and selecting the target scheduling information from the candidate scheduling information determined by each optimization model according to the evaluation value of each optimization model.
Optionally, as a sixth possible implementation manner of the first aspect, the determining, according to a mean value of each of the operation state parameters, an evaluation value of a corresponding optimization model includes:
and carrying out weighted summation based on the weight of each operation state parameter and the average value of each operation state parameter so as to obtain the evaluation value.
Optionally, as a seventh possible implementation manner of the first aspect, the scheduling, based on the candidate scheduling information determined by at least one optimization model, the resources of the target device to the plurality of tasks, and scheduling an execution sequence of the plurality of tasks to obtain running state parameters of the plurality of tasks, includes:
Configuring resources of the plurality of tasks according to candidate scheduling information determined by each optimization model aiming at each optimization model, and determining the execution sequence of the plurality of tasks;
and monitoring running state parameters of the plurality of tasks under the condition that the plurality of tasks are executed based on the configured resources and the execution sequence, wherein the running state parameters comprise one or more of data network transmission quantity, task memory overflow OOM, task waiting time and task resource consumption.
To achieve the above object, an embodiment of a second aspect of the present invention provides a task scheduling device, including:
the system comprises an acquisition module, a storage module and a control module, wherein the acquisition module is used for acquiring task information of a plurality of tasks and resource information of target equipment for running the plurality of tasks;
the determining module is used for determining candidate scheduling information by adopting at least one optimization model according to the resource information and the task information of the plurality of tasks;
the scheduling module is used for scheduling the resources of the target equipment to the plurality of tasks based on the candidate scheduling information determined by the at least one optimization model and scheduling the execution sequence of the plurality of tasks so as to obtain the running state parameters of the plurality of tasks;
And the processing module is used for selecting target scheduling information from candidate scheduling information determined by at least one optimization model based on the running state parameters of the tasks, scheduling the resources of the target equipment for the tasks according to the target scheduling information and scheduling the execution sequence of the tasks.
Optionally, as a first possible implementation manner of the second aspect, the determining module is further configured to:
inputting the resource information and task information of the plurality of tasks into at least one optimization model as input parameters so as to respectively adopt at least one optimization model to predict, and obtaining output of at least one optimization model to determine the candidate scheduling information;
wherein the at least one optimization model comprises at least one of: a maximum value estimation model, a minimum value estimation model, a linear programming model, a multi-element allocation model and a multi-period allocation model.
Optionally, as a second possible implementation manner of the second aspect, the acquiring module includes:
the monitoring unit is used for monitoring the running states of the tasks;
The first acquisition unit is used for acquiring task information of a plurality of tasks and resource information of target equipment for running the plurality of tasks under the condition that the running state is monitored to be not in accordance with the set condition at any time.
Optionally, as a third possible implementation manner of the second aspect, the first obtaining unit is further configured to:
under the condition that the running state does not accord with the set condition, generating task information of the plurality of tasks according to the required standard running resources of the plurality of tasks and the dependency relationship among the plurality of tasks;
and generating the resource information according to at least one of the number of the operated tasks of the target equipment, the IO interface occupancy rate, the memory occupancy information, the available network transmission bandwidth and the CPU load.
Optionally, as a fourth possible implementation manner of the second aspect, the monitoring unit is further configured to:
and monitoring at least one of the task queue congestion time, the resource suspension time, the task failure times, the task waiting time and the task retry times of the plurality of tasks.
Optionally, as a fifth possible implementation manner of the second aspect, the processing module includes:
The second obtaining unit is used for obtaining running state parameters of the plurality of tasks under the scheduling based on the corresponding candidate scheduling information aiming at any one of the optimization models;
the first processing unit is used for averaging, for each running state parameter and under the condition that there are a plurality of running state parameters, the values of that running state parameter over the plurality of tasks, so as to obtain a mean value of each running state parameter;
the determining unit is used for determining an evaluation value of the corresponding optimization model according to the average value of each running state parameter;
and the second processing unit is used for selecting the target scheduling information from the candidate scheduling information determined by each optimization model according to the evaluation value of each optimization model.
Optionally, as a sixth possible implementation manner of the second aspect, the determining unit is further configured to:
and carrying out weighted summation based on the weight of each operation state parameter and the average value of each operation state parameter so as to obtain the evaluation value.
Optionally, as a seventh possible implementation manner of the second aspect, the scheduling module is further configured to:
configuring resources of the plurality of tasks according to candidate scheduling information determined by each optimization model aiming at each optimization model, and determining the execution sequence of the plurality of tasks;
And monitoring running state parameters of the plurality of tasks under the condition that the plurality of tasks are executed based on the configured resources and the execution sequence, wherein the running state parameters comprise one or more of data network transmission quantity, task memory overflow OOM, task waiting time and task resource consumption.
To achieve the above object, an embodiment of a third aspect of the present invention provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the task scheduling method of the first aspect.
To achieve the above object, an embodiment of a fourth aspect of the present invention proposes a computer-readable storage medium storing computer instructions for causing the computer to execute the task scheduling method of the first aspect.
In order to achieve the above object, an embodiment of a fifth aspect of the present invention proposes a computer program product comprising a computer program which, when executed by a processor, implements the task scheduling method of the first aspect.
The technical scheme provided by the embodiment of the invention comprises the following beneficial effects:
task information of a plurality of tasks and resource information of a target device used to run the plurality of tasks are acquired; candidate scheduling information is determined with at least one optimization model according to the resource information and the task information; based on the candidate scheduling information determined by the at least one optimization model, the resources of the target device are scheduled to the plurality of tasks and the execution order of the plurality of tasks is scheduled, so as to obtain running state parameters of the plurality of tasks; and target scheduling information is selected from the candidate scheduling information determined by the at least one optimization model based on the running state parameters, so that the resources of the target device are scheduled to the plurality of tasks and the execution order of the plurality of tasks is scheduled according to the target scheduling information. In this way, optimal scheduling information can be determined from the running state parameters of each task, and resource utilization is effectively improved.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flow chart of a task scheduling method according to an embodiment of the present invention;
FIG. 2 is a flowchart of another task scheduling method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating another task scheduling method according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating another task scheduling method according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of a task scheduling method in a scenario provided by an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a task scheduling device according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present invention and should not be construed as limiting the invention.
In the related art, even when the number of tasks is large, a task system that configures big data resources can only be scheduled and configured manually according to task business logic; it cannot allocate resources more reasonably according to the current task load distribution, cannot feed back whether the configured parameters are actually suitable for task operation, and cannot adjust the resource configuration through an effective model when task resource requirements change after a fixed model is set. As a result, resources cannot be used effectively.
To address this problem, an embodiment of the present invention provides a task scheduling method that determines optimal scheduling information according to the running state parameters of each task, ensuring either that the task running time is as short as possible under limited resources, or that resource usage is minimized for a given set of running tasks, thereby improving resource utilization.
The task scheduling method, the task scheduling device, the electronic equipment and the storage medium according to the embodiment of the invention are described below with reference to the accompanying drawings.
Fig. 1 is a flow chart of a task scheduling method according to an embodiment of the present invention.
It should be noted that, the task scheduling method in the embodiment of the present invention may be executed by the task scheduling device provided in the embodiment of the present invention. The task scheduling device may be an electronic device or may be configured in the electronic device, so as to select target scheduling information from candidate scheduling information determined by at least one optimization model based on running state parameters of a plurality of tasks.
The electronic device may be any stationary or mobile computing device capable of performing data processing, for example, a mobile computing device such as a notebook computer, a smart phone, a wearable device, or a stationary computing device such as a desktop computer, or a server, or other types of computing devices, which is not limited in this embodiment.
As shown in fig. 1, the task scheduling method includes the steps of:
step 101, acquiring task information of a plurality of tasks and resource information of target equipment for running the plurality of tasks.
In this embodiment, task information refers to information related to a task while it runs; optionally, the task information may include a task waiting duration, a task running result, and the like. The target device is the device used to run the plurality of tasks; optionally, the target device may be a server. Resource information refers to information related to the target device while it runs tasks; optionally, the resource information of the target device may include the number of tasks running on the target device, the IO (Input/Output) interface occupancy rate, memory occupancy information, available network transmission bandwidth, CPU (Central Processing Unit) load, and so on.
In this embodiment, the task scheduling device may acquire task information of a plurality of tasks and resource information of a target device for running the plurality of tasks. Specifically, the task scheduling device may acquire task information of a plurality of tasks and resource information of the target device when the target device runs the plurality of tasks.
It should be noted that, in this embodiment, the task scheduling device may acquire the task information of the plurality of tasks and the resource information of the target device in various public, legal and compliant manners, for example, online in real time, through network transmission, or by physical copy; the acquisition manner is not limited in this embodiment.
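For illustration only, the task information and resource information described above can be represented as simple records. The following Python sketch is an assumption made for readability; the class and field names are not identifiers used by the patent.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TaskInfo:
    """Task-side information gathered for one task (illustrative fields only)."""
    task_id: str
    wait_duration_s: float            # task waiting duration
    last_run_result: Optional[str]    # task running result / error information
    required_cpu_cores: int           # standard running resources the task needs
    required_memory_mb: int
    depends_on: List[str] = field(default_factory=list)  # dependencies on other tasks

@dataclass
class DeviceResourceInfo:
    """Resource information of the target device (illustrative fields only)."""
    running_task_count: int           # number of tasks currently running
    io_occupancy: float               # IO interface occupancy rate, 0..1
    memory_used_mb: int               # memory occupancy information
    available_bandwidth_mbps: float   # available network transmission bandwidth
    cpu_load: float                   # CPU load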
And 102, determining candidate scheduling information by adopting at least one optimization model according to the resource information and the task information of a plurality of tasks.
In this embodiment, the candidate scheduling information may be understood as information for scheduling the target device resource and scheduling the plurality of tasks, alternatively, the candidate scheduling information may include information for scheduling the target device resource and information for scheduling an execution order of the plurality of tasks, and so on.
In this embodiment, after the resource information of the target device and the task information of the plurality of tasks are acquired, at least one optimization model may be adopted, and candidate scheduling information may be determined based on the resource information of the target device and the task information of the plurality of tasks. Wherein the at least one optimization model may include at least one of: a maximum value estimation model, a minimum value estimation model, a linear programming model, a multi-element allocation model and a multi-period allocation model.
It should be noted that the maximum value estimation model takes preferentially satisfying the largest number of resource demands as its optimization target; the minimum value estimation model takes preferentially satisfying the smallest number of resource demands as its optimization target; the linear programming model takes the priority of an advanced resource queue as its optimization target; the multi-element allocation model takes preferentially allocating one or more resources to the tasks with the greatest demand as its optimization target; and the multi-period allocation model takes matching tasks with resources as its optimization target under the condition that the priority of one or more resources is adjusted by time period.
It is understood that since at least one optimization model may be employed to determine candidate scheduling information, the number of determined candidate scheduling information is consistent with the number of employed optimization models.
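As a sketch of step 102, each optimization model can be treated as a function from the resource information and task information to one candidate schedule, which makes the number of candidates equal the number of models by construction. The names below (CandidateSchedule, build_candidates, the stub registry) are assumptions for illustration, not the patent's implementation.

from typing import Any, Callable, Dict, List

# One candidate schedule: per-task resource assignment plus an execution order.
CandidateSchedule = Dict[str, Any]
# An optimization model maps (resource info, task infos) to one candidate schedule.
OptimizationModel = Callable[[dict, List[dict]], CandidateSchedule]

def build_candidates(models: Dict[str, OptimizationModel],
                     resource_info: dict,
                     task_infos: List[dict]) -> Dict[str, CandidateSchedule]:
    """Run every configured optimization model once: one candidate per model."""
    return {name: model(resource_info, task_infos) for name, model in models.items()}

def _stub_model(resource_info: dict, task_infos: List[dict]) -> CandidateSchedule:
    # Placeholder: a real model would solve for resources and execution order.
    return {"resources": {}, "execution_order": [t["task_id"] for t in task_infos]}

# Hypothetical registry mirroring the five model types named in the text.
MODELS: Dict[str, OptimizationModel] = {
    "max_value_estimation": _stub_model,
    "min_value_estimation": _stub_model,
    "linear_programming": _stub_model,
    "multi_element_allocation": _stub_model,
    "multi_period_allocation": _stub_model,
}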
And step 103, scheduling the resources of the target device to the plurality of tasks and scheduling the execution sequence of the plurality of tasks based on the candidate scheduling information determined by the at least one optimization model, so as to obtain the running state parameters of the plurality of tasks.
In this embodiment, after the candidate scheduling information is determined by using at least one optimization model, the resources of the target device and the execution sequence of the plurality of tasks may be scheduled to the plurality of tasks based on the candidate scheduling information determined by the at least one optimization model, so as to obtain the running state parameters of the plurality of tasks. The running state parameters refer to some parameters related to task running during task running, and optionally, the running state parameters may include one or more of data network transmission quantity, task memory overflow OOM, task waiting duration and task resource consumption.
It can be understood that, since the number of the determined candidate scheduling information is consistent with the number of the adopted optimization models, the operation state parameters of a plurality of corresponding tasks can be obtained for the candidate scheduling information determined by any one optimization model.
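The running state parameters named here can be pictured as one record per task per candidate schedule. A minimal sketch, assuming hypothetical field names:

from dataclasses import dataclass

@dataclass
class RunStateParams:
    """Running state parameters monitored for one task under one candidate schedule."""
    network_transfer_mb: float   # data network transmission quantity
    oom_count: int               # task memory overflow (OOM) occurrences
    wait_duration_s: float       # task waiting duration
    resource_consumption: float  # task resource consumption (e.g. CPU-seconds)

# Per candidate schedule (i.e. per optimization model), one list of per-task records
# is collected, e.g. observations["linear_programming"] = [RunStateParams(...), ...].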
And 104, selecting target scheduling information from candidate scheduling information determined by at least one optimization model based on the running state parameters of the plurality of tasks, so as to schedule resources of target equipment to the plurality of tasks according to the target scheduling information, and schedule the execution sequence of the plurality of tasks.
In this embodiment, after the running state parameters of the plurality of tasks corresponding to the candidate scheduling information determined by the at least one optimization model are obtained, target scheduling information may be selected from that candidate scheduling information based on these running state parameters, so that the resources of the target device are scheduled to the plurality of tasks and the execution order of the plurality of tasks is scheduled according to the target scheduling information.
In one possible implementation manner of this embodiment, the corresponding evaluation value may be determined according to the running state parameters of the plurality of tasks corresponding to the candidate scheduling information determined by any one of the optimization models, so that the candidate scheduling information with the highest evaluation value in the candidate scheduling information determined by each optimization model is selected as the target scheduling information, and then the resources of the target device are scheduled to the plurality of tasks according to the target scheduling information, and the execution sequence of the plurality of tasks is scheduled.
The task scheduling method provided by the embodiment of the invention acquires task information of a plurality of tasks and resource information of a target device used to run the plurality of tasks, determines candidate scheduling information with at least one optimization model according to the resource information and the task information, schedules the resources of the target device to the plurality of tasks and schedules the execution order of the plurality of tasks based on the candidate scheduling information determined by the at least one optimization model so as to obtain running state parameters of the plurality of tasks, and selects target scheduling information from the candidate scheduling information based on those running state parameters so that the resources of the target device and the execution order of the plurality of tasks are scheduled according to the target scheduling information. In this way, optimal scheduling information can be determined from the running state parameters of each task, and resource utilization is effectively improved.
In order to clearly illustrate the above embodiment, another task scheduling method is provided in this embodiment, and fig. 2 is a schematic flow chart of another task scheduling method provided in the embodiment of the present invention.
As shown in fig. 2, the task scheduling method may include the steps of:
step 201, monitoring running states of a plurality of tasks, and acquiring task information of the plurality of tasks and resource information of target equipment for running the plurality of tasks when the running states are not in accordance with the set conditions.
In this embodiment, the running states of the plurality of tasks may be monitored to determine whether they meet a set condition; when it is monitored at any time that the running state does not meet the set condition, the task information of the plurality of tasks and the resource information of the target device used to run the plurality of tasks are acquired. The embodiment does not limit how the set condition is specified: optionally, it may be set according to manual experience, for example as thresholds on running state indexes, so that whether the monitored running state meets the set condition can be judged against a specific value for each threshold, or it may be adjusted dynamically according to actual application requirements.
It should be noted that, in this step, the execution process of acquiring the task information of the plurality of tasks and the resource information of the target device for running the plurality of tasks may refer to the execution process of step 101 in the above embodiment, and the principle is the same, which is not described herein again.
Step 202, inputting the resource information and the task information of the plurality of tasks into at least one optimization model as input parameters, so as to respectively adopt the at least one optimization model to predict, and obtaining the output of the at least one optimization model to determine the candidate scheduling information.
In this embodiment, after the resource information of the target device and the task information of the plurality of tasks are obtained, the resource information and the task information may be input as input parameters into the at least one optimization model, so that each optimization model predicts respectively and its output is obtained to determine the candidate scheduling information. Wherein the at least one optimization model may include at least one of: a maximum value estimation model, a minimum value estimation model, a linear programming model, a multi-element allocation model and a multi-period allocation model.
It should be noted that, the other execution process of the present step may be referred to the execution process of step 102 in the above embodiment, and the principle is the same, which is not described herein again.
Step 203, for each optimization model, configuring resources of a plurality of tasks according to the candidate scheduling information determined by each optimization model, and determining an execution sequence of the plurality of tasks.
In this embodiment, for any one of the optimization models, resources of a plurality of tasks may be configured according to candidate scheduling information determined by the optimization model, and an execution order of the plurality of tasks may be determined, so as to schedule the resources of the target device to the plurality of tasks, and schedule the execution order of the plurality of tasks, so that the plurality of tasks are executed based on the configured resources and the determined execution order.
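The execution order determined here has to respect the dependency relationships between the tasks. One common way to derive such an order is a topological sort; the patent does not prescribe a particular algorithm, so the sketch below is only an illustrative assumption.

from collections import deque
from typing import Dict, List

def execution_order(depends_on: Dict[str, List[str]]) -> List[str]:
    """Return an order in which every task runs after all tasks it depends on.
    depends_on[t] lists the tasks that t depends on; every dependency must
    itself appear as a key."""
    indegree = {t: len(deps) for t, deps in depends_on.items()}
    dependents: Dict[str, List[str]] = {t: [] for t in depends_on}
    for t, deps in depends_on.items():
        for d in deps:
            dependents[d].append(t)
    ready = deque(t for t, n in indegree.items() if n == 0)
    order: List[str] = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(depends_on):
        raise ValueError("cyclic dependency between tasks")
    return order

# Example: execution_order({"A": [], "B": ["A"], "C": ["A", "B"]}) -> ["A", "B", "C"]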
In step 204, in the case where the plurality of tasks are executed based on the configured resources and the execution order, the operation state parameters of the plurality of tasks are monitored.
The running state parameters comprise one or more of data network transmission quantity, task memory overflow OOM, task waiting time length and task resource consumption.
In this embodiment, in the case where a plurality of tasks are executed based on the configured resources and the execution order, the running state parameters of the plurality of tasks may be monitored to obtain specific values of the running state parameters of the plurality of tasks.
Step 205, selecting target scheduling information from the candidate scheduling information determined by at least one optimization model based on the running state parameters of the plurality of tasks, so as to schedule the resources of the target device to the plurality of tasks according to the target scheduling information, and schedule the execution sequence of the plurality of tasks.
It should be noted that, the execution of this step may refer to the execution of step 104 in the above embodiment, and the principle is the same, which is not described herein again.
According to the task scheduling method provided by the embodiment of the invention, the running states of the plurality of tasks are monitored, and when it is monitored at any time that the running state does not meet the set condition, the task information of the plurality of tasks and the resource information of the target device used to run the plurality of tasks are acquired. The resource information and the task information are input into at least one optimization model as input parameters, each optimization model predicts respectively, and its output is obtained to determine candidate scheduling information. For each optimization model, the resources of the plurality of tasks are configured and the execution order of the plurality of tasks is determined according to the candidate scheduling information determined by that model; the running state parameters of the plurality of tasks are then monitored while the tasks are executed based on the configured resources and execution order, and target scheduling information is selected from the candidate scheduling information determined by the at least one optimization model, so that the resources of the target device and the execution order of the plurality of tasks are scheduled according to the target scheduling information. Therefore, when the task running state no longer meets the set condition because of changes in task resource requirements or in the environment, the target scheduling information is selected again based on the running state parameters of each task, which optimizes the task running duration and task waiting duration and effectively reduces the number of task failures caused by OOM and the like.
As analyzed above, the invention can monitor the running states of a plurality of tasks and, when it is monitored at any time that the running state does not meet the set condition, acquire the task information of the plurality of tasks and the resource information of the target device used to run the plurality of tasks. To clearly illustrate how the running states are monitored and how this information is acquired when the set condition is not met, this embodiment provides another task scheduling method, and fig. 3 is a schematic flow chart of another task scheduling method provided in an embodiment of the present invention.
As shown in fig. 3, the task scheduling method may include the steps of:
step 301, monitoring at least one of a task queue congestion duration, a resource suspension duration, a task failure number, a task waiting duration, and a task retry number of the plurality of tasks.
In this embodiment, at least one of a task queue congestion duration, a resource suspension duration, a task failure number, a task waiting duration, and a task retry number of the plurality of tasks may be monitored, so as to determine whether at least one of the monitored task queue congestion duration, the resource suspension duration, the task failure number, the task waiting duration, and the task retry number of the plurality of tasks meets a set condition.
It can be understood that the monitoring of at least one of the task queue congestion duration, the resource suspension duration, the number of task failures, the task waiting duration, and the number of task retries of the plurality of tasks is to monitor the running state of the plurality of tasks.
Step 302, generating task information of a plurality of tasks according to required standard operation resources of the plurality of tasks and dependency relations among the plurality of tasks when the operation state is monitored to be not in accordance with the set condition at any time.
In this embodiment, when it is monitored at any time that the running state does not meet the set condition, the task information of the plurality of tasks may be generated according to the standard running resources required by the plurality of tasks and the dependency relationships among the plurality of tasks. The embodiment does not limit how the set condition is specified: optionally, it may be set according to manual experience, for example as thresholds on running state indexes, so that whether the monitored running state meets the set condition can be judged against a specific value for each threshold, or it may be adjusted dynamically according to actual application requirements.
It can be appreciated that, since the previous step monitors at least one of the task queue congestion duration, resource suspension duration, number of task failures, task waiting duration and number of task retries of the plurality of tasks, monitoring that the running state does not meet the set condition means monitoring that at least one of these quantities does not meet the set condition. The set condition may therefore be set to at least one of a task queue congestion duration threshold, a resource suspension duration threshold, a task failure number threshold, a task waiting duration threshold and a task retry number threshold, and the running state fails to meet the set condition when any one of the following is monitored: the task queue congestion duration of the plurality of tasks does not meet the task queue congestion duration threshold, the resource suspension duration does not meet the resource suspension duration threshold, the number of task failures does not meet the task failure number threshold, the task waiting duration does not meet the task waiting duration threshold, or the number of task retries does not meet the task retry number threshold.
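A minimal sketch of this condition check, assuming hypothetical metric names and threshold values; the actual thresholds are configured from manual experience or adjusted dynamically, as described above.

from typing import Dict

# Illustrative thresholds for the set condition; names and values are assumptions.
THRESHOLDS: Dict[str, float] = {
    "queue_congestion_s": 300.0,   # task queue congestion duration threshold
    "resource_suspension_s": 120.0,
    "failure_count": 3,
    "wait_duration_s": 600.0,
    "retry_count": 5,
}

def violates_set_condition(monitored: Dict[str, float],
                           thresholds: Dict[str, float] = THRESHOLDS) -> bool:
    """True if any monitored running-state metric exceeds its threshold, which is
    the trigger for re-acquiring task information and resource information."""
    return any(monitored.get(name, 0.0) > limit for name, limit in thresholds.items())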
Step 303, generating resource information according to at least one of the number of tasks executed by the target device, the occupancy rate of the IO interface, the memory occupancy information, the available network transmission bandwidth and the CPU load.
In this embodiment, since the resource information is the resource information of the target device, the resource information may be generated according to at least one of the number of tasks executed by the target device, the occupancy rate of the IO interface, the memory occupancy information, the available network transmission bandwidth, and the CPU load.
And step 304, inputting the resource information and the task information of the plurality of tasks into at least one optimization model as input parameters, so as to respectively adopt the at least one optimization model to predict, and obtaining the output of the at least one optimization model to determine the candidate scheduling information.
And step 305, scheduling the resources of the target device to the plurality of tasks and scheduling the execution sequence of the plurality of tasks based on the candidate scheduling information determined by the at least one optimization model, so as to obtain the running state parameters of the plurality of tasks.
And 306, selecting target scheduling information from candidate scheduling information determined by at least one optimization model based on the running state parameters of the plurality of tasks, so as to schedule resources of target equipment to the plurality of tasks according to the target scheduling information, and schedule the execution sequence of the plurality of tasks.
It should be noted that the execution of steps 304-306 can be referred to the execution of steps 203-205 in the above embodiment, and the principle is the same, and will not be described herein.
According to the task scheduling method provided by the embodiment of the invention, at least one of the task queue congestion duration, resource suspension duration, number of task failures, task waiting duration and number of task retries of the plurality of tasks is monitored. When it is monitored at any time that the running state does not meet the set condition, the task information of the plurality of tasks is generated according to the standard running resources required by the plurality of tasks and the dependency relationships among them, and the resource information is generated according to at least one of the number of tasks running on the target device, the IO interface occupancy rate, the memory occupancy information, the available network transmission bandwidth and the CPU load. In this way, the running states of the tasks can be monitored, and the task information of the plurality of tasks and the resource information of the target device can be acquired whenever the running state is monitored to be out of compliance with the set condition.
To clearly explain how target scheduling information is selected from the candidate scheduling information determined by at least one optimization model based on the running state parameters of the plurality of tasks, so that the resources of the target device are scheduled to the plurality of tasks and the execution order of the plurality of tasks is scheduled according to the target scheduling information, another task scheduling method is provided in this embodiment, and fig. 4 is a schematic flow chart of another task scheduling method provided in an embodiment of the present invention.
As shown in fig. 4, the task scheduling method may include the steps of:
step 401, acquiring task information of a plurality of tasks and resource information of a target device for running the plurality of tasks.
And step 402, determining candidate scheduling information by adopting at least one optimization model according to the resource information and task information of a plurality of tasks.
Step 403, scheduling the resources of the target device to the plurality of tasks and scheduling the execution sequence of the plurality of tasks based on the candidate scheduling information determined by the at least one optimization model, so as to obtain the running state parameters of the plurality of tasks.
It should be noted that, the execution process of steps 401 to 403 may refer to the execution process of steps 101 to 103 in the above embodiment, and the principle is the same, and will not be described herein again.
Step 404, for any optimization model, acquiring running state parameters of a plurality of tasks under the scheduling based on the corresponding candidate scheduling information.
In this embodiment, for any one of the optimization models, the running state parameters of the plurality of tasks under scheduling based on the candidate scheduling information determined by that optimization model may be obtained.
Step 405, in the case that the running state parameters are multiple, for each running state parameter, calculating an average value of the running state parameters of the multiple tasks to obtain an average value of each running state parameter.
It can be appreciated that, since the operation state parameters may include one or more combinations of data network traffic, task memory overflow OOM, task waiting time, and task resource consumption, the operation state parameters of the plurality of tasks may be averaged for each operation state parameter under the condition that the operation state parameters are multiple, so as to obtain an average value of each operation state parameter.
And step 406, determining an evaluation value of the corresponding optimization model according to the average value of each operation state parameter.
In this embodiment, after the average value of each operation state parameter is obtained, the evaluation value of the corresponding optimization model may be determined based on the average value of each operation state parameter. Optionally, the evaluation value of the corresponding optimization model is obtained by weighted summation of the weight of each operation state parameter and the average value of each operation state parameter.
It should be noted that the weight of each running state parameter may be determined by any one of a subjective weighting method, an objective weighting method and a combined weighting method. The subjective weighting method determines the weight of each running state parameter according to the subjective importance that a decision maker attaches to each running state parameter, and may include the Delphi method, the analytic hierarchy process, the binomial coefficient method, the loop-ratio scoring method, the least squares method and the like. The objective weighting method determines the weight of each running state parameter according to the degree of relation between the running state parameters or the data relations among them, and may include principal component analysis, the entropy method, the CRITIC weighting method, the dispersion and mean square error method, the multi-objective programming method and the like. The combined weighting method determines the weight of each running state parameter by combining the weight configured by the subjective weighting method with the weight determined by the objective weighting method. Optionally, the combined weighting method may adopt a "multiplication" integration method, which performs the combination calculation based on the product of a_i and b_i, where i denotes the i-th running state parameter among the running state parameters, P_i denotes the weight of the i-th running state parameter, a_i denotes the weight configured for the i-th running state parameter by the subjective weighting method, and b_i denotes the weight determined for the i-th running state parameter by the objective weighting method. An "addition" integration method may also be adopted, based on the formula P_i = αa_i + (1 - α)b_i (0 ≤ α ≤ 1), where α denotes a manually set preference value; this embodiment does not limit the choice.
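The additive integration above is P_i = αa_i + (1 - α)b_i with 0 ≤ α ≤ 1. The multiplicative integration formula is not legible in the source, so the normalized-product form in the sketch below is only the commonly used variant and should be read as an assumption.

from typing import List

def additive_weights(subjective: List[float], objective: List[float],
                     alpha: float = 0.5) -> List[float]:
    """P_i = alpha * a_i + (1 - alpha) * b_i, with 0 <= alpha <= 1 (from the text)."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return [alpha * a + (1.0 - alpha) * b for a, b in zip(subjective, objective)]

def multiplicative_weights(subjective: List[float], objective: List[float]) -> List[float]:
    """Normalized product a_i * b_i / sum_j(a_j * b_j); a common form, assumed here."""
    products = [a * b for a, b in zip(subjective, objective)]
    total = sum(products)
    if total == 0.0:
        raise ValueError("all combined weights are zero")
    return [p / total for p in products]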
Step 407, selecting target scheduling information from the candidate scheduling information determined by each optimization model according to the evaluation value of each optimization model, so as to schedule the resources of the target device to the plurality of tasks according to the target scheduling information and schedule the execution sequence of the plurality of tasks.
In this embodiment, after the evaluation values of the respective optimization models are obtained, the target scheduling information may be selected from the candidate scheduling information determined according to the respective optimization models, so as to schedule the resources of the target device to the plurality of tasks according to the target scheduling information, and to schedule the execution order of the plurality of tasks, based on the evaluation values of the respective optimization models. Alternatively, candidate scheduling information determined by an optimization model having the highest evaluation value among the optimization models may be selected as the target scheduling information.
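Steps 404 to 407 can be summarized in a short sketch: average each running state parameter over the tasks, combine the means into an evaluation value by weighted summation, and keep the candidate with the highest evaluation value. The function and parameter names are assumptions, and the weights are assumed to be oriented so that a larger weighted sum means a better schedule (for cost-like metrics such as waiting duration this would mean negative or inverted weights).

from typing import Dict, List

def evaluation_value(per_task_params: List[Dict[str, float]],
                     weights: Dict[str, float]) -> float:
    """Average each running state parameter over the tasks (step 405), then take
    the weighted sum of those means (step 406)."""
    if not per_task_params:
        return 0.0
    means = {name: sum(p[name] for p in per_task_params) / len(per_task_params)
             for name in weights}
    return sum(weights[name] * means[name] for name in weights)

def select_target_model(observations: Dict[str, List[Dict[str, float]]],
                        weights: Dict[str, float]) -> str:
    """Pick the optimization model whose candidate scheduling information obtained
    the highest evaluation value (step 407)."""
    return max(observations, key=lambda name: evaluation_value(observations[name], weights))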
According to the task scheduling method provided by the embodiment of the invention, for any one optimization model, the running state parameters of the plurality of tasks under scheduling based on the corresponding candidate scheduling information are obtained; when there are a plurality of running state parameters, the values of each running state parameter are averaged over the plurality of tasks to obtain a mean value of each running state parameter; the evaluation value of the corresponding optimization model is determined according to those mean values; and the target scheduling information is then selected from the candidate scheduling information determined by each optimization model according to the evaluation value of each optimization model, so that the resources of the target device are scheduled to the plurality of tasks and the execution order of the plurality of tasks is scheduled according to the target scheduling information. In this way, target scheduling information can be selected from the candidate scheduling information determined by at least one optimization model based on the running state parameters of the plurality of tasks.
In order to more clearly illustrate the above embodiments, an example will now be described.
Fig. 5 is a flow chart of a task scheduling method in a scenario provided by an embodiment of the present invention. As shown in fig. 5, the task scheduling method may include the following steps. First, the initial task operation parameters are set. Specifically, the initial task operation parameters, the initial resource information of the target device and the initial task dependency information may be configured. The initial task operation parameters may include the number of CPU cores, the memory size, the disk capacity and the number of dependent tasks required by each task, denoted X = (X1, X2, ..., Xn). Correspondingly, the initial resource information of the target device may be denoted Y = (Y1, Y2, ..., Yn). Optionally, the task dependency information may be the execution order of the tasks. The tasks are then run and environment data is collected. Specifically, the task information of the plurality of tasks and the resource information of the target device during task running may be collected. The task information may include the task runtime waiting duration, the task running result, task error information and the like. The resource information of the target device may include the number of tasks running on the target device, the interface occupancy rate, memory occupancy information, available network transmission bandwidth, CPU load and the like.
After the environment data is collected, at least one optimization model may be set, and the resource information required by the tasks, the resource information available on the target device and the execution order of the tasks are input into the selected at least one optimization model. Optionally, the at least one optimization model may be at least one of the following: a maximum value estimation model, a minimum value estimation model, a linear programming model, a multi-element allocation model and a multi-period allocation model. An operation result output by the at least one optimization model can then be obtained and compared to judge whether it is optimal. Specifically, an optimal solution may be taken as the candidate scheduling information of the at least one optimization model, so that, when at least one optimization target set for task operation is satisfied and an optimal solution exists, the resources Xt = (X1t, X2t, ..., Xnt) required by each task are mapped to the actually allocated resources Yt = (Y1t, Y2t, ..., Ynt) under that optimization model, and the relevant parameter configuration of the optimal solution is recorded. The optimization target of the selected at least one optimization model may be at least one of: minimum data network transmission amount, minimum task OOM, minimum task waiting duration and minimum given task resources.
The task configuration can then be performed again according to the calculation result, the operation result after optimization is collected, and the operation results before and after optimization are compared to judge whether the result is optimal. Optionally, whether the data meets the optimization target may be determined by comparing the data network transmission size, the number of task OOMs, the average task waiting duration and the number of given task resources. If the optimization target is not met, the result is not optimal, so the flow jumps back to running the tasks and collecting environment data, and the optimal solution of the at least one optimization model is solved again; optionally, the solution method can be replaced and the model parameter weights can be adjusted. If the optimization target is met, the task configuration has been optimized, so the flow switches to the optimized task configuration, runs the tasks according to the optimized candidate scheduling information, collects the environment data again, and judges whether the environment data exceeds the set condition. Optionally, the set condition may be at least one of: a task queue congestion duration threshold, a resource suspension duration threshold, a task failure number threshold, a task waiting duration threshold and a task retry number threshold.
If the corresponding data in the environment data does not meet the set condition, the flow jumps back to running the tasks and collecting environment data, and the optimal solution of the at least one optimization model is solved again. If the corresponding data in the environment data meets the set condition, the candidate scheduling information at this time can be determined as the target scheduling information, so that the resources of the target device are scheduled to the plurality of tasks and the execution order of the plurality of tasks is scheduled according to the target scheduling information.
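The whole flow of fig. 5 can be compressed into one loop. The sketch below is an assumption about how the pieces fit together; the helper callables (collect_environment, run_with, score, condition_violated) are hypothetical and stand for the data collection, trial execution, evaluation and threshold check described above.

from typing import Callable, Dict, List, Tuple

def scheduling_loop(collect_environment: Callable[[], Tuple[dict, List[dict]]],
                    models: Dict[str, Callable[[dict, List[dict]], dict]],
                    run_with: Callable[[dict], List[Dict[str, float]]],
                    score: Callable[[List[Dict[str, float]]], float],
                    condition_violated: Callable[[List[Dict[str, float]]], bool],
                    max_rounds: int = 10) -> dict:
    """Collect environment data, let every optimization model propose a candidate,
    trial-run each candidate, keep the best-scoring one as the target scheduling
    information, and re-solve whenever the set condition is violated."""
    target: dict = {}
    for _ in range(max_rounds):
        resource_info, task_infos = collect_environment()
        candidates = {name: model(resource_info, task_infos)
                      for name, model in models.items()}
        observations = {name: run_with(sched) for name, sched in candidates.items()}
        best = max(observations, key=lambda name: score(observations[name]))
        target = candidates[best]                  # target scheduling information
        if not condition_violated(run_with(target)):
            break                                  # running state stays within the set condition
        # otherwise: collect environment data again and re-solve in the next round
    return target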
In order to achieve the above embodiment, the present invention further provides a task scheduling device.
Fig. 6 is a schematic structural diagram of a task scheduling device according to an embodiment of the present invention.
As shown in fig. 6, the task scheduling device includes: an acquisition module 61, a determination module 62, a scheduling module 63 and a processing module 64.
An acquisition module 61, configured to acquire task information of a plurality of tasks, and resource information of a target device for running the plurality of tasks;
a determining module 62, configured to determine candidate scheduling information by using at least one optimization model according to the resource information and task information of the plurality of tasks;
a scheduling module 63, configured to schedule resources of the target device to the plurality of tasks and schedule an execution order of the plurality of tasks based on the candidate scheduling information determined by the at least one optimization model, so as to obtain running state parameters of the plurality of tasks;
the processing module 64 is configured to select target scheduling information from the candidate scheduling information determined by the at least one optimization model based on the running state parameters of the plurality of tasks, to schedule resources of the target device to the plurality of tasks according to the target scheduling information, and to schedule an execution order of the plurality of tasks.
Further, in one possible implementation of the embodiment of the present invention, the determining module 62 is further configured to:
inputting the resource information and the task information of the plurality of tasks into the at least one optimization model as input parameters, so as to perform prediction with the at least one optimization model respectively, and obtaining the candidate scheduling information output by the at least one optimization model;
wherein the at least one optimization model comprises at least one of: a maximum value estimation model, a minimum value estimation model, a linear programming model, a multi-element allocation model and a multi-period allocation model.
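The patent does not disclose the internals of these models; as one hedged illustration of the linear programming option, a toy allocation could be solved with scipy.optimize.linprog as below. The objective, the single capacity constraint, and the use of SciPy are illustrative assumptions, not the claimed method.

```python
import numpy as np
from scipy.optimize import linprog

def lp_candidate(required, available):
    """Toy linear-programming allocation along a single resource dimension:
    maximize the total fraction of each task's demand that is granted without
    exceeding the device capacity. `required` plays the role of Xt and
    `available` the role of the corresponding capacity in Yt."""
    required = np.asarray(required, dtype=float)
    n = len(required)
    c = -np.ones(n)                       # maximize sum of granted fractions
    A_ub = required.reshape(1, -1)        # sum_i fraction_i * required_i <= available
    b_ub = np.array([available])
    bounds = [(0.0, 1.0)] * n             # each task gets 0%..100% of its demand
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return required * res.x               # actually allocated amount per task
```

For example, lp_candidate([4, 2, 8], available=10) should grant the two cheaper demands in full and roughly half of the largest one, illustrating how a candidate allocation can be read off the solver output.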
Further, in one possible implementation manner of the embodiment of the present invention, the acquiring module 61 includes:
the monitoring unit is used for monitoring the running states of a plurality of tasks;
the first acquisition unit is used for acquiring the task information of the plurality of tasks and the resource information of the target device for running the plurality of tasks in the case where, at any monitoring moment, the running state is found not to meet the set condition.
Further, in one possible implementation manner of the embodiment of the present invention, the first obtaining unit is further configured to:
under the condition that the running state does not meet the set condition, generating the task information of the plurality of tasks according to the standard running resources required by the plurality of tasks and the dependency relationship among the plurality of tasks;
and generating the resource information according to at least one of the number of tasks run by the target device, the IO interface occupancy rate, the memory occupancy information, the available network transmission bandwidth, and the CPU load.
Further, in a possible implementation manner of the embodiment of the present invention, the monitoring unit is further configured to:
at least one of a task queue congestion time, a resource suspension time, a task failure number, a task waiting time and a task retry number of the plurality of tasks is monitored.
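A minimal sketch of this check, assuming illustrative metric names and a dictionary of thresholds for the set condition (none of these identifiers come from the patent):

```python
def running_state_ok(metrics: dict, limits: dict) -> bool:
    """Return True when every monitored quantity stays within its threshold in the
    set condition; a False result would trigger re-acquisition of task and
    resource information as described above."""
    keys = ("queue_congestion_s", "resource_suspend_s",
            "task_failures", "task_wait_s", "task_retries")
    return all(metrics.get(k, 0) <= limits[k] for k in keys if k in limits)
```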
Further, in one possible implementation of the embodiment of the present invention, the processing module 64 further includes:
the second acquisition unit is used for acquiring, for any one of the optimization models, the running state parameters of the plurality of tasks when scheduled based on the corresponding candidate scheduling information;
the first processing unit is used for averaging, in the case where there are multiple running state parameters, each running state parameter over the plurality of tasks, so as to obtain the average value of each running state parameter;
the determining unit is used for determining an evaluation value of the corresponding optimization model according to the average value of each running state parameter;
and the second processing unit is used for selecting target scheduling information from candidate scheduling information determined by each optimization model according to the evaluation value of each optimization model.
Further, in one possible implementation manner of the embodiment of the present invention, the determining unit is further configured to:
carrying out weighted summation based on the weight of each running state parameter and the average value of each running state parameter, so as to obtain the evaluation value.
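As a hedged sketch of how the evaluation value and the selection could be computed (the weights are taken as given, since the combined weighting method is not detailed here, and treating lower scores as better is an assumption rather than a stated rule):

```python
def evaluate_model(samples, weights):
    """Average each running state parameter over the plurality of tasks, then take
    the weighted sum of the averages as the evaluation value of the model.
    `samples` holds one dict of parameter values per task; `weights` maps each
    parameter name to its weight."""
    averages = {k: sum(s[k] for s in samples) / len(samples) for k in weights}
    return sum(weights[k] * averages[k] for k in weights)

def pick_target(candidates, samples_by_model, weights):
    """Choose the candidate scheduling information of the model with the best
    evaluation value; 'best' is taken here as lowest, treating the monitored
    parameters as costs."""
    scores = {m: evaluate_model(samples_by_model[m], weights) for m in candidates}
    return candidates[min(scores, key=scores.get)]
```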
Further, in one possible implementation manner of the embodiment of the present invention, the scheduling module 63 is further configured to:
for each optimization model, configuring the resources of the plurality of tasks according to the candidate scheduling information determined by that optimization model, and determining the execution order of the plurality of tasks;
and monitoring the running state parameters of the plurality of tasks while the plurality of tasks are executed based on the configured resources and the execution order, wherein the running state parameters include one or a combination of data network traffic, task memory overflow (OOM), task waiting duration, and task resource consumption.
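A rough sketch of this per-candidate monitoring, assuming a hypothetical candidate object that supplies the execution order and resource configuration, and an executor whose results expose the monitored quantities; these interfaces are not defined by the patent.

```python
import time

def run_under_candidate(candidate, tasks, executor):
    """Execute the tasks in the order given by the candidate, with the resources
    it assigns, and record the running state parameters used later for evaluation."""
    stats = {"net_bytes": 0, "oom_events": 0, "wait_s": 0.0, "resources": 0}
    for task in candidate.order(tasks):                      # execution order chosen by the candidate
        queued = time.monotonic()
        result = executor.run(task, candidate.resources_for(task))
        stats["wait_s"]     += result.start_time - queued    # task waiting duration
        stats["net_bytes"]  += result.network_bytes          # data network traffic
        stats["oom_events"] += int(result.oom)               # task memory overflow (OOM)
        stats["resources"]  += result.resources_consumed     # task resource consumption
    return stats
```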
It should be noted that the foregoing explanation of the task scheduling method embodiment is also applicable to the task scheduling device of this embodiment, and will not be repeated here.
The task scheduling device provided by the embodiment of the invention acquires task information of a plurality of tasks and resource information of a target device for running the plurality of tasks, determines candidate scheduling information by adopting at least one optimization model according to the resource information and the task information of the plurality of tasks, schedules the resources of the target device to the plurality of tasks and schedules the execution order of the plurality of tasks based on the candidate scheduling information determined by the at least one optimization model so as to obtain the running state parameters of the plurality of tasks, and selects target scheduling information from the candidate scheduling information determined by the at least one optimization model based on the running state parameters of the plurality of tasks, so as to schedule the resources of the target device to the plurality of tasks and schedule the execution order of the plurality of tasks according to the target scheduling information. Therefore, the optimal scheduling information can be determined according to the running state parameters of each task, and the resource utilization rate is effectively improved.
In order to achieve the above embodiment, the present invention further proposes an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the task scheduling method according to any one of the above embodiments of the present invention.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. It should be noted that the electronic device shown in fig. 7 is only an example, and should not impose any limitation on the functions and application scope of the embodiments of the present invention.
As shown in fig. 7, the electronic device may include: a shell 71, a processor 72, a memory 73, a circuit board 74 and a power supply circuit 75, wherein the circuit board 74 is arranged in the space enclosed by the shell 71, and the processor 72 and the memory 73 are arranged on the circuit board 74; the power supply circuit 75 is used for supplying power to the respective circuits or devices of the electronic device; the memory 73 is used for storing executable program code; and the processor 72 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 73, so as to execute the task scheduling method according to any one of the above embodiments of the present invention.
For the specific implementation of the above steps by the processor 72, and for the further steps implemented by the processor 72 through running the executable program code, reference may be made to the description of the embodiments of fig. 1-5 of the present invention, which will not be repeated here.
In order to implement the above embodiments, the present invention also proposes a computer-readable storage medium storing computer instructions for causing a computer to execute the task scheduling method proposed in any one of the above embodiments of the present invention.
In order to implement the above embodiments, the present invention also proposes a computer program product comprising a computer program which, when executed by a processor, implements the task scheduling method according to any of the above embodiments of the present invention.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
Any process or method description in the flow charts, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. Alternative implementations are also included within the scope of the preferred embodiments of the present invention, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functionality involved, as would be understood by those skilled in the art to which the embodiments of the present invention belong.
Logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, the steps or methods may be implemented using any one of, or a combination of, the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like. While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and are not to be construed as limiting the invention, and that changes, modifications, substitutions and variations may be made to the above embodiments by those of ordinary skill in the art within the scope of the invention.

Claims (9)

1. A task scheduling method, characterized by comprising the following steps:
acquiring task information of a plurality of tasks and resource information of a target device for running the plurality of tasks;
according to the resource information and the task information of the plurality of tasks, adopting at least one optimization model to determine candidate scheduling information, wherein the number of the candidate scheduling information is consistent with the number of the adopted optimization models;
configuring resources of the plurality of tasks according to candidate scheduling information determined by each optimization model, determining an execution sequence of the plurality of tasks, and monitoring running state parameters of the plurality of tasks under the condition that the plurality of tasks are executed based on the configured resources and the execution sequence so as to obtain the running state parameters of the plurality of tasks;
and for any one of the optimization models, acquiring the running state parameters of the plurality of tasks when scheduled based on the corresponding candidate scheduling information, determining the average value of each running state parameter when there are multiple running state parameters, determining the weight of each running state parameter by adopting a combined weighting method, determining the evaluation value of the corresponding optimization model based on the weight and the average value of each running state parameter, selecting the target scheduling information from the candidate scheduling information according to the evaluation value of each optimization model, scheduling the resources of the target device to the plurality of tasks according to the target scheduling information, and scheduling the execution sequence of the plurality of tasks.
2. The method of claim 1, wherein the determining candidate scheduling information using at least one optimization model based on the resource information and the task information for the plurality of tasks comprises:
inputting the resource information and the task information of the plurality of tasks into the at least one optimization model as input parameters, so as to perform prediction with the at least one optimization model respectively, and obtaining the candidate scheduling information output by the at least one optimization model;
wherein the at least one optimization model comprises at least one of the following: a maximum value estimation model, a minimum value estimation model, a linear programming model, a multi-element allocation model and a multi-period allocation model.
3. The method of claim 1, wherein the obtaining task information for a plurality of tasks and resource information for a target device running the plurality of tasks comprises:
monitoring the running states of the plurality of tasks;
and acquiring, in the case where the running state does not meet the set condition, the task information of the plurality of tasks and the resource information of the target device for running the plurality of tasks.
4. The method according to claim 3, wherein the acquiring, in the case where the running state does not meet the set condition, the task information of the plurality of tasks and the resource information of the target device for running the plurality of tasks comprises:
generating, in the case where the running state does not meet the set condition, the task information of the plurality of tasks according to the standard running resources required by the plurality of tasks and the dependency relationship among the plurality of tasks;
and generating the resource information according to at least one of the number of tasks run by the target device, the IO interface occupancy rate, the memory occupancy information, the available network transmission bandwidth, and the CPU load.
5. The method according to claim 3, wherein the monitoring the running states of the plurality of tasks comprises:
monitoring at least one of the task queue congestion duration, the resource suspension duration, the number of task failures, the task waiting duration, and the number of task retries of the plurality of tasks.
6. The method of any one of claims 1-5, wherein the determining the average value of each running state parameter when there are multiple running state parameters comprises:
averaging, in the case where there are multiple running state parameters, each running state parameter over the plurality of tasks to obtain the average value of each running state parameter.
7. The method of any one of claims 1-5, wherein the running state parameters include one or more of data network traffic, task memory overflow (OOM), task waiting duration, and task resource consumption.
8. A task scheduling device, comprising:
the system comprises an acquisition module, a storage module and a control module, wherein the acquisition module is used for acquiring task information of a plurality of tasks and resource information of target equipment for running the plurality of tasks;
the determining module is used for determining candidate scheduling information by adopting at least one optimization model according to the resource information and the task information of the plurality of tasks, wherein the number of the candidate scheduling information is consistent with the number of the adopted optimization models;
the scheduling module is used for configuring resources of the plurality of tasks according to candidate scheduling information determined by each optimization model and determining the execution sequence of the plurality of tasks, and monitoring running state parameters of the plurality of tasks under the condition that the plurality of tasks are executed based on the configured resources and the execution sequence so as to obtain the running state parameters of the plurality of tasks;
the processing module is used for acquiring, for any one of the optimization models, the running state parameters of the plurality of tasks when scheduled based on the corresponding candidate scheduling information, determining the average value of each running state parameter when there are multiple running state parameters, determining the weight of each running state parameter by adopting a combined weighting method, determining the evaluation value of the corresponding optimization model based on the weight and the average value of each running state parameter, selecting the target scheduling information from the candidate scheduling information according to the evaluation value of each optimization model, scheduling the resources of the target device to the plurality of tasks according to the target scheduling information, and scheduling the execution sequence of the plurality of tasks.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
CN202211355252.7A 2022-11-01 2022-11-01 Task scheduling method, device, electronic equipment and storage medium Active CN115756773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211355252.7A CN115756773B (en) 2022-11-01 2022-11-01 Task scheduling method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211355252.7A CN115756773B (en) 2022-11-01 2022-11-01 Task scheduling method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115756773A CN115756773A (en) 2023-03-07
CN115756773B true CN115756773B (en) 2023-08-29

Family

ID=85355008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211355252.7A Active CN115756773B (en) 2022-11-01 2022-11-01 Task scheduling method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115756773B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017166643A1 (en) * 2016-03-31 2017-10-05 乐视控股(北京)有限公司 Method and device for quantifying task resources
CN110059942A (en) * 2019-04-02 2019-07-26 南京邮电大学 A kind of cloud manufacturing recourses service Optimization Scheduling based on fuzzy multiobjective optimization
CN111090502A (en) * 2018-10-24 2020-05-01 阿里巴巴集团控股有限公司 Streaming data task scheduling method and device
CN111190718A (en) * 2020-01-07 2020-05-22 第四范式(北京)技术有限公司 Method, device and system for realizing task scheduling
CN111343275A (en) * 2020-03-02 2020-06-26 北京奇艺世纪科技有限公司 Resource scheduling method and system
CN112506669A (en) * 2021-01-29 2021-03-16 浙江大华技术股份有限公司 Task allocation method and device, storage medium and electronic equipment
CN112596898A (en) * 2020-12-16 2021-04-02 北京三快在线科技有限公司 Task executor scheduling method and device
CN113220378A (en) * 2021-05-11 2021-08-06 中电金信软件有限公司 Flow processing method and device, electronic equipment, storage medium and system
CN114518945A (en) * 2021-12-31 2022-05-20 广州文远知行科技有限公司 Resource scheduling method, device, equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on CPS Resource Service Model and Resource Scheduling; Xu Jiuqiang et al.; Chinese Journal of Computers; Vol. 41, No. 10; pp. 2330-2343 *

Also Published As

Publication number Publication date
CN115756773A (en) 2023-03-07

Similar Documents

Publication Publication Date Title
CN110869909B (en) System and method for applying machine learning algorithms to calculate health scores for workload scheduling
CN109254842B (en) Resource management method and device for distributed stream system and readable storage medium
CN109324875B (en) Data center server power consumption management and optimization method based on reinforcement learning
CN103383655A (en) Performance interference model for managing consolidated workloads in qos-aware clouds
CN111026553B (en) Resource scheduling method and server system for offline mixed part operation
US20200301685A1 (en) Provisioning of software applications on edge devices in an internet-of-things environment
CN115269108A (en) Data processing method, device and equipment
CN112162891A (en) Performance test method in server cluster and related equipment
CN110297743B (en) Load testing method and device and storage medium
Han et al. Performance improvement of Linux CPU scheduler using policy gradient reinforcement learning for Android smartphones
CN114500578A (en) Load balancing scheduling method and device for distributed storage system and storage medium
CN113467944A (en) Resource deployment device and method for complex software system
CN115543626A (en) Power defect image simulation method adopting heterogeneous computing resource load balancing scheduling
CN115421930A (en) Task processing method, system, device, equipment and computer readable storage medium
CN116594913A (en) Intelligent software automatic test method
Naqvi et al. Mascot: self-adaptive opportunistic offloading for cloud-enabled smart mobile applications with probabilistic graphical models at runtime
CN115756773B (en) Task scheduling method, device, electronic equipment and storage medium
US10216606B1 (en) Data center management systems and methods for compute density efficiency measurements
CN115525394A (en) Method and device for adjusting number of containers
CN114936089A (en) Resource scheduling method, system, device and storage medium
Fourati et al. A review of container level autoscaling for microservices-based applications
Lili et al. A Markov chain based resource prediction in computational grid
CN111861012A (en) Test task execution time prediction method and optimal execution node selection method
KR20030005409A (en) Scalable expandable system and method for optimizing a random system of algorithms for image quality
CN114745282B (en) Resource allocation model prediction method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant