CN115421905A - Task scheduling method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115421905A
CN115421905A (application CN202210987579.XA)
Authority
CN
China
Prior art keywords: task, scheduling, queue, historical, proportion
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210987579.XA
Other languages
Chinese (zh)
Inventor
齐强 (Qi Qiang)
王增慧 (Wang Zenghui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Unionpay Co Ltd
Original Assignee
China Unionpay Co Ltd
Application filed by China Unionpay Co Ltd
Priority to CN202210987579.XA
Publication of CN115421905A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5038: Allocation of resources considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F 9/54: Interprogram communication
    • G06F 9/546: Message passing systems or structures, e.g. queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present disclosure provides a task scheduling method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a current task scheduling proportion determined by a monitor for each hierarchical task queue, where the current task scheduling proportion is a real-time scheduling proportion predicted for each hierarchical task queue under the condition of minimizing the cluster blocking state; determining, from each hierarchical task queue, a plurality of tasks to be scheduled corresponding to the current task scheduling proportion; and inserting the plurality of tasks to be scheduled into a buffer queue. Because the current task scheduling proportion is generated in real time, the various problems that arise when a service system schedules big data tasks can be mitigated as far as possible, further improving scheduling efficiency.

Description

Task scheduling method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a task scheduling method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of Internet big data applications, most application platforms provide computing services in cluster mode, mainly because the processing capacity of a cluster far exceeds that of a single machine, so that the large number of tasks submitted by users can be executed in time.
The computing and storage capacity of a big data platform also reflects how well the "data" asset can be leveraged externally, and the timeliness of the results of user data analysis is closely tied to the correctness of the decisions made on their basis. Therefore, for a service system deployed with a cluster, it must be ensured that the output of a data analysis task is available in time.
However, during the scheduling of big data tasks, the number of tasks changes continuously, the task scheduling flow is difficult to partition, and task running times are hard to determine; these factors directly or indirectly mean that big data tasks currently cannot be scheduled well, and the timeliness of task output results is poor.
Disclosure of Invention
Embodiments of the present disclosure provide at least a task scheduling method and apparatus, an electronic device, and a storage medium, so as to improve the timeliness of big data task scheduling.
In a first aspect, an embodiment of the present disclosure provides a task scheduling method, including:
acquiring the current task scheduling proportion determined by a monitor for each hierarchical task queue, the current task scheduling proportion being a real-time scheduling proportion predicted for each hierarchical task queue under the condition of minimizing the cluster blocking state;
determining, from each hierarchical task queue, a plurality of tasks to be scheduled corresponding to the current task scheduling proportion; and
inserting the tasks to be scheduled into a buffer queue, so that the target cluster dequeues the tasks to be scheduled according to the buffering mechanism of the buffer queue and performs the cluster operations.
In one possible embodiment, the method further comprises:
and, in response to the tasks to be scheduled in the buffer queue becoming empty, acquiring the next task scheduling proportion determined by the monitor for each hierarchical task queue.
In a possible implementation, the acquiring of the current task scheduling proportion determined by the monitor for each hierarchical task queue includes:
periodically acquiring the current task scheduling proportion determined by the monitor for each hierarchical task queue.
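As an illustrative sketch only (the patent gives no code, and names such as `schedule_round` are hypothetical), the scheduler-side behavior described above, taking a per-level count of tasks from each hierarchical queue and inserting them into the buffer queue, could look like:

```python
from collections import deque

def schedule_round(monitor_ratio, level_queues, buffer_queue):
    """Draw tasks from each hierarchical queue per the current ratio.

    monitor_ratio: per-level task counts for this round, e.g. [1, 2] (illustrative).
    level_queues:  one FIFO deque per priority level.
    buffer_queue:  the deque the target cluster dequeues from.
    """
    scheduled = []
    for count, queue in zip(monitor_ratio, level_queues):
        # Take up to `count` tasks; do not wait when a queue runs short.
        for _ in range(count):
            if not queue:
                break
            scheduled.append(queue.popleft())
    buffer_queue.extend(scheduled)  # insert into the buffer queue
    return scheduled
```

A short-running queue is simply skipped rather than waited on, matching the description later in the document that the scheduler "need not wait" when a queue lacks tasks.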
In a second aspect, an embodiment of the present disclosure further provides a task scheduling method, including:
determining the current task scheduling proportion for each hierarchical task queue, the current task scheduling proportion being a real-time scheduling proportion predicted for each hierarchical task queue under the condition of minimizing the cluster blocking state; and
in response to a task scheduling request from a scheduler, sending the current task scheduling proportion to the scheduler, so that the scheduler schedules the tasks in each hierarchical task queue into a buffer queue according to the current task scheduling proportion.
In a possible embodiment, the determining of the current task scheduling proportion for each hierarchical task queue includes:
acquiring the current task submission number and the historical task submission number corresponding to each hierarchical task queue;
predicting the future task submission number corresponding to each hierarchical task queue based on the current task submission number and the historical task submission number;
constructing a future cluster blocking state function containing a scheduling proportion parameter, based on the future task submission number, the waiting time currently required by each hierarchical task queue, and the scheduling proportion parameter; and
determining a target parameter value corresponding to the scheduling proportion parameter under the condition of minimizing the future cluster blocking state function, and taking the target parameter value as the current task scheduling proportion.
In a possible implementation, where the future task submission number indicates a prediction result for hour i of day d of month m of year y (y being a positive integer, m a positive integer smaller than 12, d a positive integer smaller than 365, and i a positive integer smaller than 24), the predicting of the future task submission number corresponding to each hierarchical task queue based on the current task submission number and the historical task submission number includes:
determining a first historical-task average submission number based on the historical task submission numbers generated by each hierarchical task queue in hours i-3 to i-1 of day d of month m of year y; determining a second historical-task average submission number based on the historical task submission numbers generated in hour i of days d-3 to d-1 of month m of year y; determining a third historical-task average submission number based on the historical task submission numbers generated in hour i of day d of months m-3 to m-1 of year y; and determining a fourth historical-task average submission number based on the historical task submission numbers generated in hour i of day d of years y-3 to y-1;
determining the task submission increment number to be generated in the future by each hierarchical task queue based on the first, second, third, and fourth historical-task average submission numbers; and
determining the future task submission number based on the current task submission number and the task submission increment number.
In one possible embodiment, the determining of the task submission increment number to be generated in the future by each hierarchical task queue based on the first, second, third, and fourth historical-task average submission numbers includes:
assigning a task weight value to each of the first, second, third, and fourth historical-task average submission numbers; and
determining the task submission increment number as the weighted sum of the four historical-task average submission numbers with their assigned task weight values.
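The weighted-sum step above can be sketched minimally as follows, assuming the four historical averages are already computed; the patent does not specify concrete weight values, so the ones used in the example are purely illustrative:

```python
def predict_future_submissions(current, hist_averages, weights):
    """Predict the future task submission number for one queue.

    hist_averages: the four historical average submission numbers
        (recent hours, recent days, recent months, recent years).
    weights: the task weight values assigned to each average; concrete
        values are not given in the source and are an assumption here.
    """
    # Increment = weighted sum of the four historical averages.
    increment = sum(w * a for w, a in zip(weights, hist_averages))
    # Future submissions = current submissions + predicted increment.
    return current + increment
```

For example, with a current count of 10, averages [4, 2, 2, 2], and illustrative weights [0.4, 0.3, 0.2, 0.1], the prediction is 10 + 2.8 = 12.8.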
In a possible implementation, the determining of the target parameter value corresponding to the scheduling proportion parameter under the condition of minimizing the future cluster blocking state function includes:
traversing preset candidate values of the scheduling proportion parameter and determining the corresponding value of the future cluster blocking state function for each candidate; and
selecting the parameter value that minimizes the function value as the target parameter value.
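The traverse-and-select-minimum step is a plain search over preset candidates. In the sketch below, `blocking_fn` stands in for the future cluster blocking state function, whose concrete form the patent leaves open; the function and candidate values are assumptions for illustration:

```python
def minimize_blocking(blocking_fn, candidate_ratios):
    """Traverse preset scheduling-proportion candidates and keep the one
    whose future-cluster-blocking value is smallest."""
    best_ratio, best_value = None, float("inf")
    for ratio in candidate_ratios:
        value = blocking_fn(ratio)
        if value < best_value:
            best_ratio, best_value = ratio, value
    return best_ratio, best_value
```

With a toy blocking function such as `lambda r: (r[0] - 2)**2 + r[1]`, the search simply returns whichever preset candidate yields the smallest function value.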
In one possible embodiment, the method further comprises:
and receiving the dequeue time of each task of each hierarchical task queue, as recorded by the buffer queue, so as to update the state information of each hierarchical task queue according to the task dequeue times.
In a third aspect, an embodiment of the present disclosure further provides a task scheduling apparatus, including:
an acquisition module configured to acquire the current task scheduling proportion determined by the monitor for each hierarchical task queue, the current task scheduling proportion being a real-time scheduling proportion predicted for each hierarchical task queue under the condition of minimizing the cluster blocking state;
a determining module configured to determine, from each hierarchical task queue, a plurality of tasks to be scheduled corresponding to the current task scheduling proportion; and
a scheduling module configured to insert the plurality of tasks to be scheduled into a buffer queue, so that the target cluster dequeues the tasks to be scheduled according to the buffering mechanism of the buffer queue and performs the cluster operations.
In a fourth aspect, an embodiment of the present disclosure further provides a task scheduling apparatus, including:
a determining module configured to determine the current task scheduling proportion for each hierarchical task queue, the current task scheduling proportion being a real-time scheduling proportion predicted for each hierarchical task queue under the condition of minimizing the cluster blocking state; and
a scheduling module configured to, in response to a task scheduling request from a scheduler, send the current task scheduling proportion to the scheduler, so that the scheduler schedules the tasks in each hierarchical task queue into a buffer queue according to the current task scheduling proportion.
In a fifth aspect, an embodiment of the present disclosure further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing a task scheduling method as set forth in any one of the first aspect and its various embodiments, the second aspect and its various embodiments.
In a sixth aspect, the disclosed embodiments further provide a computer-readable storage medium, where a computer program is stored, and the computer program is executed by a processor to perform the task scheduling method according to any one of the first aspect and its various embodiments, the second aspect and its various embodiments.
With the task scheduling method and apparatus, electronic device, and storage medium above, once the current task scheduling proportion determined by the monitor for each hierarchical task queue is obtained, a plurality of tasks to be scheduled corresponding to that proportion can be determined from each hierarchical task queue and inserted into the buffer queue to realize task scheduling. Because the current task scheduling proportion in the present disclosure is determined by minimizing the cluster blocking state, the resulting scheduling makes better use of cluster resources, and all tasks can be scheduled effectively.
Other advantages of the present disclosure will be explained in more detail in conjunction with the following description and the accompanying drawings.
It should be understood that the above description is only an overview of the technical solutions of the present disclosure, so that the technical solutions of the present disclosure can be more clearly understood and implemented according to the contents of the specification. In order that the manner in which the above-recited and other objects, features and advantages of the disclosure are obtained will be readily understood, particular embodiments of the disclosure are described below by way of illustration.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly described below. The drawings are incorporated in and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It is appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
Also, like reference numerals are used to refer to like elements throughout. In the drawings:
fig. 1 is a flowchart illustrating a task scheduling method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating an application of a task scheduling method provided by an embodiment of the present disclosure;
FIG. 3 is a flow chart illustrating another task scheduling method provided by an embodiment of the present disclosure;
FIG. 4 is a diagram illustrating a task scheduling apparatus according to an embodiment of the disclosure;
FIG. 5 is a diagram illustrating another task scheduling device provided by an embodiment of the present disclosure;
fig. 6 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In the description of the embodiments of the present disclosure, it is to be understood that terms such as "including" or "having" are intended to indicate the presence of the features, numbers, steps, actions, components, parts, or combinations thereof disclosed in the specification, and are not intended to preclude the presence or addition of one or more other features, numbers, steps, actions, components, parts, or combinations thereof.
A "/" indicates an OR meaning, for example, A/B may indicate A or B; "and/or" herein is merely an association relationship describing an associated object, and means that there may be three relationships, for example, a and/or B, and may mean: a exists alone, A and B exist simultaneously, and B exists alone.
The terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or to implicitly indicate the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of such features. In the description of the embodiments of the present disclosure, "a plurality" means two or more unless otherwise specified.
Research shows that the task scheduling methods provided in the related art can be divided into single-queue scheduling and multi-queue scheduling. The main single-queue modes are First-In-First-Out (FIFO) scheduling and Strict Priority (SP) scheduling; the main multi-queue methods include Round-Robin (RR) scheduling, Weighted Fair Queuing (WFQ), and the like.
FIFO scheduling serves tasks in arrival order and suits scenarios where task importance is roughly uniform; when importance differs greatly, highly important tasks are likely to be blocked. SP scheduling serves queues strictly by priority: high-priority queues carry the most important tasks and are always scheduled first, which can starve low-priority queues so that they go unscheduled for long periods. RR scheduling serves multiple queues in a preset proportion, but when the task counts of the priority queues differ greatly, tasks in some queues wait too long. WFQ weights multi-level queues to guarantee per-queue scheduling fairness, but it was designed for packet transmission and depends heavily on how task weights are determined.
Moreover, during the scheduling of big data tasks, the number of tasks changes continuously, the task scheduling flow is difficult to partition, and task running times are hard to determine; these factors directly or indirectly mean that big data tasks currently cannot be scheduled well, and the timeliness of task output results is poor.
Meanwhile, a task scheduling method and system for an enterprise cluster must ensure that the output of a data analysis task is available in time. However, the users served by a typical business system take on multiple roles, including enterprise decision makers, data analysts, data operation and maintenance personnel, and various front-end business personnel; the different tasks issued by these roles often need to be scheduled in a targeted manner, and none of the above solutions meets this big data scheduling requirement.
To at least partially address one or more of the above issues and other potential issues, the present disclosure provides at least one task scheduling scheme that performs task scheduling according to a real-time task scheduling proportion, so as to improve the timeliness of big data task scheduling as much as possible.
To facilitate understanding of the present embodiment, first, a task scheduling method disclosed in the embodiments of the present disclosure is described in detail, where an execution subject of the task scheduling method provided in the embodiments of the present disclosure is generally an electronic device with certain computing capability, and the electronic device includes, for example: a server or other processing device. In some possible implementations, the task scheduling method may be implemented by a processor invoking computer readable instructions stored in a memory.
Referring to fig. 1, which is a flowchart of a task scheduling method provided in an embodiment of the present disclosure, an execution subject of the method may be a scheduler (a scheduling process corresponding to a server) for scheduling a cluster task, and the method includes steps S101 to S103, where:
S101: acquiring the current task scheduling proportion determined by a monitor for each hierarchical task queue, the current task scheduling proportion being a real-time scheduling proportion predicted for each hierarchical task queue under the condition of minimizing the cluster blocking state;
S102: determining, from each hierarchical task queue, a plurality of tasks to be scheduled corresponding to the current task scheduling proportion;
S103: inserting the tasks to be scheduled into a buffer queue, so that the target cluster dequeues the tasks to be scheduled according to the buffering mechanism of the buffer queue and performs the cluster operations.
To facilitate understanding of the task scheduling method provided by the embodiments of the present disclosure, its application scenario is first described. The method is mainly applicable to scheduling cluster tasks, where a task may be one issued by any user role connected to the related service system, for example a concrete application task issued by an enterprise decision maker, a data analyst, data operation and maintenance personnel, or various front-end business personnel.
Enqueuing can be performed according to task priority; that is, after the application server splits big data tasks by priority, the tasks can be enqueued in descending order of priority, with one or more tasks corresponding to each hierarchical task queue.
In the embodiments of the present disclosure, the task queues may be divided into any number of levels greater than 2; for example, three levels: high, medium, and low. The number of task queues is not specifically limited here and can be customized according to the importance of the service types corresponding to the big data cluster.
In practical applications, to better meet diverse task requirements, the queues may be divided into 5 levels, such as queue 1 to queue 5 shown in fig. 2. The priorities of queues 1 to 5 increase in turn, corresponding respectively to ordinary user tasks (lowest priority), daily data analysis tasks (lower priority), data analysis tasks that assist leadership decision-making (moderate priority), data analysis tasks in key scenarios (higher priority), and basic and intermediate data generation tasks in key scenarios (highest priority). For ease of explanation, the 5-level task queues (corresponding to 5 queues) are used as the example below.
Whichever level a queue belongs to, it can be a FIFO queue; that is, the multi-level task queue can consist of five FIFO queues. Considering the practical risk that the connected application servers become overloaded, each queue is set to hold a limited number of tasks, i.e., five finite-length FIFO queues are provided, and the length of each queue can be customized according to the carrying capacity of the application servers and the distribution of task counts across types.
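A length-limited FIFO queue of the kind described, one that rejects new tasks when full rather than blocking, might be sketched as follows; the class and method names are assumptions, and the maximum length would in practice be tuned to application-server capacity:

```python
from collections import deque

class BoundedFifoQueue:
    """A finite-length FIFO task queue; the length limit guards the
    application server against overload (limit value is illustrative)."""

    def __init__(self, maxlen):
        self._items = deque()
        self._maxlen = maxlen

    def enqueue(self, task):
        if len(self._items) >= self._maxlen:
            return False  # queue full: reject rather than block
        self._items.append(task)
        return True

    def dequeue(self):
        # First-in-first-out: oldest task leaves first; None if empty.
        return self._items.popleft() if self._items else None

# Five such queues, one per priority level, form the multi-level task queue.
multi_level_queues = [BoundedFifoQueue(maxlen=100) for _ in range(5)]
```

Each level could be given a different `maxlen` to reflect the expected volume of tasks at that priority.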
For each hierarchical task queue, the task scheduling method provided by the embodiments of the present disclosure may determine a corresponding current task scheduling proportion via the monitor; this proportion is a real-time scheduling proportion predicted for each hierarchical task queue under the condition of minimizing the cluster blocking state. The scheduler may then, based on this proportion, determine the corresponding tasks to be scheduled from each hierarchical task queue and insert them into the buffer queue. Here, the tasks to be scheduled may be a set of waiting tasks taken from the hierarchical task queues. The purpose of minimizing the cluster blocking state is to maximize the response and processing capacity of the cluster, so that the determined scheduling proportion is more likely to meet the requirements of big data task scheduling.
After the buffer-queue mechanism is added, the scheduling proportion of tasks at each priority can be effectively controlled. For example, once the monitor determines the current task scheduling proportion of the 5-level task queues, tasks are drawn from the queues in that proportion; the task scheduling proportion can then be adjusted based on the new cluster blocking state, so that the whole cluster runs in an optimal state.
The buffer queue serves as the unified interface for distributing tasks to the computing cluster and effectively controls the flow. When the buffer queue is empty, the scheduler is triggered to take the corresponding tasks from the multi-level queues and insert them into the buffer queue according to each queue's flow (the next task scheduling proportion).
As shown in fig. 2, the dequeue order in the buffer queue is unrelated to priority; tasks are dequeued strictly by the time they entered the multi-level queue. This effectively guarantees fairness for every task that enters the buffer queue in each scheduling round and prevents high-priority tasks from continuously preempting.
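The time-based dequeue order can be illustrated as a sort on the multi-level-queue entry time while ignoring priority; the `enqueue_time` and `priority` field names below are assumptions, not fields defined by the patent:

```python
def buffer_dequeue_order(tasks):
    """Order buffer-queue tasks strictly by the time they entered the
    multi-level queue; priority plays no role in the dequeue order."""
    return sorted(tasks, key=lambda t: t["enqueue_time"])
```

So a low-priority task that entered the multi-level queue earlier dequeues ahead of a high-priority task that entered later, which is exactly the fairness property described above.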
As the scheduling core for cluster tasks, the scheduler in the embodiments of the present disclosure is mainly responsible for adding the waiting tasks (i.e., the tasks to be scheduled) in the multi-level task queues into the buffer queue according to the given proportion, and for dequeuing the head task of the buffer queue into the computing cluster to launch that data task when the cluster has free resources.
When the task scheduling proportion set by the scheduler is a:b:c:d:e and the buffer queue is empty, the scheduler can take a, b, c, d, and e tasks from the heads of queues 1 to 5 and insert them into the buffer queue. When a queue does not hold enough tasks, the scheduler need not wait. Although this strategy cannot guarantee that the proportion of tasks at each priority matches the set proportion exactly for any single buffer-queue insertion, the proportion over a period of time is well maintained, and the redundant waiting delay caused by strictly controlling dequeue flow is effectively avoided.
It should be noted that, because the monitor monitors in real time, the current task scheduling proportion changes over time; in the embodiments of the present disclosure, the current task scheduling proportion determined by the monitor for each hierarchical task queue may be acquired periodically to implement periodic scheduling by the scheduler.
Considering the critical role that determining the current task scheduling proportion plays in realizing task scheduling, the related method is further described below with the monitor as the execution subject.
As shown in fig. 3, a flowchart of another task scheduling method provided in the embodiment of the present disclosure, an execution subject of the method may be a component for automatically determining an appropriate current task scheduling proportion (corresponding to a monitoring process of a server), and includes steps S301 to S302, where:
S301: determining the current task scheduling proportion for each hierarchical task queue, the current task scheduling proportion being a real-time scheduling proportion predicted for each hierarchical task queue under the condition of minimizing the cluster blocking state;
S302: in response to a task scheduling request from the scheduler, sending the current task scheduling proportion to the scheduler, so that the scheduler schedules the tasks in each hierarchical task queue into the buffer queue according to the current task scheduling proportion.
During operation, the monitor in the embodiments of the present disclosure can automatically determine an appropriate task scheduling proportion so as to minimize the cluster blocking state; by minimizing the cluster blocking state at the end of each period, the response and processing capability of the cluster can be maximized, which guarantees the stability of the whole cluster-task scheduling process.
Here, a current task scheduling ratio may be determined for each hierarchy of task queues, and then the determined current task scheduling ratio may be transmitted to the scheduler in response to a task scheduling request of the scheduler to enable the scheduler to perform task scheduling based on the current task scheduling ratio.
The related task scheduling request may be sent whenever there is a task scheduling requirement, and the monitor feeds back the determined real-time scheduling proportion. The request may be triggered periodically or by some trigger mechanism; for example, it may be triggered during the process of updating a task state.
The scheduler may schedule the tasks in the task queues of each hierarchy to the buffer queue according to the current task scheduling ratio, where the relevant description of the task queues and the buffer queues of each hierarchy is referred to above, and is not described herein again.
The task scheduling method provided by the embodiment of the disclosure can determine the current task scheduling proportion according to the following steps:
step one, acquiring the current task submission number and the historical task submission number corresponding to each level of task queue;
step two, predicting the future task submission number corresponding to each level of task queue based on the current task submission number and the historical task submission number;
step three, constructing a future cluster blocking state function containing a scheduling proportion parameter based on the future task submission number, the waiting time currently required by each level task queue and the scheduling proportion parameter;
and step four, determining a target parameter value corresponding to the scheduling proportion parameter under the condition of minimizing the future cluster blocking state function, and taking the target parameter value as the current task scheduling proportion.
Here, the current task submission number and the historical task submission number corresponding to each level of task queue may be obtained first; the future task submission number may then be predicted from them, and a future cluster blocking state function containing a scheduling proportion parameter may be constructed based on the future task submission number. That is, both the future task submission number and the scheduling proportion can change the future cluster blocking state. Finally, the current task scheduling proportion may be determined by minimizing the future cluster blocking state function.
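The four steps above can be sketched as follows, with `predict_increment` and `blocking_state` passed in as hypothetical stand-ins for the prediction models described later (none of these names appear in the source):

```python
from itertools import product

def current_scheduling_proportion(current_counts, predict_increment,
                                  wait_times, blocking_state):
    """Steps one and two: predict per-queue future submissions.
    Steps three and four: minimize the future blocking state over
    candidate proportion vectors (each component an integer 0..9)."""
    future_counts = [n + predict_increment(q)
                     for q, n in enumerate(current_counts)]
    best, best_value = None, float("inf")
    # Traverse every candidate scheduling-proportion vector.
    for ratios in product(range(10), repeat=len(current_counts)):
        value = blocking_state(future_counts, wait_times, ratios)
        if value < best_value:
            best, best_value = ratios, value
    return best
```

With five queue levels and components 0 to 9, this exhaustive traversal matches the 10^5 search space described later in the text.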
In practical applications, for better prediction of the scheduling ratio, the historical task submission number may be a plurality of historical task submissions in one or more historical periods, and the corresponding predicted future task submission number may be the sum of the number of task submissions in one period in the future and the number of current task submissions.
In consideration of different influences of different years, months, days, times and the like in an actual business system, the prediction of the future task submissions can be carried out by comprehensively considering the historical task submissions with the period of the year, the historical task submissions with the period of the month, the historical task submissions with the period of the day and the historical task submissions with the period of the time, and the prediction can be specifically realized according to the following steps:
Step one, determining the first historical task average submission number based on the historical task submission numbers generated by each level of task queue within hours i-3 to i-1 of day d, month m, year y; determining the second historical task average submission number based on the historical task submission numbers generated by each level of task queue within the i-th hour of days d-3 to d-1 of month m, year y; determining the third historical task average submission number based on the historical task submission numbers generated by each level of task queue within the i-th hour of day d in months m-3 to m-1 of year y; determining the fourth historical task average submission number based on the historical task submission numbers generated by each level of task queue within the i-th hour of day d, month m in years y-3 to y-1;
Step two, determining the task submission increment number generated in the future by each level of task queue based on the first historical task average submission number, the second historical task average submission number, the third historical task average submission number and the fourth historical task average submission number;
and step three, determining the future task submission number based on the current task submission number and the task submission increment number.
The future task submission number is determined by combining the current task submission number and the task submission increment number; the following description focuses on the prediction process of the task submission increment number.
In the embodiment of the disclosure, the method may be implemented in combination with a first prediction model, where an input of the first prediction model is historical task submission numbers in each historical period, and an output is a predicted task submission increment number in the next period.
In the process of predicting, considering different influences of different periods on the prediction result, different task weights can be given to the average submission number of each historical task, so as to determine the task submission increment number in a weighted summation mode.
In the embodiment of the disclosure, by analyzing the historical task submissions, it is determined that the task submissions show strong regularity in the periods of day and night, holidays/workdays, early months/late months, early years/late years and the like. Therefore, in practical applications, the embodiment of the present disclosure may use a Moving Average (MA) prediction algorithm based on a time series to predict the number of task submission increases within a certain hour of a certain day, specifically:
ΔN(y, m, d, i) = w_1 · N̄_1 + w_2 · N̄_2 + w_3 · N̄_3 + w_4 · N̄_4

where the left side of the equation represents the task submission increment number to be predicted, the start time of the cycle being hour i of day d, month m, year y, and N̄_1 to N̄_4 are the four historical averages weighted by w_1 to w_4. To derive the prediction, it is necessary to obtain the average number of task submissions within hours i-3 to i-1 of day d, month m, year y (corresponding to the first historical task average submission number), within the i-th hour of days d-3 to d-1 of month m, year y (corresponding to the second historical task average submission number), within the i-th hour of day d in months m-3 to m-1 of year y (corresponding to the third historical task average submission number), and within the i-th hour of day d, month m in years y-3 to y-1 (corresponding to the fourth historical task average submission number).
In addition, days can be divided into workdays and holidays, and only working time is counted: day d-3 is then the third preceding workday (Saturday and Sunday being holidays), and hour i-3 the third preceding working hour; for example, when predicting for 9 a.m., hour i-3 falls at 4 p.m. of the previous day (the working hours being 09:00 to 19:00). The four prediction inputs above are weighted according to their degree of correlation with the predicted value: 40% (corresponding to a first task weight of 0.4), 30% (corresponding to a second task weight of 0.3), 20% (corresponding to a third task weight of 0.2), and 10% (corresponding to a fourth task weight of 0.1). The first prediction model depends on the submission characteristics of the cluster tasks and can be adjusted as those characteristics actually change; other time-series-based prediction algorithms, such as the Autoregressive Moving Average model (ARMA) and the Autoregressive Integrated Moving Average model (ARIMA), can also be used.
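The weighted summation just described can be written down directly; the function name is illustrative and the default weights follow the 40/30/20/10 split above:

```python
def task_submission_increment(hour_avg, day_avg, month_avg, year_avg,
                              weights=(0.4, 0.3, 0.2, 0.1)):
    """Moving-average estimate of the next cycle's task submission
    increment from the four historical averages described above."""
    averages = (hour_avg, day_avg, month_avg, year_avg)
    return sum(w * avg for w, avg in zip(weights, averages))
```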
It should be noted that, in the process of calculating the average number of submissions of each historical task, the disclosed embodiment considers the number of submissions of historical tasks in three periods, that is, in the process of predicting the number of increments of submissions of tasks in a certain hour of a certain day of a certain month of a certain year, reference is made to the number of submissions of historical tasks in three hours before the hour, the number of submissions of historical tasks in the hour of the three days before the day, the number of submissions of historical tasks in the hour of the day of the three months before the month, and the number of submissions of historical tasks in the day of the month of the three years before the year. In the actual prediction process, the reference cycle unit and the cycle number corresponding to the corresponding cycle unit can be flexibly adjusted.
Here, only two of the cycle units may be referred to (for example, only the historical submission numbers with day and hour as cycle units), or only three of them (for example, only months, days and hours); alternatively, historical submission numbers over four, five or more cycles may be taken into consideration. Taking the hour as the cycle unit as an example, the historical submission numbers within the four hours before the hour may be referred to; the adjustment may also be made in other ways, which is not specifically limited here.
Once the future task submission number is determined, the current task scheduling proportion may be determined based on minimizing the future cluster blocking state function. This may be realized by the following steps:
step one, determining a function value corresponding to a future cluster blocking state function under the condition of traversing a parameter value corresponding to a preset scheduling proportion parameter;
and step two, selecting the parameter value which enables the function value to be minimum as the target parameter value.
The future cluster blocking state function constructed here takes the scheduling proportion parameter as its independent variable: when the scheduling proportion changes, the function value changes accordingly. Besides the scheduling proportion parameter, the construction of the state function refers to the future task submission number and the waiting time currently required by each level of task queue, and may be combined with a second prediction model.
The second prediction model in the embodiment of the present disclosure may obtain the most appropriate scheduling proportion for the next cycle through brute-force search, according to the queuing state of the current cluster tasks (corresponding to the waiting time) and the predicted future task submission number for the next cycle. The size of the search space is 10^5, i.e., the proportion of each type of priority task is an integer between 0 and 9. In practical applications, the second prediction model may adopt a Support Vector Regression (SVR) prediction algorithm whose kernel function is a linear kernel; the input is a one-dimensional vector x = <x_1, x_2, x_3, x_4, x_5> containing five elements, and the output is the blocking state y of the cluster at the end of the next cycle.
The cluster blocking state is positively correlated with the number of queued tasks, the mean task waiting time and the median task waiting delay in each level of queue, and it grows exponentially with the delays of, and the number of, tasks whose waiting exceeds the user tolerance value. Therefore, embodiments of the present disclosure may employ the following evaluation model to represent the blocking state of a cluster:
[evaluation model formula]

wherein the two quantities in the formula respectively represent the average waiting time and the maximum waiting time of the tasks in the queue with priority p, measured in priority-scaled units (one unit corresponding to 1/p hours for the priority-p queue). For example, when the waiting time of queue 5 exceeds 1 hour, queue 5 is blocked quite seriously: in terms of user perception, it is equivalent to the waiting time of queue 1 exceeding 5 hours.
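Because the evaluation model itself appears above only as an image, the following sketch is an assumption-laden illustration consistent with the qualitative description (queued-task count, mean and median waits contribute positively; waits above the user tolerance are penalized exponentially; queue p's waiting counts p-fold); all weights and names are invented:

```python
import math

def blocking_state(queues, tolerance=1.0):
    """Illustrative cluster blocking metric; `queues` maps a priority p
    to the list of waiting times (in hours) of its queued tasks."""
    total = 0.0
    for p, waits in queues.items():
        if not waits:
            continue
        scaled = sorted(p * w for w in waits)   # priority-p wait counts p-fold
        mean_wait = sum(scaled) / len(scaled)
        median_wait = scaled[len(scaled) // 2]
        # Exponential penalty for waits above the user tolerance value.
        penalty = sum(math.exp(w - tolerance) - 1
                      for w in scaled if w > tolerance)
        total += len(waits) + mean_wait + median_wait + penalty
    return total
```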
Thus, the values of the elements of the input vector of the cluster blocking state prediction model take the form

x_p = (N_p^cur + N_p^pred) / R_p

where R_p represents the scheduling proportion parameter, N_p^cur represents the current task submission number, and N_p^pred represents the predicted task submission increment number. Each element value corresponds to one component of the one-dimensional vector; it is proportional to the current cluster blocking state and to the total future task submission number expected to be scheduled in the next cycle (the sum of the current task submission number and the predicted task submission increment number for the next cycle), and inversely proportional to the scheduling duty ratio of the corresponding priority queue.
In practical application, the second prediction model can be trained on a combination of simulation data and actual operation data, so that a better task scheduling proportion can be obtained. Taking per-cycle task numbers of [12,40,46,49,79] as an example, the optimal task scheduling ratio obtained by space search, without considering the waiting-delay state of the current queues, is 2. The predicted value obtained by feeding this scheduling proportion into the cluster blocking state prediction model (i.e., the second prediction model) is 0.5, with different predicted values when the input ratio is 9 or 1. The prediction result is positively correlated with the cluster blocking state; by searching the search space for the input proportion with the smallest predicted value, the method adapts itself to the running condition of the cluster tasks.
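As an illustration of the training step, the sketch below substitutes an ordinary least-squares linear fit for the linear-kernel SVR named above (to stay dependency-light beyond NumPy) and trains it on synthetic (feature, blocking-state) pairs, with each feature built as submissions divided by the scheduling ratio; all data, labels and names here are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
future = rng.integers(10, 80, size=(300, 5)).astype(float)  # predicted submissions
ratios = rng.integers(1, 10, size=(300, 5)).astype(float)   # candidate proportions
X = future / ratios              # feature x_p grows with load, shrinks with R_p
y = X.sum(axis=1) + rng.normal(0.0, 0.1, 300)               # synthetic blocking label
coef, *_ = np.linalg.lstsq(X, y, rcond=None)                # fitted weights, roughly 1

def predict_blocking(x):
    """Predicted blocking state for one five-element input vector."""
    return float(np.asarray(x, dtype=float) @ coef)
```

In practice, `sklearn.svm.SVR(kernel="linear")` could be dropped in place of the least-squares fit without changing the surrounding brute-force search.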
By adaptively adjusting the proportion of the number of tasks scheduled from each level of queue, the method can effectively prevent the waiting time of various tasks from becoming too long and effectively avoid starvation of low-priority queues. Compared with scheduling each queue's tasks in a fixed proportion, dynamically adjusting the scheduling proportion can, on the one hand, better cope with burst task flows; on the other hand, the waiting delay of each priority queue can be controlled by regulating the scheduling proportion at a finer granularity (for example, within one day). In practical application, when a large number of important priority-5 tasks need to be scheduled, dynamic adjustment guarantees the priority scheduling of high-priority queue tasks better than a fixed proportion, significantly reducing the maximum task delay and proving more practical.
The task scheduling method provided by the embodiment of the disclosure can also receive the dequeue time of each task corresponding to each level of task queue recorded by the cache queue, so as to update the state information of each level of task queue according to the task dequeue time, thereby enabling the monitor to update the scheduling proportion in a better self-adaptive manner, and ensuring the orderly progress of the scheduling mechanism.
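A minimal sketch of the dequeue-time feedback just described; the class and attribute names are illustrative and do not come from the source:

```python
import time
from collections import deque

class CacheQueue:
    """Buffer queue that records each task's dequeue time so the
    monitor can refresh the state of every level of task queue."""
    def __init__(self):
        self._tasks = deque()        # (priority, task) pairs, FIFO order
        self.dequeue_log = []        # (priority, dequeue timestamp)

    def push(self, priority, task):
        self._tasks.append((priority, task))

    def pop(self):
        priority, task = self._tasks.popleft()
        self.dequeue_log.append((priority, time.time()))
        return task
```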
In the description of the present specification, reference to the description of the terms "some possible embodiments," "some embodiments," "examples," "specific examples," or "some examples," or the like, means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the various embodiments or examples and features of the various embodiments or examples described in this specification can be combined and combined by those skilled in the art without contradiction.
With regard to the method flow diagrams of the disclosed embodiments, certain operations are described as different steps performed in a certain order. Such flow diagrams are illustrative and not restrictive. Certain steps described herein may be grouped together and performed in a single operation, certain steps may be separated into sub-steps, and certain steps may be performed in an order different than presented herein. The various steps shown in the flowcharts may be implemented in any way by any circuit structure and/or tangible mechanism (e.g., by software running on a computer device, hardware (e.g., logical functions implemented by a processor or chip), etc., and/or any combination thereof).
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides a task scheduling apparatus corresponding to the task scheduling method, and as the principle of solving the problem of the apparatus in the embodiment of the present disclosure is similar to the task scheduling method in the embodiment of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and the repeated parts are not described again.
Referring to fig. 4, a schematic diagram of a task scheduling apparatus provided in an embodiment of the present disclosure is shown, where the apparatus includes: an acquisition module 401, a determination module 402, and a scheduling module 403; wherein:
an obtaining module 401, configured to obtain a current task scheduling proportion determined by the monitor for each level of task queue; the current task scheduling proportion is a real-time scheduling proportion predicted by aiming at each level of task queues under the condition of minimizing the cluster blocking state;
a determining module 402, configured to determine multiple to-be-scheduled tasks corresponding to a current task scheduling ratio from each hierarchical task queue;
the scheduling module 403 is configured to insert a plurality of tasks to be scheduled into the cache queue, so that the target cluster dequeues the plurality of tasks to be scheduled according to a cache mechanism of the cache queue and performs cluster operation.
By adopting the above task scheduling apparatus, once the current task scheduling proportion determined by the monitor for each level of task queue is obtained, a plurality of tasks to be scheduled corresponding to the current task scheduling proportion can be determined from each level of task queue, and the plurality of tasks to be scheduled are then inserted into the cache queue to implement task scheduling. Because the current task scheduling proportion in the present disclosure is determined based on minimizing the cluster blocking state, the resulting scheduling makes fuller use of cluster resources and enables all tasks to be scheduled effectively.
In a possible implementation, the obtaining module 401 is further configured to:
acquiring the next task scheduling proportion determined by the monitor for each level of task queue in response to the tasks to be scheduled in the buffer queue being empty.
In a possible implementation manner, the obtaining module 401 is configured to obtain a current task scheduling ratio determined by the monitor for each hierarchical task queue according to the following steps:
and periodically acquiring the current task scheduling proportion determined by the monitor for each layer of task queue.
Referring to fig. 5, a schematic diagram of another task scheduling apparatus provided in an embodiment of the present disclosure is shown, where the apparatus includes: a determining module 501 and a scheduling module 502; wherein:
a determining module 501, configured to determine a current task scheduling ratio for each level of task queue; the current task scheduling proportion is a real-time scheduling proportion predicted by aiming at each level of task queues under the condition of minimizing the cluster blocking state;
the scheduling module 502 is configured to send the current task scheduling proportion to the scheduler in response to a task scheduling request of the scheduler, so that the scheduler schedules the tasks in the task queues at the different levels to the buffer queue according to the current task scheduling proportion.
In a possible implementation manner, the determining module 501 is configured to determine a current task scheduling proportion for each level of task queue according to the following steps:
acquiring the current task submission number and the historical task submission number corresponding to each level task queue;
predicting to obtain the future task submission number corresponding to each level of task queue based on the current task submission number and the historical task submission number;
constructing a future cluster blocking state function containing a scheduling proportion parameter based on the future task submission number, the waiting time currently required by each level of task queue and the scheduling proportion parameter;
and determining a target parameter value corresponding to the scheduling proportion parameter under the condition of minimizing the future cluster blocking state function, and taking the target parameter value as the current task scheduling proportion.
In a possible implementation manner, in the case that the future task submission number indicates a prediction result for the i-th hour of day d, month m, year y (y being a positive integer, m a positive integer smaller than 12, d a positive integer smaller than 365, and i a positive integer smaller than 24), the determining module 501 is configured to predict the future task submission number corresponding to each level of task queue based on the current task submission number and the historical task submission number according to the following steps:
determining the first historical task average submission number based on the historical task submission numbers generated by each level of task queue within hours i-3 to i-1 of day d, month m, year y; determining the second historical task average submission number based on the historical task submission numbers generated by each level of task queue within the i-th hour of days d-3 to d-1 of month m, year y; determining the third historical task average submission number based on the historical task submission numbers generated by each level of task queue within the i-th hour of day d in months m-3 to m-1 of year y; determining the fourth historical task average submission number based on the historical task submission numbers generated by each level of task queue within the i-th hour of day d, month m in years y-3 to y-1;
determining the task submission increment number generated in the future by each level of task queue based on the first historical task average submission number, the second historical task average submission number, the third historical task average submission number and the fourth historical task average submission number;
and determining the future task submission number based on the current task submission number and the task submission increment number.
In one possible implementation, the determining module 501 is configured to determine the task submission increment number generated in the future by each level of task queue based on the first historical task average submission number, the second historical task average submission number, the third historical task average submission number and the fourth historical task average submission number according to the following steps:
respectively assigning task weight values to the first historical task average submission number, the second historical task average submission number, the third historical task average submission number and the fourth historical task average submission number;
and determining the task submission increment number based on a weighted summation of the four historical task average submission numbers with their assigned task weight values.
In a possible implementation, the determining module 501 is configured to determine a target parameter value corresponding to the scheduling proportion parameter when the future cluster congestion state function is minimized, according to the following steps:
determining a function value corresponding to a future cluster blocking state function under the condition of traversing a parameter value corresponding to a preset scheduling proportion parameter;
and selecting the parameter value which enables the function value to be minimum as the target parameter value.
In a possible implementation manner, the scheduling module 502 is further configured to receive dequeue time of each task corresponding to each hierarchical task queue recorded in the buffer queue, so as to update state information of each hierarchical task queue according to the task dequeue time.
It should be noted that the apparatus in the embodiment of the present application may implement each process of the foregoing method embodiment, and achieve the same effect and function, which are not described herein again.
An embodiment of the present disclosure further provides an electronic device, as shown in fig. 6, which is a schematic structural diagram of the electronic device provided in the embodiment of the present disclosure, and the electronic device includes: a processor 601, a memory 602, and a bus 603. The memory 602 stores machine-readable instructions executable by the processor 601 (for example, execution instructions corresponding to the obtaining module 401, the determining module 402, and the scheduling module 403 in the apparatus in fig. 4; further, execution instructions corresponding to the determining module 501, and the scheduling module 502 in the apparatus in fig. 5, and the like), when the electronic device is operated, the processor 601 and the memory 602 communicate through the bus 603, and the machine-readable instructions are executed by the processor 601 to perform the steps of the task scheduling method shown in fig. 1 or fig. 3.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the task scheduling method in the foregoing method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the task scheduling method in the foregoing method embodiments, which may be referred to specifically in the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
The embodiments in the present application are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the description of the apparatus, device, and computer-readable storage medium embodiments is simplified because they are substantially similar to the method embodiments, and reference may be made to some descriptions of the method embodiments for related aspects.
The apparatus, the device, and the computer-readable storage medium provided in the embodiments of the present application correspond to the method one to one, and therefore, the apparatus, the device, and the computer-readable storage medium also have similar beneficial technical effects to the corresponding method.
It will be appreciated by one skilled in the art that embodiments of the present disclosure may be provided as a method, apparatus (device or system), or computer-readable storage medium. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer-readable storage medium embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices or systems), and computer-readable storage media according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.

Further, while the operations of the disclosed methods are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be broken down into multiple steps.
While the spirit and principles of the present disclosure have been described with reference to several particular embodiments, it is to be understood that the disclosure is not limited to the particular embodiments disclosed. The division into aspects is merely for convenience of presentation and does not imply that features in those aspects cannot be combined to advantage. The disclosure is intended to cover the various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (13)

1. A method for task scheduling, comprising:
acquiring a current task scheduling proportion determined by a monitor for each hierarchical task queue, the current task scheduling proportion being a real-time scheduling proportion predicted for each hierarchical task queue so as to minimize a cluster blocking state;
determining, from the hierarchical task queues, a plurality of tasks to be scheduled corresponding to the current task scheduling proportion; and
inserting the tasks to be scheduled into a buffer queue, so that a target cluster dequeues the tasks to be scheduled according to a caching mechanism of the buffer queue and performs a cluster operation.
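The scheduling loop of claim 1 can be sketched as follows. This is an illustrative reading of the claim, not the patented implementation; the monitor interface, the queue-level names, and the batch size are all assumptions made for the example:

```python
from collections import deque

def schedule_round(monitor, level_queues, buffer_queue, batch_size):
    """One scheduling round in the spirit of claim 1 (illustrative names)."""
    # Step 1: obtain the current scheduling proportion per queue level
    # from the monitor, e.g. {"high": 0.5, "low": 0.5}.
    proportions = monitor.current_proportions()
    # Step 2: select from each hierarchical queue the number of tasks
    # corresponding to its share of the batch.
    for level, share in proportions.items():
        take = int(batch_size * share)
        queue = level_queues[level]
        for _ in range(min(take, len(queue))):
            # Step 3: insert into the buffer queue; the target cluster
            # dequeues from it under its own caching mechanism.
            buffer_queue.append(queue.popleft())
```

In this reading, the scheduler never talks to the cluster directly: it only moves tasks from the per-level queues into the shared buffer queue, and the cluster drains that buffer at its own pace.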
2. The method of claim 1, further comprising:
acquiring, in response to the tasks to be scheduled in the buffer queue being exhausted, a next task scheduling proportion determined by the monitor for each hierarchical task queue.
3. The method of claim 1, wherein acquiring the current task scheduling proportion determined by the monitor for each hierarchical task queue comprises:
periodically acquiring the current task scheduling proportion determined by the monitor for each hierarchical task queue.
4. A method for task scheduling, comprising:
determining a current task scheduling proportion for each hierarchical task queue, the current task scheduling proportion being a real-time scheduling proportion predicted for each hierarchical task queue so as to minimize a cluster blocking state; and
sending, in response to a task scheduling request from a scheduler, the current task scheduling proportion to the scheduler, so that the scheduler schedules tasks in the hierarchical task queues into a buffer queue according to the current task scheduling proportion.
5. The method of claim 4, wherein determining the current task scheduling proportion for each hierarchical task queue comprises:
acquiring a current task submission number and historical task submission numbers corresponding to each hierarchical task queue;
predicting a future task submission number corresponding to each hierarchical task queue based on the current task submission number and the historical task submission numbers;
constructing a future cluster-blocking state function containing a scheduling proportion parameter, based on the future task submission number, the waiting time currently required by each hierarchical task queue, and the scheduling proportion parameter; and
determining a target parameter value of the scheduling proportion parameter that minimizes the future cluster-blocking state function, and taking the target parameter value as the current task scheduling proportion.
6. The method of claim 5, wherein, when the future task submission number indicates a prediction for the ith hour of day d of month m of year y, y being a positive integer, m being a positive integer less than 12, d being a positive integer less than 365, and i being a positive integer less than 24, predicting the future task submission number corresponding to each hierarchical task queue based on the current task submission number and the historical task submission numbers comprises:
determining a first historical average submission number based on the historical task submission numbers generated by each hierarchical task queue during the (i-3)th to (i-1)th hours of day d of month m of year y; determining a second historical average submission number based on the historical task submission numbers generated by each hierarchical task queue during the ith hour of days d-3 to d-1 of month m of year y; determining a third historical average submission number based on the historical task submission numbers generated by each hierarchical task queue during the ith hour of day d of months m-3 to m-1 of year y; and determining a fourth historical average submission number based on the historical task submission numbers generated by each hierarchical task queue during the ith hour of day d of month m of years y-3 to y-1;
determining a task submission increment to be generated in the future by each hierarchical task queue based on the first, second, third, and fourth historical average submission numbers; and
determining the future task submission number based on the current task submission number and the task submission increment.
7. The method of claim 6, wherein determining the task submission increment to be generated in the future by each hierarchical task queue based on the first, second, third, and fourth historical average submission numbers comprises:
assigning a task weight value to each of the first, second, third, and fourth historical average submission numbers; and
determining the task submission increment as the sum of the first, second, third, and fourth historical average submission numbers weighted by their respective task weight values.
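Claims 6 and 7 together describe a seasonal prediction: four averages of historical submission numbers taken at the same hour of day (over the preceding hours, days, months, and years) are combined in a weighted sum to give the predicted increment, which is then added to the current submission number. A minimal sketch, with purely illustrative weights and values:

```python
def predict_future_submissions(current, averages, weights):
    """Predict the future task submission number (claims 6 and 7).

    current:  current task submission number for the queue.
    averages: the four historical average submission numbers (recent
              hours, recent days, recent months, recent years, all at
              the same hour of day).
    weights:  the task weight value assigned to each average.
    """
    # Claim 7: the task submission increment is the weighted sum of
    # the four historical average submission numbers.
    increment = sum(a * w for a, w in zip(averages, weights))
    # Claim 6: future submission number = current number + increment.
    return current + increment
```

For example, with weights (0.4, 0.3, 0.2, 0.1) favoring the most recent history, `predict_future_submissions(100, [10, 20, 30, 40], [0.4, 0.3, 0.2, 0.1])` yields 120. How the weights themselves are chosen is not specified by the claims.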
8. The method of any one of claims 5 to 7, wherein determining the target parameter value of the scheduling proportion parameter that minimizes the future cluster-blocking state function comprises:
traversing preset candidate values of the scheduling proportion parameter and, for each candidate value, computing the corresponding value of the future cluster-blocking state function; and
selecting, as the target parameter value, the candidate value that yields the minimum function value.
9. The method of claim 4, further comprising:
receiving the dequeue time, recorded by the buffer queue, of each task of each hierarchical task queue, so as to update state information of each hierarchical task queue according to the task dequeue times.
10. A task scheduling apparatus, comprising:
an acquisition module configured to acquire a current task scheduling proportion determined by a monitor for each hierarchical task queue, the current task scheduling proportion being a real-time scheduling proportion predicted for each hierarchical task queue so as to minimize a cluster blocking state;
a determining module configured to determine, from the hierarchical task queues, a plurality of tasks to be scheduled corresponding to the current task scheduling proportion; and
a scheduling module configured to insert the tasks to be scheduled into a buffer queue, so that a target cluster dequeues the tasks to be scheduled according to a caching mechanism of the buffer queue and performs a cluster operation.
11. A task scheduling apparatus, comprising:
a determining module configured to determine a current task scheduling proportion for each hierarchical task queue, the current task scheduling proportion being a real-time scheduling proportion predicted for each hierarchical task queue so as to minimize a cluster blocking state; and
a scheduling module configured to send, in response to a task scheduling request from a scheduler, the current task scheduling proportion to the scheduler, so that the scheduler schedules tasks in the hierarchical task queues into a buffer queue according to the current task scheduling proportion.
12. An electronic device, comprising: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is operating, and the machine-readable instructions, when executed by the processor, performing the task scheduling method of any one of claims 1 to 9.
13. A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, performing the task scheduling method of any one of claims 1 to 9.
CN202210987579.XA 2022-08-17 2022-08-17 Task scheduling method and device, electronic equipment and storage medium Pending CN115421905A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210987579.XA CN115421905A (en) 2022-08-17 2022-08-17 Task scheduling method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115421905A 2022-12-02

Family

ID=84198056

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210987579.XA Pending CN115421905A (en) 2022-08-17 2022-08-17 Task scheduling method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115421905A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117076555A (en) * 2023-05-08 2023-11-17 芜湖本初子午信息技术有限公司 Distributed task management system and method based on calculation
CN117076555B (en) * 2023-05-08 2024-03-22 深圳市优友网络科技有限公司 Distributed task management system and method based on calculation
CN116382925A (en) * 2023-06-05 2023-07-04 北京纷扬科技有限责任公司 Dynamic adjustment method and device for task queue and storage medium
CN116382925B (en) * 2023-06-05 2023-08-15 北京纷扬科技有限责任公司 Dynamic adjustment method and device for task queue and storage medium

Similar Documents

Publication Publication Date Title
KR102562260B1 (en) Commitment-aware scheduler
US11509596B2 (en) Throttling queue for a request scheduling and processing system
CN115421905A (en) Task scheduling method and device, electronic equipment and storage medium
CN109324900B (en) Bidding-based resource sharing for message queues in an on-demand service environment
US20130007753A1 (en) Elastic scaling for cloud-hosted batch applications
US20130031282A1 (en) Dynamic stabilization for a stream processing system
Nan et al. Optimal resource allocation for multimedia cloud in priority service scheme
US8984521B2 (en) Computer system performance by applying rate limits to control block tenancy
CN102915254A (en) Task management method and device
CN113454614A (en) System and method for resource partitioning in distributed computing
Khan Optimized hybrid service brokering for multi-cloud architectures
Jia et al. A deadline constrained preemptive scheduler using queuing systems for multi-tenancy clouds
CN112463044A (en) Method and system for ensuring tail reading delay of server side of distributed storage system
Jagadish Kumar et al. Hybrid gradient descent golden eagle optimization (HGDGEO) algorithm-based efficient heterogeneous resource scheduling for big data processing on clouds
Singh et al. A comparative study of various scheduling algorithms in cloud computing
Jaiganesh et al. Performance evaluation of cloud services with profit optimization
CN115766582A (en) Flow control method, device and system, medium and computer equipment
Patel et al. An improved approach for load balancing among heterogeneous resources in computational grids
CN113760549B (en) Pod deployment method and device
CN112988363B (en) Resource scheduling method, device, server and storage medium
Loganathan et al. Job scheduling with efficient resource monitoring in cloud datacenter
Gutierrez-Estevez et al. Multi-resource schedulable unit for adaptive application-driven unified resource management in data centers
Chouhan et al. A MLFQ Scheduling technique using M/M/c queues for grid computing
Ullah et al. Task priority-based cached-data prefetching and eviction mechanisms for performance optimization of edge computing clusters
Zheng et al. Design of a power efficient cloud computing environment: heavy traffic limits and QoS

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination