CN107423120B - Task scheduling method and device - Google Patents


Info

Publication number
CN107423120B
CN107423120B CN201710239595.XA
Authority
CN
China
Prior art keywords
task
queue
processing
priority
tasks
Prior art date
Legal status
Active
Application number
CN201710239595.XA
Other languages
Chinese (zh)
Other versions
CN107423120A (en)
Inventor
华洁
Current Assignee
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201710239595.XA
Publication of CN107423120A
Application granted
Publication of CN107423120B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/48Indexing scheme relating to G06F9/48
    • G06F2209/484Precedence

Abstract

The application provides a task scheduling method and a task scheduling device. For each task processing priority, a task queue corresponding to that priority is configured. The method includes the following steps. Task receiving: after a task to be processed is received, it is added to the task queue corresponding to its priority. Task scheduling: a selection probability is set for each task queue according to the queue's priority and the number of tasks it currently contains, where the selection probability is positively correlated with both the queue's priority and its current task count; one queue is then randomly selected according to the selection probabilities, and the tasks in the selected queue are processed. The embodiment provides a scheduling scheme that accounts for both task priority and task quantity, and can avoid the system performance problems caused by large accumulations of task data.

Description

Task scheduling method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a task scheduling method and apparatus.
Background
With the rapid development of computer and internet technology, users can easily access the internet and submit tasks to servers, and a server provides services by executing the tasks users submit. An internet product is often used by a large number of users during the same time period. When a user sends a task request instruction while using the product, the server generally submits the corresponding task to an abstract queue where tasks accumulate, and then selects specific tasks from the queue and triggers their execution according to some scheduling strategy.
The most widely used scheduling strategies in the related art are FIFO (First In First Out) and the fair scheduling algorithm.
FIFO is implemented on a single queue, in the task dimension. Tasks from all users are submitted to the same task queue and sorted by priority and submission time. During scheduling, tasks with higher priority are executed first; among tasks of the same priority, the one with the earlier submission time is executed first. The FIFO policy executes high-priority tasks preferentially, but under it a lower-priority task may never be effectively scheduled, causing tasks to accumulate.
The fair scheduling policy is implemented on multiple queues, in the user dimension. It establishes a separate queue for each user, selects any user's queue with equal probability, and takes a task from the chosen queue according to the FIFO principle. Since each user may submit multiple tasks with different priorities, some users' high-priority tasks may not be scheduled in time.
In summary, how to schedule all tasks reasonably and avoid task accumulation during processing is a technical problem that urgently needs to be solved.
Disclosure of Invention
In view of this, the present application provides a task scheduling method and apparatus.
According to a first aspect of the embodiments of the present application, there is provided a task scheduling method, where for each task processing priority, a task queue corresponding to the priority is configured, respectively, and the method includes:
a task receiving module: after receiving a task to be processed, adding the task to be processed into a task queue corresponding to the priority of the task to be processed according to the priority of the task to be processed;
a task scheduling module:
respectively setting a selection probability for each task queue according to the priority of each task queue and the number of tasks currently contained in the task queue, wherein the selection probability of the task queue is positively correlated with the priority of the task queue, and the selection probability of the task queue is positively correlated with the number of tasks currently contained in the task queue;
and randomly selecting one queue according to the selection probability of each task queue, and processing the tasks in the selected queue.
According to a second aspect of the embodiments of the present application, there is provided a task scheduling apparatus, which respectively configures a task queue corresponding to each task processing priority, the apparatus including:
an enqueue control module to: after receiving a task to be processed, adding the task to be processed into a task queue corresponding to the priority of the task to be processed according to the priority of the task to be processed;
a dequeue control module to: respectively setting a selection probability for each task queue according to the priority of each task queue and the number of tasks currently contained in the task queue, wherein the selection probability of the task queue is positively correlated with the priority of the task queue, and the selection probability of the task queue is positively correlated with the number of tasks currently contained in the task queue; randomly selecting one queue according to the selection probability of each task queue;
a task execution module to: and processing the tasks in the selected queue.
According to a third aspect of the embodiments of the present application, there is provided a task scheduling apparatus, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
after receiving a task to be processed, adding the task to be processed into a task queue corresponding to the priority of the task to be processed according to the priority of the task to be processed;
respectively setting a selection probability for each task queue according to the priority of each task queue and the number of tasks currently contained in the task queue, wherein the selection probability of the task queue is positively correlated with the priority of the task queue, and the selection probability of the task queue is positively correlated with the number of tasks currently contained in the task queue;
and randomly selecting one queue according to the selection probability of each task queue, and processing the tasks in the selected queue.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
in the embodiments of the present application, task queues are divided by task processing priority, and tasks of the same priority are added to the same queue, so the number of tasks at each priority can be monitored. During processing, a selection probability is set for each task queue according to its priority and the number of tasks it currently contains; one queue is then randomly selected according to these probabilities, and the tasks in the selected queue are processed. This yields a scheduling scheme that considers both a task's priority and the number of tasks in its queue: the higher a queue's priority, and the more tasks it holds, the more likely it is to be selected. Thus, when tasks build up in a low-priority queue, those tasks can still be executed in time, which solves the unreasonable task scheduling of the related art.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flowchart illustrating a task scheduling method according to an exemplary embodiment of the present application.
Fig. 2 is an application scenario diagram illustrating a task scheduling method according to an exemplary embodiment of the present application.
Fig. 3 is a block diagram of a server where a task scheduling device according to an exemplary embodiment of the present application is located.
FIG. 4 is a block diagram of a task scheduler shown in accordance with an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms, which are only used to distinguish one category of information from another. For example, first information may also be called second information and, similarly, second information may also be called first information without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "when," "upon," or "in response to determining."
In the fields of computing and the internet, large numbers of tasks often need to be handled. To address the unreasonable task scheduling of the related art, the embodiments of the present application provide a task scheduling scheme that both schedules high-priority tasks preferentially and dispatches low-success-rate user tasks reasonably, so that all tasks are scheduled with some probability and the extreme case of task accumulation does not arise.
The embodiments of the present application are described in detail below. Fig. 1 is a flowchart of a task scheduling method according to an exemplary embodiment of the present application, comprising the following steps 101 to 103:
In step 101, the task receiving module, after receiving a task to be processed, adds it to the task queue corresponding to its priority;
In step 102, the task scheduling module sets a selection probability for each task queue according to the queue's priority and the number of tasks it currently contains, where the selection probability is positively correlated with both the queue's priority and its current task count;
In step 103, the task scheduling module randomly selects one queue according to the selection probabilities and processes the tasks in the selected queue.
In this embodiment, multiple task processing priorities may be preset according to actual requirements, and task queues corresponding to each priority may be configured respectively. When the task to be processed is submitted to the task receiving module, the task receiving module may determine the priority of the task according to the relevant information of the task (e.g., task identifier, object generating the task, task generation time or processing object of the task, etc.), and add the task to be processed to the task queue corresponding to the priority of the task. It will be appreciated that tasks of the same priority level will be added to the same task queue.
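The enqueue step described above can be sketched as follows; the class and method names, and the use of one FIFO deque per priority, are illustrative assumptions rather than details from the patent:

```python
from collections import deque

class TaskReceiver:
    """Adds each incoming task to the queue matching its priority."""

    def __init__(self, priorities):
        # one FIFO task queue per configured priority level
        self.queues = {p: deque() for p in priorities}

    def submit(self, task, priority):
        # tasks of the same priority share one queue, so the number
        # of tasks at each priority level can be monitored directly
        self.queues[priority].append(task)

receiver = TaskReceiver(priorities=[1, 2, 3])
receiver.submit("pay-order", priority=1)
receiver.submit("update-profile", priority=3)
```

In practice the priority would first be derived from the task's metadata (identifier, originating object, generation time, etc.) before `submit` is called.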
The kinds and number of priorities, and how the priority of a given task is determined, can be configured flexibly according to the scenario in which the scheduling scheme is applied. Take a shopping platform as an example: the platform cares most about the stability of the trading process, so trade-related tasks should be given higher priority, while tasks such as updating user information, publishing comment messages, and updating order logistics information can be given lower priority than trading tasks. Similarly, for a third-party payment platform, payment-related tasks should be given higher priority, while tasks such as updating user information, publishing service-window messages, and updating revenue information can be given lower priority than payment tasks. Once task priorities are decided, the correspondence between tasks and priorities can be recorded in various ways, so that a task can be matched to its queue quickly when it is later enqueued.
Task scheduling is performed by the task scheduling module while the task receiving module carries out the processing above. The task receiving module may receive tasks in real time; the task scheduling module may process in real time, be triggered automatically on a period or on certain events, or be triggered manually, depending on the application. Since a large number of tasks may be submitted at the same time, if the task scheduling module always processed the higher-priority tasks first, low-priority tasks could fail to be scheduled effectively and accumulate. The scheduling policy of this embodiment accounts for both priority and task count. Specifically, so that high-priority tasks are scheduled preferentially while queues holding many tasks are also considered, a selection probability is set for each task queue according to the queue's priority and its number of tasks. Because the selection probability is positively correlated with both the queue's priority and its current task count, the higher the priority and the more tasks a queue holds, the higher its selection probability.
The selection probability may be determined by quantizing the priority and combining it with the number of tasks in each queue; the specific quantization can be configured flexibly in practice.
In an optional implementation, to set the selection probabilities quickly, setting a selection probability for each task queue according to its priority and the number of tasks it currently contains includes:

calculating a weighted task count for each task queue, where the weight is determined by the queue's priority; and

calculating, as each queue's selection probability, the ratio of that queue's weighted task count to the sum of the weighted task counts of all queues.
Specifically, the above process can be expressed by the following formula:

$$FW_i' = \frac{FW_i}{\sum_{j} FW_j}$$

where $FW_i'$ is the selection probability of the i-th task queue and $FW_i$ is the weighted task count of the i-th task queue (i.e., the product of the queue's weight and its number of tasks).
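As a sketch, the formula can be computed as follows (the function and variable names are ours, not from the patent):

```python
def selection_probabilities(weights, counts):
    # FW_i: weight of queue i times its current task count
    fw = [w * n for w, n in zip(weights, counts)]
    total = sum(fw)
    # FW_i' = FW_i / sum_j FW_j (every probability is 0 if all queues are empty)
    return [x / total if total else 0.0 for x in fw]

# higher priority (weight) and more tasks both raise a queue's probability
probs = selection_probabilities(weights=[3, 2, 1], counts=[1, 2, 4])
```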
After the selection probability of each task queue is set, one queue can be randomly selected according to the probabilities, and the tasks in the selected queue are processed. Because the selected queue contains multiple tasks, in practice they can be processed according to the first-in-first-out principle.
In practice, a task may fail when processed. In the related art, both the FIFO policy and the fair scheduling policy re-execute a failed task immediately; if the task keeps failing, it blocks the other tasks behind it and causes them to accumulate. To address this, in this embodiment, processing the tasks in the selected queue further includes:
and if the task processing fails, removing the task with the failed processing.
And after the task failed in the processing is removed, if a preset retry time length is reached, submitting the task failed in the processing to the task receiving module, and adding the task failed in the processing to a task queue again by the task receiving module.
In this embodiment, after a task is executed, the result is either success or failure. On success, the task is finished. On failure, the failed task is temporarily removed and a retry duration is set for it, so that it waits for the next round of task scheduling.
If a failed task were retried immediately, it would very likely fail again; therefore a reasonable retry duration is set for the failed task, and it is not re-executed until that duration has elapsed, so that other tasks with high success rates can be scheduled effectively.
The retry duration can be configured flexibly in practice. For example, a uniform duration may be set for all tasks (e.g., 3 minutes for every task); different retry durations may be set for different priorities (in some examples the retry duration is negatively correlated with priority, so a higher-priority task waits less); or the retry duration may be determined by the failed task's failure count (in some examples the retry duration is positively correlated with the failure count, so a task that has failed more often waits longer). This embodiment does not specifically limit the choice.
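The retry-duration options above might be sketched like this; the concrete delays, the policy names, and the convention that priority 1 is highest are all illustrative assumptions, not values from the patent:

```python
def retry_delay_seconds(priority, failures, policy="by_failures"):
    # uniform: the same delay for every task (assumed 3 minutes)
    if policy == "uniform":
        return 180
    # by_priority: delay negatively correlated with priority
    # (priority 1 = highest here, so it waits the least)
    if policy == "by_priority":
        return 60 * priority
    # by_failures: delay positively correlated with the failure count,
    # so repeatedly failing tasks back off longer
    return 180 * failures
```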
Once a failed task has been removed and its preset retry duration has elapsed, the task is executed again: it is resubmitted to the task receiving module, which re-enqueues it. When the failed task is added back to a task queue, its priority needs to be determined; in some examples it is re-added to the queue corresponding to its original priority.
In other examples, since a task that has already failed may not have a high success rate, the failed task's priority may be lowered before it is resubmitted to the task receiving module, to keep repeated failures from hurting processing efficiency. How far to lower it can be configured as needed: the priority may be reduced according to the failure count, with more failures meaning a larger reduction; certain failure-count bands may map to fixed lower priorities (for example, fewer than 3 failures to one priority and more than 5 to another); or the priority may drop one level per failure. All of these can be configured flexibly in practice.
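One of the demotion rules mentioned above (drop one priority level per failure) can be sketched as follows; the clamping at a lowest level and the convention that a larger number means a lower priority are assumptions for illustration:

```python
def demoted_priority(original, failures, lowest=5):
    # drop one priority level per failure (larger number = lower
    # priority), never going below the lowest configured priority
    return min(original + failures, lowest)
```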
In practice, a task may still fail after many executions. For such cases, in this embodiment, if a failed task's failure count reaches a preset discard threshold, the task is discarded. The preset discard threshold may be, for example, 5, 8, or 10. Discarding a failed task may mean removing and storing it rather than re-enqueueing it for now; a scheduler can then inspect the task and re-trigger its execution if needed.
In this way, by defining priorities for the different task queues, monitoring each task's retry duration, and adjusting the priority of failed tasks, system resources are fully utilized: user tasks with high success rates are dispatched preferentially with high probability, and user tasks with low success rates are still dispatched reasonably. Meanwhile, the preset discard threshold prevents a system crash caused by accumulating too many low-success-rate tasks. The embodiments of the present application thus resolve a conflict that traditional scheduling strategies cannot: a large number of failing requests among high-priority tasks sharply degrading scheduling performance. The more often a task fails, the lower-priority the queue it is moved to, so high-success-rate, high-priority tasks can be scheduled quickly.
Next, the task scheduling scheme of the present embodiment will be described again with reference to a specific scenario. Fig. 2 is an application scenario diagram of a task scheduling method according to an exemplary embodiment of the present application, where the task scheduling method of the present embodiment is applicable to a shopping platform side, and the shopping platform side provides a client for a user, and the user can obtain a shopping service provided by the shopping platform side through the client.
The shopping platform operates a server in which a scheduling system is deployed. In practice, a large number of clients may submit tasks simultaneously. Tasks submitted by clients reach the scheduling system, which invokes an enqueue controller; the enqueue controller pushes the submitted tasks into different task queues according to a preset allocation rule (each task to be processed is added to the task queue corresponding to its priority).
The dequeue controller selects a task queue I according to the scheduling strategy and notifies the task executor to process queue I.

After acquiring queue I, the task executor takes tasks from it according to the FIFO principle and executes them. Depending on the execution result, there are two paths: if a task succeeds, it is simply removed; if it fails, its retry duration and priority are set according to the retry strategy, and it waits for the next round of task scheduling.
Retry policy: when a task fails, retrying it immediately would very likely fail again, so a reasonable retry duration is set (for example, 3 minutes after 1 failure, 10 minutes after 2 failures), and the task is not re-executed before its retry duration elapses. Meanwhile, the current retry queue is scanned in real time based on the retry strategy: if a task's retry duration has arrived, the task is resubmitted to the enqueue controller; if a task's retry count exceeds the preset discard threshold, the task is discarded.
And (3) scheduling policy specification:
assuming that the scheduling system processes priorities for m kinds of tasks, task queues corresponding to the priorities are respectively configured. According to the priority, each task queue can be correspondingly configured with different weights W1,W2,…,Wi,…,WmWherein W isiRefers to the weight of the ith task queue. The weight indicates the priority level of the queue, and the specific value of the queue is determined according to the priority level and can be flexibly configured in practical application. The greater the weight, the greater the probability that the task queue will be selected.
However, the selection probability should not be determined by the weight alone; the number of tasks in each task queue should also be considered.
Assume the numbers of tasks in the task queues are $L_1, L_2, \ldots, L_i, \ldots, L_m$.
The selection probability of each task queue is calculated as follows (for comparability, the example below normalizes the quantities when computing the probability; in practice this can be handled flexibly):

normalize the weight and the task count of each task queue:

$$W_i' = \frac{W_i}{\sum_{j=1}^{m} W_j}$$

$$L_i' = \frac{L_i}{\sum_{j=1}^{m} L_j}$$

then compute the normalized selection probability:

$$FW_i' = \frac{FW_i}{\sum_{j=1}^{m} FW_j}$$

where $W_i'$ is the normalized weight of the i-th task queue, $L_i'$ is its normalized task count, $FW_i = W_i' \cdot L_i'$, and $FW_i'$ is the selection probability of the i-th task queue.
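The three formulas can be combined into one routine, sketched below (names are ours). Note that the normalization cancels out: the result equals $W_i L_i / \sum_j W_j L_j$.

```python
def normalized_selection_probabilities(W, L):
    # W_i' = W_i / sum(W);  L_i' = L_i / sum(L)
    Wn = [w / sum(W) for w in W]
    Ln = [l / sum(L) for l in L]
    # FW_i = W_i' * L_i';  FW_i' = FW_i / sum(FW)
    FW = [w * l for w, l in zip(Wn, Ln)]
    s = sum(FW)
    return [f / s for f in FW]

probs = normalized_selection_probabilities(W=[3, 2, 1], L=[1, 2, 4])
```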
The greater the probability of selection, the greater the probability that the task queue will be selected when it is randomly selected.
Assume there are 5 task queues whose selection probabilities, calculated by the above method, are:
task queue 1: 0.2;
task queue 2: 0.1;
task queue 3: 0.15;
task queue 4: 0.25;
task queue 5: 0.3.
the principle of randomly selecting one task queue from the 5 task queues according to the selected probability is as follows:
assume Table 1 is a grid of 20 identical cells:

Table 1: a 4 × 5 grid of 20 empty cells

According to the selection probabilities, task queue 1 occupies 4 of the 20 cells (0.2 × 20), task queue 2 occupies 2 cells, and so on, as shown in Table 2 (which cells each queue occupies can be allocated at random; Table 2 is just one example):
| Task queue 1 | Task queue 3 | Task queue 3 | Task queue 4 | Task queue 4 |
| Task queue 1 | Task queue 2 | Task queue 3 | Task queue 4 | Task queue 4 |
| Task queue 1 | Task queue 2 | Task queue 5 | Task queue 5 | Task queue 4 |
| Task queue 1 | Task queue 5 | Task queue 5 | Task queue 5 | Task queue 5 |
Then a cell is randomly selected from Table 2; whichever task queue occupies that cell is the selected queue.
In practice, there can be multiple concrete ways to randomly select a task queue according to the selection probabilities. Taking Table 2 as an example: after the 20 cells of the table are randomly allocated to the task queues in proportion to their selection probabilities, a random number from 1 to 20 is generated as a cell index, and whichever task queue occupies the indicated cell is the selected queue.
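The grid method can be sketched as follows: allocate cells in proportion to each probability, then draw a cell uniformly. The fixed grid of 20 cells follows the example above; the function name is ours, and the sketch assumes each probability times the cell count is (close to) a whole number.

```python
import random

def pick_by_grid(probs, cells=20, rng=random):
    # build a flat list of cell owners: queue i gets probs[i] * cells cells
    grid = []
    for queue_idx, p in enumerate(probs, start=1):
        grid += [queue_idx] * round(p * cells)
    # draw one cell uniformly; its owner is the selected queue
    return grid[rng.randrange(len(grid))]

probs = [0.2, 0.1, 0.15, 0.25, 0.3]
chosen = pick_by_grid(probs)
```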
In other alternative embodiments, it may also be:
arranging the task queues in sequence or randomly, and assuming that the sequence of the randomly arranged task queues is as follows: task queue 3, task queue 2, task queue 5, task queue 1, and task queue 4.
As shown by the probability calculation above, the selection probabilities of the queues sum to 1, so a random number between 0 and 1 is generated.
A task queue is then selected via the random number: following the queue order, accumulate the selection probabilities starting from the first queue; the first queue at which the cumulative sum becomes greater than or equal to the random number is the selected queue.
For example, assuming that the generated random number is 0.6, the probability of selection of task queue 3 is 0.15 plus the probability of selection of task queue 2 is 0.1, and the sum of the two is less than 0.6, in the aforementioned task queue order; continuing to calculate, wherein the sum of the selected probabilities of the task queues 3, 2 and 5 is 0.55, and the sum of the probabilities of the task queues 3, 2 and 5 is less than 0.6; continuing the calculation, the sum of the selected probabilities of the task queues 3, 2, 5 and 1 is 0.75, and the sum of the four probabilities is greater than 0.6, so that the task queue 1 is selected.
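The cumulative-sum selection in this example can be sketched as follows; function and variable names are assumptions of this sketch, not taken from the original.

```python
import random

def select_queue(ordered_probs, rnd=None):
    """Accumulate selection probabilities in queue order; the first queue
    whose running sum reaches the random number is the selected one."""
    if rnd is None:
        rnd = random.random()
    cumulative = 0.0
    for queue_id, prob in ordered_probs:
        cumulative += prob
        if cumulative >= rnd:
            return queue_id
    return ordered_probs[-1][0]  # guard against floating-point round-off

# The order and probabilities from the example above.
order = [(3, 0.15), (2, 0.10), (5, 0.30), (1, 0.20), (4, 0.25)]
# With random number 0.6: 0.15 + 0.10 + 0.30 = 0.55 < 0.6,
# and adding queue 1 gives 0.75 >= 0.6, so queue 1 is selected.
```

Note that queues with a higher selection probability cover a wider interval of [0, 1], so the uniform random number lands on them more often.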
Besides the two embodiments above, other implementations can be designed flexibly as needed in practical applications; this embodiment does not limit them.
From the above analysis, it can be seen that the priorities of different task queues differ: to raise the priority of a certain task queue, it suffices to increase its weighting factor W_i, and the greater W_i is, the greater the probability that the task queue is scheduled. The number of tasks accumulated in a queue is also taken into account: the more tasks a particular queue holds, the higher its calculated selection probability. Tasks with low priority (including tasks whose priority was lowered because of repeated failures) are placed in a task queue with a lower weight, yet they still obtain some opportunity to execute, which avoids task accumulation. If a task queue contains no tasks, its calculated selection probability is 0 and the queue is not scheduled.
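The relationship between the weighting factor W_i, the task count N_i, and the selection probability can be illustrated with a small sketch. The formula N_i × W_i normalized by the sum over all queues is one natural reading of the description ("weighted value of the task quantity" over "the sum of the weighted values"); all names here are this sketch's own.

```python
def selection_probabilities(queues):
    """Compute each queue's selection probability: the weighted value of
    the task quantity (N_i * W_i) divided by the sum of the weighted
    values of all queues."""
    weighted = {qid: n * w for qid, (n, w) in queues.items()}
    total = sum(weighted.values())
    if total == 0:
        return {qid: 0.0 for qid in queues}  # no tasks anywhere
    return {qid: v / total for qid, v in weighted.items()}

# queue id -> (task count N_i, weighting factor W_i)
queues = {1: (10, 2), 2: (5, 1), 3: (0, 3)}
probs = selection_probabilities(queues)
# Queue 3 holds no tasks, so its probability is 0 and it is never scheduled,
# matching the observation in the paragraph above.
```

The probability is positively correlated with both the priority weight and the task count, as the embodiment requires.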
Corresponding to the embodiment of the task scheduling method, the application also provides an embodiment of a task scheduling device.
The embodiment of the task scheduling device can be applied to a server. The device embodiments may be implemented by software, by hardware, or by a combination of the two. Taking a software implementation as an example, the device is formed, as a logical device, by the processor of the server where it is located reading the corresponding computer program instructions from nonvolatile memory into memory and running them. In terms of hardware, fig. 3 shows a hardware structure diagram of a server where the task scheduling device of the present application is located; besides the processor 310, the memory 330, the network interface 320, and the nonvolatile memory 340 shown in fig. 3, the server where the device 331 is located may also include other hardware according to its actual functions, which is not described again here.
As shown in fig. 4, which is a block diagram of a task scheduling apparatus according to an exemplary embodiment of the present application, a task queue corresponding to each task processing priority is configured for each task processing priority, and the apparatus includes:
an enqueue control module 41 configured to: after receiving the task to be processed, adding the task to be processed into a task queue corresponding to the priority of the task to be processed according to the priority of the task to be processed.
A dequeue control module 42 for: respectively setting a selection probability for each task queue according to the priority of each task queue and the number of tasks currently contained in the task queue, wherein the selection probability of the task queue is positively correlated with the priority of the task queue, and the selection probability of the task queue is positively correlated with the number of tasks currently contained in the task queue; and randomly selecting one queue according to the selection probability of each task queue.
A task execution module 43, configured to: and processing the tasks in the selected queue.
In an optional implementation manner, the dequeue control module 42 is further configured to:
calculating a weighted value of the task quantity of each task queue, wherein the weighted value of the task quantity is determined by the priority of the task queue; and
calculating the proportion of the weighted value of the task quantity of each task queue to the sum of the weighted values of all the task queues, as the selection probability of each task queue.
In an optional implementation manner, the task execution module 43 is further configured to:
processing the tasks in the selected queue according to the first-in, first-out principle.
In an alternative implementation, the apparatus further includes a retry processing module (not shown in fig. 4) configured to:
if the processing of a task fails, removing the task that failed; and
after the failed task is removed, if a preset retry duration is reached, submitting the failed task to the enqueue control module, which adds it to a task queue again.
In an optional implementation manner, the retry processing module is configured to:
before submitting the failed task to the enqueue control module, reducing the priority of the failed task.
In an optional implementation manner, the preset retry duration is determined according to the number of failures and/or the corresponding priority of the failed task.
In an optional implementation manner, the retry processing module is further configured to:
if the number of failures of a failed task reaches a preset discard threshold, discarding the task.
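The retry behavior described by this module (remove on failure, wait out a retry duration that grows with the failure count, lower the priority, re-enqueue, and discard after a threshold) might be sketched as follows. All class, field, and parameter names are assumptions of this sketch, not the patent's.

```python
import heapq

class RetryProcessor:
    """Sketch of the retry processing module described above."""

    def __init__(self, enqueue, base_delay=1.0, discard_threshold=3):
        self.enqueue = enqueue              # enqueue-control callback
        self.base_delay = base_delay        # base retry duration (seconds)
        self.discard_threshold = discard_threshold
        self.pending = []                   # min-heap of (ready_time, seq, task)
        self.seq = 0                        # tie-breaker so tasks never compare

    def on_failure(self, task, now):
        # The failed task has been removed from its queue; decide its fate.
        task["failures"] += 1
        if task["failures"] >= self.discard_threshold:
            return "discarded"              # too many failures: drop the task
        task["priority"] = max(0, task["priority"] - 1)  # lower its priority
        delay = self.base_delay * task["failures"]       # grows with failures
        self.seq += 1
        heapq.heappush(self.pending, (now + delay, self.seq, task))
        return "scheduled"

    def resubmit_ready(self, now):
        # Once the retry duration has elapsed, hand the task back to the
        # enqueue control, which adds it to a task queue again.
        while self.pending and self.pending[0][0] <= now:
            _, _, task = heapq.heappop(self.pending)
            self.enqueue(task)
```

A scheduler loop would call `resubmit_ready` periodically; lowering the priority routes the retried task into a lower-weight queue, as the optional implementation above prescribes.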
Correspondingly, the present application also provides a task scheduling device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
after receiving a task to be processed, adding the task to be processed into the task queue corresponding to its priority, according to the priority of the task to be processed;
respectively setting a selection probability for each task queue according to the priority of each task queue and the number of tasks it currently contains, wherein the selection probability of a task queue is positively correlated with both its priority and the number of tasks it currently contains; and
randomly selecting one queue according to the selection probability of each task queue, and processing the tasks in the selected queue.
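Putting the configured steps together, the processor's flow could look like this minimal end-to-end sketch; the class and method names are assumptions, and the probability formula follows the weighted-value reading used throughout this description.

```python
import random
from collections import deque

class TaskScheduler:
    """Sketch of the whole flow: enqueue by priority, weight each queue by
    priority and task count, pick a queue at random, process FIFO."""

    def __init__(self, weights):
        # priority -> weighting factor W_i; one FIFO queue per priority
        self.weights = weights
        self.queues = {p: deque() for p in weights}

    def submit(self, task, priority):
        # enqueue control: add the task to the queue of its priority
        self.queues[priority].append(task)

    def next_task(self):
        # dequeue control: weighted task counts, normalized to probabilities
        weighted = {p: len(q) * self.weights[p] for p, q in self.queues.items()}
        total = sum(weighted.values())
        if total == 0:
            return None                     # every queue is empty
        rnd, cumulative = random.random(), 0.0
        for p, w in weighted.items():
            if w == 0:
                continue                    # empty queues are never selected
            cumulative += w / total
            if cumulative >= rnd:
                return self.queues[p].popleft()  # FIFO within the queue
        # floating-point round-off guard: take any non-empty queue
        p = next(p for p, w in weighted.items() if w > 0)
        return self.queues[p].popleft()
```

A task execution module would simply loop on `next_task()` and run each returned task, invoking the retry handling on failure.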
For the implementation process of the functions and actions of the modules in the task scheduling device, see the implementation process of the corresponding steps in the task scheduling method; details are not repeated here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (13)

1. A task scheduling method is characterized in that a task queue corresponding to each task processing priority is configured for each task processing priority, and the method comprises the following steps:
a task receiving module: after receiving a task to be processed, adding the task to be processed into a task queue corresponding to the priority of the task to be processed according to the priority of the task to be processed;
a task scheduling module:
calculating a weighted value of the task quantity of each task queue, wherein the weighted value of the task quantity is determined by the priority of the task queue; calculating the proportion of the weighted value of the task quantity of each task queue to the sum of the weighted values of all the task queues as the selection probability of each task queue;
and randomly selecting one queue according to the selection probability of each task queue, and processing the tasks in the selected queue.
2. The method of claim 1, wherein processing the task in the selected queue comprises:
and processing the tasks in the selected queue according to a first-in first-out principle.
3. The method of claim 1, wherein in processing the tasks in the selected queue, the method further comprises:
if the task processing fails, removing the task with the failed processing;
and after the task failed in the processing is removed, if a preset retry time length is reached, submitting the task failed in the processing to the task receiving module, and adding the task failed in the processing to a task queue again by the task receiving module.
4. The method of claim 3, wherein prior to submitting the processing-failed task to the task receiving module, the method further comprises:
and reducing the priority of the task with the processing failure.
5. The method according to claim 3, wherein the preset retry duration is determined according to the failure times and/or corresponding priorities of the failed tasks.
6. The method of claim 3, further comprising:
and if the failure times of the processing failed task reach a preset discarding threshold value, discarding the processing failed task.
7. A task scheduling apparatus, wherein a task queue corresponding to each task processing priority is configured for each task processing priority, the apparatus comprising:
an enqueue control module to: after receiving a task to be processed, adding the task to be processed into a task queue corresponding to the priority of the task to be processed according to the priority of the task to be processed;
a dequeue control module to:
calculating a weighted value of the task quantity of each task queue, wherein the weighted value of the task quantity is determined by the priority of the task queue; calculating the proportion of the weighted value of the task quantity of each task queue to the sum of the weighted values of all the task queues as the selection probability of each task queue; randomly selecting one queue according to the selection probability of each task queue;
a task execution module to: and processing the tasks in the selected queue.
8. The apparatus of claim 7, wherein the task execution module is further configured to:
and processing the tasks in the selected queue according to a first-in first-out principle.
9. The apparatus of claim 7, further comprising a retry processing module to:
if the task processing fails, removing the task with the failed processing;
after the task failed in the processing is removed, if a preset retry time length is reached, submitting the task failed in the processing to the enqueue control module, and adding the task failed in the processing to a task queue again by the enqueue control module.
10. The apparatus of claim 9, wherein the retry processing module is configured to:
and before submitting the task failed in processing to the enqueue control module, reducing the priority of the task failed in processing.
11. The apparatus according to claim 9, wherein the predetermined retry duration is determined according to the number of failures of the failed task and/or the corresponding priority.
12. The apparatus of claim 9, wherein the retry processing module is further configured to:
and if the failure times of the processing failed task reach a preset discarding threshold value, discarding the processing failed task.
13. A task scheduling apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
after receiving a task to be processed, adding the task to be processed into a task queue corresponding to the priority of the task to be processed according to the priority of the task to be processed;
calculating a weighted value of the task quantity of each task queue, wherein the weighted value of the task quantity is determined by the priority of the task queue; calculating the proportion of the weighted value of the task quantity of each task queue to the sum of the weighted values of all the task queues as the selection probability of each task queue;
and randomly selecting one queue according to the selection probability of each task queue, and processing the tasks in the selected queue.
CN201710239595.XA 2017-04-13 2017-04-13 Task scheduling method and device Active CN107423120B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710239595.XA CN107423120B (en) 2017-04-13 2017-04-13 Task scheduling method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710239595.XA CN107423120B (en) 2017-04-13 2017-04-13 Task scheduling method and device

Publications (2)

Publication Number Publication Date
CN107423120A CN107423120A (en) 2017-12-01
CN107423120B true CN107423120B (en) 2020-06-30

Family

ID=60424010

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710239595.XA Active CN107423120B (en) 2017-04-13 2017-04-13 Task scheduling method and device

Country Status (1)

Country Link
CN (1) CN107423120B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197314A (en) * 2018-02-27 2019-09-03 北京京东尚科信息技术有限公司 A kind of dispatching method and device
CN108519920B (en) * 2018-03-14 2020-12-01 口碑(上海)信息技术有限公司 Scheduling retry method and device
CN108829521B (en) * 2018-06-13 2022-05-13 平安科技(深圳)有限公司 Task processing method and device, computer equipment and storage medium
CN109343941B (en) * 2018-08-14 2023-02-21 创新先进技术有限公司 Task processing method and device, electronic equipment and computer readable storage medium
CN110968406B (en) * 2018-09-30 2023-04-07 北京国双科技有限公司 Method, device, storage medium and processor for processing task
CN111026514B (en) * 2018-10-10 2022-12-02 上海寒武纪信息科技有限公司 Task scheduling method
JP7286942B2 (en) * 2018-10-24 2023-06-06 コニカミノルタ株式会社 Image forming system and program
CN109617988B (en) * 2018-12-28 2022-04-29 平安科技(深圳)有限公司 Request retry method and related product
CN109905479B (en) * 2019-03-04 2022-06-07 腾讯科技(深圳)有限公司 File transmission method and device
CN112151131A (en) * 2019-06-28 2020-12-29 深圳迈瑞生物医疗电子股份有限公司 Sample scheduling method, and setting method and system of sample priority
CN110287003B (en) * 2019-06-28 2020-04-21 北京九章云极科技有限公司 Resource management method and management system
CN111831686A (en) * 2019-09-17 2020-10-27 北京嘀嘀无限科技发展有限公司 Optimization method, device and system of sequencing model, electronic equipment and storage medium
CN110728520B (en) * 2019-09-27 2023-02-17 支付宝(杭州)信息技术有限公司 Processing method and device for refusal payment application and server
CN110674078B (en) * 2019-10-08 2020-11-10 北京航空航天大学 Digital twin system complex task heterogeneous multi-core parallel efficient solving method and system
CN111061570B (en) * 2019-11-26 2023-03-31 深圳云天励飞技术有限公司 Image calculation request processing method and device and terminal equipment
CN111209112A (en) * 2019-12-31 2020-05-29 杭州迪普科技股份有限公司 Exception handling method and device
CN111400010A (en) * 2020-03-18 2020-07-10 中国建设银行股份有限公司 Task scheduling method and device
CN111292025B (en) * 2020-04-01 2022-06-07 成都卡普数据服务有限责任公司 Power transmission line online inspection operation scheduling method
CN111930486B (en) * 2020-07-30 2023-11-17 中国工商银行股份有限公司 Task selection data processing method, device, equipment and storage medium
CN111949424A (en) * 2020-09-18 2020-11-17 成都精灵云科技有限公司 Method for realizing queue for processing declarative events
CN113590030B (en) * 2021-06-30 2023-12-26 济南浪潮数据技术有限公司 Queue scheduling method, system, equipment and medium
CN113590289A (en) * 2021-07-30 2021-11-02 中科曙光国际信息产业有限公司 Job scheduling method, system, device, computer equipment and storage medium
CN114664014A (en) * 2022-03-28 2022-06-24 中国银行股份有限公司 User queuing method and device for bank outlets
CN116225665B (en) * 2023-05-04 2023-08-08 井芯微电子技术(天津)有限公司 Queue scheduling method and device
CN116483549B (en) * 2023-06-25 2023-09-19 清华大学 Task scheduling method and device for intelligent camera system, camera and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102752136A (en) * 2012-06-29 2012-10-24 广东东研网络科技有限公司 Method for operating and scheduling communication equipment
CN102915254A (en) * 2011-08-02 2013-02-06 中兴通讯股份有限公司 Task management method and device
CN103106116A (en) * 2012-12-31 2013-05-15 国家计算机网络与信息安全管理中心 Dynamic resource management method based on sliding window
CN103514037A (en) * 2012-06-21 2014-01-15 中兴通讯股份有限公司 Task scheduling processing method and device
CN105511950A (en) * 2015-12-10 2016-04-20 天津海量信息技术有限公司 Dispatching management method for task queue priority of large data set

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102915254A (en) * 2011-08-02 2013-02-06 中兴通讯股份有限公司 Task management method and device
CN103514037A (en) * 2012-06-21 2014-01-15 中兴通讯股份有限公司 Task scheduling processing method and device
CN102752136A (en) * 2012-06-29 2012-10-24 广东东研网络科技有限公司 Method for operating and scheduling communication equipment
CN103106116A (en) * 2012-12-31 2013-05-15 国家计算机网络与信息安全管理中心 Dynamic resource management method based on sliding window
CN105511950A (en) * 2015-12-10 2016-04-20 天津海量信息技术有限公司 Dispatching management method for task queue priority of large data set

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Scheduling and routing algorithm based on comprehensive priority calculation in high-speed switching networks; Yang Fan et al.; Journal of Xidian University (Natural Science Edition); 20071210; Vol. 34, No. 5; paragraph 2 and Section 2 of the text *

Also Published As

Publication number Publication date
CN107423120A (en) 2017-12-01

Similar Documents

Publication Publication Date Title
CN107423120B (en) Task scheduling method and device
CN112162865B (en) Scheduling method and device of server and server
JP5041805B2 (en) Service quality controller and service quality method for data storage system
US7627618B2 (en) System for managing data collection processes
US11159649B2 (en) Systems and methods of rate limiting for a representational state transfer (REST) application programming interface (API)
CN105159782A (en) Cloud host based method and apparatus for allocating resources to orders
WO2021159638A1 (en) Method, apparatus and device for scheduling cluster queue resources, and storage medium
US10884667B2 (en) Storage controller and IO request processing method
US9292336B1 (en) Systems and methods providing optimization data
KR20120082598A (en) Cost based scheduling algorithm for multiple workflow in cloud computing and system of the same
CN105022668B (en) Job scheduling method and system
US9973306B2 (en) Freshness-sensitive message delivery
CN107430526B (en) Method and node for scheduling data processing
CN111124254B (en) Method, electronic device and program product for scheduling memory space reclamation requests
CN113312160A (en) Techniques for behavioral pairing in a task distribution system
Denninnart et al. Improving robustness of heterogeneous serverless computing systems via probabilistic task pruning
CN111666179A (en) Intelligent replication system and server for multi-point data disaster tolerance
CN113391911B (en) Dynamic scheduling method, device and equipment for big data resources
CN116820769A (en) Task allocation method, device and system
CN111709723A (en) RPA business process intelligent processing method, device, computer equipment and storage medium
CN110750350A (en) Large resource scheduling method, system, device and readable storage medium
CN108200185B (en) Method and device for realizing load balance
CN109670932B (en) Credit data accounting method, apparatus, system and computer storage medium
CN107391262B (en) Job scheduling method and device
Ungureanu et al. Deferred assignment scheduling in cluster-based servers

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20200923

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Patentee after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Patentee before: Advanced innovation technology Co.,Ltd.

Effective date of registration: 20200923

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Patentee after: Advanced innovation technology Co.,Ltd.

Address before: A four-storey 847 mailbox in Grand Cayman Capital Building, British Cayman Islands

Patentee before: Alibaba Group Holding Ltd.