CN112181622A - Task scheduling method and device, computer equipment and storage medium - Google Patents

Task scheduling method and device, computer equipment and storage medium

Info

Publication number
CN112181622A
CN112181622A (application CN202011058257.4A)
Authority
CN
China
Prior art keywords
task
queue
processed
workers
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011058257.4A
Other languages
Chinese (zh)
Inventor
智鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Dajiaying Information Technology Co Ltd
Original Assignee
Suzhou Dajiaying Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Dajiaying Information Technology Co Ltd filed Critical Suzhou Dajiaying Information Technology Co Ltd
Priority to CN202011058257.4A priority Critical patent/CN112181622A/en
Publication of CN112181622A publication Critical patent/CN112181622A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Factory Administration (AREA)

Abstract

The application relates to a task scheduling method and apparatus, a computer device, and a storage medium. A scheduler monitors a first task queue; if a task to be processed exists in the first task queue, the scheduler fetches it from the first task queue, distributes it to a target worker in a work queue, and the target worker executes it. The method uses the first task queue and the work queue as a dual-queue structure to carry out the scheduler's work, so task scheduling is performed efficiently, and the number of workers in the work queue explicitly specifies the number of concurrent work coroutines. The program's throughput is thereby increased, the execution efficiency of task scheduling is improved, and response time is shortened. Further, with the increased throughput, the server fleet can be scaled down and hardware costs reduced for the same number of requests.

Description

Task scheduling method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer application technologies, and in particular, to a task scheduling method and apparatus, a computer device, and a storage medium.
Background
With the development of internet technology, the back-end system of a To B (business-facing) enterprise generally needs to provide list-style query functions and corresponding data-export functions. Staff perform secondary processing on the exported files (such as Excel spreadsheets) or provide them to suppliers for reconciliation, accounting, and the like, so the time spans and data volumes the export function must support are large.
At present, for self-protection, the data interface provided by a back-end system limits the maximum amount of data returned per call, so when data is pulled in a sequential manner, the upper layer must call the interface repeatedly in a loop and splice the results together to obtain enough data. Task scheduling in the conventional art therefore suffers from low execution efficiency.
Disclosure of Invention
In view of the above, it is necessary to provide a task scheduling method, a task scheduling apparatus, a computer device, and a storage medium capable of improving execution efficiency.
A method of task scheduling, the method comprising:
listening, by a dual-queue-based scheduler, to a first task queue, the dual queue comprising the first task queue and a work queue;
if a task to be processed exists in the first task queue, fetching the task to be processed from the first task queue through the scheduler in first-in-first-out order;
distributing the task to be processed to a target worker in the work queue according to the number of workers in the work queue, wherein the target worker is an idle worker in the work queue;
executing, by the target worker, the task to be processed.
In one embodiment, the number of workers is a maximum concurrency; the distributing the task to be processed to a target worker in the work queue according to the number of workers in the work queue comprises:
comparing the number of workers in an occupied state with the maximum concurrency;
if the number of workers in the occupied state is less than the maximum concurrency, distributing the task to be processed to the target worker in the work queue; and
if the number of workers in the occupied state is equal to the maximum concurrency, waiting for a worker in the occupied state to be released, and distributing the task to be processed to the target worker in the work queue after the worker is released.
In one embodiment, the worker is provided with a second task queue; before the executing, by the target worker, of the task to be processed, the method further comprises:
monitoring, by the target worker, whether the task to be processed exists in the second task queue of the target worker;
the executing, by the target worker, of the task to be processed comprises:
if the task to be processed is detected, executing the task to be processed through the target worker.
In one embodiment, after the executing of the task to be processed by the target worker, the method further comprises:
after the target worker completes the task to be processed, rejoining the target worker to the work queue.
In one embodiment, the scheduler is provided with a pointer to the work queue; the distributing the task to be processed to a target worker in the work queue comprises:
distributing the task to be processed to the target worker in the work queue through the pointer to the work queue.
In one embodiment, the target worker is provided with a pointer to the work queue; the rejoining the target worker to the work queue comprises:
rejoining the target worker to the work queue through the pointer to the work queue.
In one embodiment, after the executing of the task to be processed by the target worker, the method further comprises:
when the application program exits, exiting the scheduler through a quit object and notifying the workers in the occupied state, based on a signaling mechanism of a context object, to prepare to exit, so that each occupied worker exits after completing its current task.
A task scheduling apparatus, the apparatus comprising:
a task monitoring module, configured to monitor a first task queue through a dual-queue-based scheduler, wherein the dual queue comprises the first task queue and a work queue;
a task obtaining module, configured to fetch a task to be processed from the first task queue through the scheduler in first-in-first-out order if a task to be processed is detected in the first task queue;
a task allocation module, configured to distribute the task to be processed to a target worker in the work queue according to the number of workers in the work queue, the target worker being an idle worker in the work queue; and
a task execution module, configured to execute the task to be processed through the target worker.
A task scheduling device, applied to a scheduler provided with a dual queue comprising a first task queue and a work queue; the device comprises:
a monitoring module, configured to monitor the first task queue;
an acquisition module, configured to fetch a task to be processed from the first task queue in first-in-first-out order if a task to be processed exists in the first task queue; and
a distribution module, configured to distribute the task to be processed to a target worker in the work queue according to the number of workers in the work queue, the target worker being an idle worker in the work queue.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the task scheduling method in any of the above embodiments when the processor executes the computer program.
A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the task scheduling method of any of the above embodiments.
According to the task scheduling method and apparatus, the computer device, and the storage medium, a scheduler monitors a first task queue; if a task to be processed exists in the first task queue, the scheduler fetches it from the first task queue, distributes it to a target worker in the work queue according to the number of workers in the work queue, and the target worker executes it. Tasks to be processed are thus executed concurrently through the work queue instead of sequentially, which improves execution efficiency. Further, the first task queue and the work queue together form a dual-queue structure that carries out the scheduler's work, so task scheduling is performed efficiently. The number of workers in the work queue explicitly specifies the number of concurrent work coroutines, which increases the program's throughput, further improves the execution efficiency of task scheduling, and shortens response time. With the increased throughput, the server fleet can be scaled down and hardware costs reduced for the same number of requests.
Drawings
FIG. 1 is a diagram of an application environment of a task scheduling method in one embodiment;
FIG. 2 is a flowchart illustrating a task scheduling method according to an embodiment;
FIG. 3a is a flowchart illustrating step S230 according to an embodiment;
FIG. 3b is a flowchart illustrating a task scheduling method according to an embodiment;
FIG. 4 is a flowchart illustrating a task scheduling method according to another embodiment;
FIG. 5a is a block diagram of a task scheduler in one embodiment;
FIG. 5b is a block diagram of a task scheduler in one embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the conventional technology, on the one hand, the back-end systems of some To B companies typically contain a large number of list-style queries and corresponding data-export functions. Because secondary processing must be performed on the exported files (such as Excel spreadsheets), or the files must be provided to suppliers for reconciliation, accounting, and the like, the time spans and data volumes the export function must support are huge. An interface A that provides the data protects itself by limiting the maximum amount of data returned per call, so the upper layer must call interface A repeatedly in a loop and splice the results to pull enough data.
On the other hand, when the number of requests surges, a message queue (MQ) is often introduced for peak shaving and valley filling. Although requests are decomposed into asynchronous ones, the messages backlogged on the MQ servers still need to be processed quickly, and the key is to increase the consumers' consumption speed through parallel multitasking.
On yet another hand, for a service that depends on a third-party service, the third party imposes a QPS limit on its clients, for example at most 10 requests per second. If calls to the third party are executed sequentially, the third party's rate-limiting policy is not triggered, but the service's execution time is lengthened. If the third party is called concurrently without control over the concurrency, the third party's rate-limiting policy is likely to be triggered and the calls fail.
Through analysis of these three scenarios, the inventor found the following. (1) Sequential execution causes two problems: first, execution efficiency is low, users wait a long time, and the experience is poor; second, the capability of multi-core computing nodes goes unused, system utilization is low, and system resources are wasted. (2) Under large data volumes or high concurrency, unconstrained concurrent execution may exhaust system resources, so that other programs deployed on the same host cannot obtain sufficient system resources.
Based on this, the present application provides a task scheduling method, which can be applied in the application environment shown in fig. 1, where the terminal 102 communicates with the server 104 via a network. The terminal 102 receives a user's operation instruction, such as a list query instruction or a data export instruction, and initiates a data request to the server 104 accordingly. In response to the data request, the server 104 monitors a first task queue through a dual-queue-based scheduler, the first task queue and a work queue forming the dual queue; if a task to be processed is detected in the first task queue, the scheduler fetches it from the first task queue and distributes it to a target worker in the work queue according to the number of workers in the work queue, the target worker being an idle worker in the work queue; the target worker then executes the task to be processed.
The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, or portable wearable device, and the server 104 may be implemented by an independent server or by a server cluster formed by a plurality of servers.
In an embodiment, as shown in fig. 2, a task scheduling method is provided, described by taking its application in the environment of fig. 1 as an example, and includes the following steps:
Step S210, monitoring a first task queue through the dual-queue-based scheduler.
Step S220, if a task to be processed is detected in the first task queue, fetching the task to be processed from the first task queue through the scheduler.
Here, a scheduler is a task scheduler used to schedule processes so that related tasks are performed automatically, and a process is the concept the operating system uses to organize tasks in a computer. A queue is a linear storage structure with strict rules for storing and fetching data: data enters and leaves in first-in-first-out order, i.e. the first data element to enter the queue is also the first to leave it. The scheduler is the component that sends tasks from the task (job) queue to the workers that execute them, and the first task queue serves as a buffer for newly added tasks. The scheduler is also provided with a work queue, a multi-worker processing pattern in which tasks to be processed are distributed to the workers in the queue. The number of workers in the work queue is the maximum concurrency, and the first task queue and the work queue form the scheduler's dual queue.
Specifically, the first task queue serves as a buffer for newly added tasks: when a new task is received, it is appended to the first task queue. The dual-queue-based scheduler monitors the first task queue to detect whether it contains tasks to be processed; if so, the scheduler fetches one task to be processed from the first task queue in first-in-first-out order.
Step S230, according to the number of workers in the work queue, allocating the task to be processed to the target worker in the work queue.
Step S240, the task to be processed is executed by the target worker.
Here, the work queue contains a number of workers. Each worker may be a separate coroutine (goroutine), and the workers execute without affecting or being aware of one another. The target worker is an idle worker in the work queue. Specifically, after a task to be processed has been detected in and fetched from the first task queue, the work queue is monitored for an idle worker; if one is found, that idle worker is determined to be the target worker responsible for executing the task to be processed. The scheduler distributes the fetched task to the target worker, and the target worker executes it.
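The dispatch flow just described — a first task queue buffering pending tasks, a work queue of idle workers, and a length-1 second task queue per worker — can be sketched in Go, the language the embodiments' goroutine and context references suggest. All identifiers below are illustrative, not taken from the patent:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Job mirrors the abstract task interface used by the scheduler.
type Job interface {
	Run() error
}

// countJob is an illustrative task that increments a shared counter.
type countJob struct{ n *int64 }

func (c countJob) Run() error {
	atomic.AddInt64(c.n, 1)
	return nil
}

// worker owns a length-1 "second task queue" and a reference back to
// the scheduler's work queue so it can rejoin after finishing a task.
type worker struct {
	tasks     chan Job
	workQueue chan *worker
}

func (w *worker) loop(done chan<- struct{}) {
	for job := range w.tasks { // listen on the second task queue
		_ = job.Run()
		w.workQueue <- w // rejoin: notify the scheduler "I am idle"
		done <- struct{}{}
	}
}

// runScheduler pushes nJobs tasks through the dual-queue scheduler
// with nWorkers workers and returns how many tasks actually ran.
func runScheduler(nWorkers, nJobs int) int64 {
	var ran int64
	jobQueue := make(chan Job, nJobs)         // first task queue (FIFO buffer)
	workQueue := make(chan *worker, nWorkers) // work queue of idle workers
	done := make(chan struct{})

	for i := 0; i < nWorkers; i++ {
		w := &worker{tasks: make(chan Job, 1), workQueue: workQueue}
		workQueue <- w // every worker starts idle
		go w.loop(done)
	}

	// Scheduler goroutine: fetch pending tasks FIFO, block until an
	// idle worker appears in the work queue, then hand the task over.
	go func() {
		for job := range jobQueue {
			w := <-workQueue // blocks while all workers are occupied
			w.tasks <- job   // push onto the worker's second task queue
		}
	}()

	for i := 0; i < nJobs; i++ {
		jobQueue <- countJob{n: &ran}
	}
	for i := 0; i < nJobs; i++ {
		<-done
	}
	return atomic.LoadInt64(&ran)
}

func main() {
	fmt.Println(runScheduler(3, 10)) // prints 10
}
```

Channels serve as both queues: the buffered `jobQueue` gives first-in-first-out delivery, and receiving from `workQueue` naturally blocks the scheduler until some worker is idle.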
In the task scheduling method above, a scheduler monitors a first task queue; if a task to be processed is detected in the first task queue, the scheduler fetches it, distributes it to a target worker in the work queue according to the number of workers in the work queue, and the target worker executes it. Tasks to be processed are thus executed concurrently through the work queue instead of sequentially, which improves execution efficiency. Further, the first task queue and the work queue form a dual-queue structure that carries out the scheduler's work, so task scheduling is performed efficiently. The number of workers in the work queue explicitly specifies the number of concurrent work coroutines, which increases the program's throughput, further improves the execution efficiency of task scheduling, and shortens response time. With the increased throughput, the server fleet can be scaled down and hardware costs reduced for the same number of requests.
In one embodiment, the number of workers is the maximum concurrency. As shown in FIG. 3a, step S230, distributing the task to be processed to a target worker in the work queue according to the number of workers in the work queue, comprises the following steps:
Step S310a, comparing the number of workers in the occupied state with the maximum concurrency.
Step S320a, if the number of workers in the occupied state is less than the maximum concurrency, distributing the task to be processed to the target worker in the work queue.
Step S330a, if the number of workers in the occupied state is equal to the maximum concurrency, waiting for a worker in the occupied state to be released, and distributing the task to be processed to the target worker in the work queue after the worker is released.
Here, the scheduler defines the number of workers to initialize, and that number can be set according to actual conditions. Specifically, the number of workers in the occupied state in the work queue is determined and compared with the maximum concurrency. If it is less than the maximum concurrency, an idle worker exists in the work queue; that idle worker is determined to be the target worker, and the scheduler distributes the task to be processed to it. If the number of occupied workers equals the maximum concurrency, every worker in the work queue is occupied, and the current task to be processed can only be executed once a worker is released, so the scheduler waits for an occupied worker to be released. When a worker finishes executing its current task, it is released and determined to be the target worker, and the scheduler distributes the task to be processed to it.
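The occupancy check above can be sketched with idle-worker tokens in a buffered channel: dispatch blocks exactly when the number of occupied workers equals the maximum concurrency. This is a minimal illustration, not the patent's implementation, and all names are assumptions:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// maxInFlight pushes nJobs simulated tasks through a work queue of
// nWorkers idle-worker tokens and reports the peak number of tasks
// running at the same time.
func maxInFlight(nWorkers, nJobs int) int {
	workQueue := make(chan struct{}, nWorkers)
	for i := 0; i < nWorkers; i++ {
		workQueue <- struct{}{} // one token per idle worker
	}

	var mu sync.Mutex
	cur, peak := 0, 0
	var wg sync.WaitGroup

	for i := 0; i < nJobs; i++ {
		<-workQueue // blocks when occupied workers == maximum concurrency
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			cur++
			if cur > peak {
				peak = cur
			}
			mu.Unlock()

			time.Sleep(5 * time.Millisecond) // simulated work

			mu.Lock()
			cur--
			mu.Unlock()
			workQueue <- struct{}{} // release the worker
		}()
	}
	wg.Wait()
	return peak
}

func main() {
	fmt.Println(maxInFlight(3, 12) <= 3) // prints true
}
```

Because only `nWorkers` tokens ever exist, no explicit counter comparison is needed: the blocking receive enforces "occupied < maximum concurrency" before each dispatch.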
In this embodiment, the scheduler defines the number of workers and thereby controls the maximum concurrency, avoiding an excessive number of occupied coroutines. This prevents system resources from being exhausted, ensures that other programs deployed on the same host can still obtain sufficient system resources, and further avoids the application program being killed by the operating system.
In one embodiment, as shown in FIG. 3b, the worker is provided with a second task queue. Before the task to be processed is executed by the target worker, the method further comprises:
Step S310b, monitoring, by the target worker, the target worker's own second task queue for the task to be processed.
Executing, by the target worker, the task to be processed comprises:
Step S320b, if the task to be processed is detected, executing the task to be processed through the target worker.
Here, each worker is provided with its own second task queue, which holds the task to be processed that the scheduler has distributed to that worker; the length of the second task queue may be 1. If the second task queue is empty, i.e. contains no task to be processed, the worker is idle — an idle worker is a worker whose second task queue is empty. Specifically, after the scheduler distributes a task to the target worker, the scheduler's own work is done and it returns to monitoring the first task queue. The target worker monitors its second task queue for a task distributed by the scheduler; if one is detected, meaning the scheduler has assigned the task to this worker, the target worker executes it.
In this embodiment, the target worker monitors whether a task to be processed exists in its own second task queue and, if so, executes it, so that tasks to be processed are executed efficiently by the target worker.
In one embodiment, after the task to be processed is executed by the target worker, the method further comprises: after the target worker completes the task to be processed, rejoining the target worker to the work queue.
Specifically, the scheduler distributes the task to be processed to the target worker, and the target worker monitors its second task queue for the distributed task; if one is detected, the target worker executes it. After completing the task, the target worker releases the resources it occupied and adds itself back into the scheduler's work queue, thereby notifying the scheduler that it is idle; this cycle then repeats.
In this embodiment, after the target worker completes the task to be processed, it rejoins the work queue. This both notifies the scheduler that the worker is idle again and releases the resources the worker occupied, reducing the waste of system resources.
In one embodiment, the scheduler is provided with a pointer to the work queue. Distributing the task to be processed to a target worker in the work queue comprises: distributing the task to be processed to the target worker in the work queue through the pointer to the work queue.
Here, a pointer is a memory address, and the scheduler holds a pointer to the work queue. Specifically, when distributing a task to be processed, the scheduler determines the target worker's memory address through the pointer to the work queue, and thereby distributes the task to the target worker in the work queue.
In one embodiment, the target worker is provided with a pointer to the work queue. Rejoining the target worker to the work queue comprises: rejoining the target worker to the work queue through the pointer to the work queue.
Here, each worker holds a pointer to the scheduler's work queue. Specifically, after completing its task, the target worker must rejoin the work queue; it determines the work queue's memory address through the pointer, adds itself back into the scheduler's work queue, and thereby notifies the scheduler that it is idle, with this cycle repeating in turn.
In one embodiment, after the task to be processed is executed by the target worker, the method further comprises: when the application exits, exiting the scheduler through a quit object and notifying the occupied workers, through a signaling mechanism based on a context object, to prepare to exit, so that each occupied worker exits after completing its current task.
Here, the quit object may be triggered by the scheduler's Stop method. The lifetime of a context object is typically a single request-processing cycle: a Context variable is created for a request, and after the request is processed the Context variable is cancelled and its resources are released. Specifically, when the application exits, the scheduler's Stop method triggers the quit object so that the scheduler exits. Through the context-object-based signaling mechanism, the occupied workers managed by the mechanism are notified to prepare to exit, so that each exits after completing its current task. The scheduler's resources are thereby fully released, and waste of system resources is avoided.
The scheduler's exit mechanism is described by way of example. When an application that uses the scheduler of this embodiment needs to quit, it must be ensured that the coroutines in its process space exit together after finishing their current tasks — that is, system resources are released without violently interrupting work in progress. In the scheduler's design, the Stop method therefore has three roles: first, close the entrance of the scheduler's first task queue so that no new tasks are accepted; second, wait until all tasks in the first task queue have been executed; third, set the context object held by the scheduler to the terminated state. Each worker monitors whether the context object has entered the terminated state and, upon detecting this, exits on its own after finishing its current task, releasing its coroutine resources. By that point, the scheduler component has released all the resources it occupied.
The context-object-based signaling mechanism is described illustratively. The mechanism has two layers: in the first, the application notifies the scheduler component, triggered by the Stop method; in the second, the scheduler notifies all workers of the exit through the context object. The application holds a pointer to the scheduler, and each worker is an internal component of the scheduler, so the application notifies only the scheduler, while the workers inside the scheduler are notified and controlled by the scheduler itself.
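As a minimal sketch of the Stop/exit flow described above (the patent publishes no source code, so the names Scheduler, NewScheduler, Job and RunDemo below are illustrative assumptions), the three roles of Stop and the worker-side monitoring of the context object might look like:

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// Job stands in for a schedulable task in this sketch.
type Job func()

type Scheduler struct {
	jobs   chan Job           // first task queue
	cancel context.CancelFunc // sets the held context to the termination state
	wg     sync.WaitGroup
}

func NewScheduler(workers int) *Scheduler {
	ctx, cancel := context.WithCancel(context.Background())
	s := &Scheduler{jobs: make(chan Job, 16), cancel: cancel}
	for i := 0; i < workers; i++ {
		s.wg.Add(1)
		go func() {
			defer s.wg.Done()
			for {
				select {
				case job, ok := <-s.jobs:
					if !ok {
						return // queue entrance closed and drained
					}
					job() // a worker always finishes its current task
				case <-ctx.Done():
					return // termination state observed: exit
				}
			}
		}()
	}
	return s
}

// Stop performs the three roles described above.
func (s *Scheduler) Stop() {
	close(s.jobs) // role 1: close the first task queue's entrance
	s.wg.Wait()   // role 2: wait for queued tasks to finish executing
	s.cancel()    // role 3: set the context object to the termination state
}

// RunDemo submits n trivial tasks, stops the scheduler, and returns how
// many tasks completed before exit (all of them, by design).
func RunDemo(n int) int {
	var mu sync.Mutex
	done := 0
	s := NewScheduler(3)
	for i := 0; i < n; i++ {
		s.jobs <- func() { mu.Lock(); done++; mu.Unlock() }
	}
	s.Stop()
	return done
}

func main() {
	fmt.Println("tasks completed before exit:", RunDemo(5))
}
```

Note that cancel is called only after the wait group drains, so no in-progress task is ever interrupted — this mirrors the requirement that resources be released without forcibly stopping running work.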
In one embodiment, the scheduler is in a componentized form, and the scheduler defines the tasks in the first task queue as abstract interfaces.
Illustratively, the tasks in the first task queue are defined as the abstract interface:

type Job interface {
    Run() error
}
In this embodiment, the componentized high-performance asynchronous scheduler is general-purpose at the language level and has a wide range of application scenarios. Because tasks are defined as an abstract interface, the intrusion into the original program is small and the refactoring cost is low.
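For illustration, any existing operation can satisfy the Job interface with a small wrapper type — which is why the intrusion into the original program stays small. In the sketch below, EmailJob is a hypothetical example task, not part of the patent:

```go
package main

import "fmt"

// The abstract task interface from the embodiment.
type Job interface {
	Run() error
}

// EmailJob is a hypothetical example: wrapping existing logic in a
// type with a Run() error method is the only refactoring needed.
type EmailJob struct {
	To string
}

func (e EmailJob) Run() error {
	fmt.Println("sending email to", e.To) // stand-in for the real work
	return nil
}

func main() {
	var j Job = EmailJob{To: "user@example.com"} // used through the interface
	if err := j.Run(); err != nil {
		fmt.Println("job failed:", err)
	}
}
```

The scheduler only ever sees the Job interface, so it never needs to know about EmailJob or any other concrete task type.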
In one embodiment, the present application provides a task scheduling method, as shown in fig. 4, the method includes the following steps:
Step S410: the first task queue is listened to through the dual-queue-based scheduler.

The scheduler adopts a componentized form and defines the tasks in the first task queue as abstract interfaces. The scheduler is configured with a work queue; the number of workers in the work queue is the maximum concurrent number, and the first task queue and the work queue form the dual queues.

Step S420: if it is detected that a task to be processed exists in the first task queue, the task to be processed is obtained from the first task queue through the scheduler.

Step S430: the task to be processed is allocated to a target worker in the work queue according to the number of workers in the work queue.

The target worker is an idle worker in the work queue; each worker is provided with a second task queue, and the scheduler is provided with a pointer to the work queue. Specifically, the tasks to be processed are allocated, in first-in first-out order, to the target workers in the work queue through the pointer to the work queue.

Step S440: the target worker monitors whether a task to be processed exists in its second task queue.

Step S450: if a task to be processed is detected, it is executed by the target worker.

Step S460: after the target worker completes the task to be processed, the target worker is added to the work queue again.

The target worker is provided with a pointer to the work queue; specifically, the target worker rejoins the work queue through that pointer.

Step S470: when the application program exits, the scheduler is exited through the exit object, and the workers in the occupied state are notified, via the context-object-based signaling mechanism, to prepare to exit, so that each occupied worker exits after completing its current task.
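Steps S410–S460 can be sketched as a Go worker pool in which the work queue is a channel holding the second task queues of idle workers. This is an assumed reconstruction, not the patent's code; RunScheduler, countJob and CountCompleted are illustrative names:

```go
package main

import (
	"fmt"
	"sync"
)

type Job interface{ Run() error }

// countJob is a trivial illustrative task that increments a counter.
type countJob struct {
	mu *sync.Mutex
	n  *int
}

func (c countJob) Run() error { c.mu.Lock(); *c.n++; c.mu.Unlock(); return nil }

// RunScheduler drains the first task queue `jobs` with at most
// maxWorkers concurrent workers, mirroring steps S410-S460.
func RunScheduler(maxWorkers int, jobs <-chan Job) {
	workQueue := make(chan chan Job, maxWorkers) // holds idle workers' second queues
	var wg sync.WaitGroup

	for i := 0; i < maxWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			second := make(chan Job) // the worker's second task queue (S440)
			for {
				workQueue <- second // S460: an idle worker rejoins the work queue
				job, ok := <-second
				if !ok {
					return
				}
				job.Run() // S450: execute the pending task
			}
		}()
	}

	// S410-S430: listen to the first task queue and hand each task to an
	// idle worker; <-workQueue blocks while every worker is occupied.
	for job := range jobs {
		target := <-workQueue
		target <- job
	}
	// Shut down: close each idle worker's second queue so it exits.
	for i := 0; i < maxWorkers; i++ {
		close(<-workQueue)
	}
	wg.Wait()
}

// CountCompleted submits n counting tasks and returns how many ran.
func CountCompleted(n, maxWorkers int) int {
	var mu sync.Mutex
	count := 0
	jobs := make(chan Job, n)
	for i := 0; i < n; i++ {
		jobs <- countJob{mu: &mu, n: &count}
	}
	close(jobs)
	RunScheduler(maxWorkers, jobs)
	return count
}

func main() {
	fmt.Println("tasks executed:", CountCompleted(6, 3))
}
```

Because a worker pushes its second task queue back onto the work queue only when idle, the capacity of the work queue is exactly the maximum concurrent number, and FIFO ordering on both channels gives the first-in first-out behavior of steps S420 and S430.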
It should be understood that, although the steps in the above flowcharts are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different times; the order of their execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5a, there is provided a task scheduling apparatus 500a, including: a task listening module 510a, a task obtaining module 520a, a task allocating module 530a, and a task executing module 540a, wherein:
a task listening module 510a, configured to listen to a first task queue through a dual queue-based scheduler, where the dual queue includes the first task queue and a work queue;
a task obtaining module 520a, configured to obtain, by the scheduler, a to-be-processed task from the first task queue according to a first-in first-out order if it is monitored that the to-be-processed task exists in the first task queue;
a task allocating module 530a, configured to allocate the task to be processed to a target worker in the work queue according to the number of workers in the work queue, where the target worker is a worker that is idle in the work queue;
a task execution module 540a, configured to execute the task to be processed by the target worker.
In one embodiment, the number of workers is a maximum number of concurrencies; a task allocation module 530a further configured to compare the number of workers in the occupied state with the maximum concurrency number; if the number of the workers in the occupied state is smaller than the maximum concurrent number, distributing the tasks to be processed to target workers in the work queue; and if the number of the workers in the occupied state is equal to the maximum concurrent number, waiting for the workers in the occupied state to be released, and distributing the tasks to be processed to the target workers in the work queue after the workers in the occupied state are released.
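The compare-and-wait logic above — allocate while occupied workers are fewer than the maximum concurrent number, otherwise wait for a release — can be made implicit in Go by sizing a buffered channel to the maximum concurrency. A hedged sketch (RunWithLimit is an illustrative name, not the patent's code):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// RunWithLimit executes tasks with at most maxConcurrency running at
// once and returns the peak number observed in flight, to show the
// bound holds. Acquiring a slot blocks exactly when the number of
// occupied workers equals the maximum concurrent number.
func RunWithLimit(tasks []func(), maxConcurrency int) int32 {
	slots := make(chan struct{}, maxConcurrency)
	var inFlight, peak int32
	var wg sync.WaitGroup
	for _, t := range tasks {
		slots <- struct{}{} // blocks while occupied == maxConcurrency
		wg.Add(1)
		go func(t func()) {
			defer wg.Done()
			cur := atomic.AddInt32(&inFlight, 1)
			for { // record the peak occupancy
				p := atomic.LoadInt32(&peak)
				if cur <= p || atomic.CompareAndSwapInt32(&peak, p, cur) {
					break
				}
			}
			t()
			atomic.AddInt32(&inFlight, -1)
			<-slots // release: one worker returns to the idle state
		}(t)
	}
	wg.Wait()
	return peak
}

func main() {
	tasks := make([]func(), 20)
	for i := range tasks {
		tasks[i] = func() {} // trivial task
	}
	peak := RunWithLimit(tasks, 4)
	fmt.Println("peak concurrency within limit:", peak >= 1 && peak <= 4)
}
```

A slot is incremented before a task starts and released only after it finishes, so the number of occupied workers can never exceed the channel's capacity; the explicit comparison in the embodiment becomes a blocking channel send.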
In one embodiment, the worker is provided with a second task queue; the device also comprises a work task monitoring module used for monitoring whether the task to be processed exists in a second task queue of the target worker;
the task execution module 540a is further configured to execute the to-be-processed task through the target worker if the to-be-processed task is monitored.
In one embodiment, the apparatus further comprises a worker join module configured to join the target worker to the work queue again after the target worker completes the pending task.
In one embodiment, the scheduler is provided with a pointer to the work queue; the task allocation module 530a is further configured to allocate the task to be processed to the target worker in the work queue through the pointer of the work queue.
In one embodiment, the target worker is provided with a pointer to the work queue. And the worker joining module is also used for joining the target worker into the work queue again through the pointer of the work queue.
In one embodiment, the apparatus further comprises an exit module; and the exit module is used for exiting the scheduler through the exit object when the application program exits, and informing the workers in the occupied state to prepare for exiting on the basis of a signaling mechanism of the context object so as to enable the workers in the occupied state to exit after the current task is finished.
In one embodiment, the scheduler is in a componentized form and the scheduler defines the tasks in the first task queue as abstract interfaces.
In one embodiment, as shown in fig. 5b, a task scheduling apparatus 500b is applied to a scheduler, the scheduler is provided with dual queues, and the dual queues include the first task queue and a work queue; the device includes:
a monitoring module 510b, configured to monitor a first task queue;
an obtaining module 520b, configured to obtain a task to be processed from the first task queue according to a first-in first-out sequence if it is monitored that the task to be processed exists in the first task queue;
an allocating module 530b, configured to allocate the task to be processed to a target worker in the work queue according to the number of workers in the work queue, where the target worker is a worker that is idle in the work queue.
The scheduler is a component that sends tasks from the task (job) queue to the workers that execute them. The task queue serves as a buffer for newly added tasks. The scheduler is provided with dual queues comprising the first task queue and a work queue. The work queue implements a multi-worker processing mode: during processing, tasks to be processed are allocated to the workers in the work queue, and the number of workers in the work queue is the maximum concurrent number. The target worker is an idle worker in the work queue. Each worker in the work queue may be an independent coroutine, and the workers execute without affecting or being aware of one another.
Specifically, the first task queue may serve as a buffer for newly added tasks: when a newly added task is received, it is added to the first task queue. By listening to the first task queue, the scheduler monitors whether a task to be processed exists in it. If the scheduler detects that tasks to be processed exist in the first task queue, it obtains one task to be processed from the first task queue in first-in first-out order. The scheduler also monitors the work queue for idle workers; if an idle worker is detected, it is determined to be the target worker responsible for executing the task to be processed. The scheduler allocates the obtained task to the target worker, and the target worker executes it.
The task scheduling device implements the scheduler's work through the dual queues formed by the first task queue and the work queue, achieving efficient task scheduling. Because the concurrency of the work coroutines is explicitly specified by the number of workers in the work queue, the processing capacity of the program is improved, the execution efficiency of task scheduling is further improved, and the response time is shortened. With improved processing capacity, the scale of the servers can be reduced for the same number of requests, reducing hardware cost.
For specific limitations of the task scheduling device, reference may be made to the limitations of the task scheduling method above, which are not repeated here. The modules in the task scheduling device can be implemented in whole or in part by software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or be independent of, a processor in the computer device, or can be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a method of task scheduling. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of part of the structure related to the present application and does not limit the computer device to which the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program: listening, by a dual queue based scheduler, for a first task queue, the dual queue comprising the first task queue and a work queue; if the to-be-processed task exists in the first task queue, acquiring the to-be-processed task from the first task queue through the scheduler according to a first-in first-out sequence; distributing the task to be processed to target workers in the work queue according to the number of the workers in the work queue, wherein the target workers are idle workers in the work queue; executing, by the target worker, the task to be processed.
In one embodiment, the number of workers is a maximum number of concurrencies; the processor, when executing the computer program, further performs the steps of: comparing the number of workers in the occupied state to the maximum concurrency number; if the number of the workers in the occupied state is smaller than the maximum concurrent number, distributing the tasks to be processed to target workers in the work queue; and if the number of the workers in the occupied state is equal to the maximum concurrent number, waiting for the workers in the occupied state to be released, and distributing the tasks to be processed to the target workers in the work queue after the workers in the occupied state are released.
In one embodiment, the worker is provided with a second task queue; the processor, when executing the computer program, further performs the steps of: monitoring whether the task to be processed exists in a second task queue of a target worker through the target worker; and if the task to be processed is monitored, executing the task to be processed through the target worker.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and after the target worker completes the task to be processed, adding the target worker into the work queue again.
In one embodiment, the scheduler is provided with a pointer to the work queue; the processor, when executing the computer program, further performs the steps of: and distributing the task to be processed to the target worker in the work queue through the pointer of the work queue.
In one embodiment, the target worker is provided with a pointer to the work queue; the processor, when executing the computer program, further performs the steps of: and adding the target worker into the work queue again through the pointer of the work queue.
In one embodiment, the processor, when executing the computer program, further performs the steps of: when the application program exits, the scheduler exits through an exit object and notifies the worker in the occupied state to prepare to exit based on a signaling mechanism of the context object, so that the worker in the occupied state exits after completing the current task.
In one embodiment, the scheduler is in a componentized form and the scheduler defines the tasks in the first task queue as abstract interfaces.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
listening, by a dual queue based scheduler, for a first task queue, the dual queue comprising the first task queue and a work queue; if the to-be-processed task exists in the first task queue, acquiring the to-be-processed task from the first task queue through the scheduler according to a first-in first-out sequence; distributing the task to be processed to target workers in the work queue according to the number of the workers in the work queue, wherein the target workers are idle workers in the work queue; executing, by the target worker, the task to be processed.
In one embodiment, the number of workers is a maximum number of concurrencies; the computer program when executed by the processor further realizes the steps of: comparing the number of workers in the occupied state to the maximum concurrency number; if the number of the workers in the occupied state is smaller than the maximum concurrent number, distributing the tasks to be processed to target workers in the work queue; and if the number of the workers in the occupied state is equal to the maximum concurrent number, waiting for the workers in the occupied state to be released, and distributing the tasks to be processed to the target workers in the work queue after the workers in the occupied state are released.
In one embodiment, the worker is provided with a second task queue; the computer program when executed by the processor further realizes the steps of: monitoring whether the task to be processed exists in a second task queue of a target worker through the target worker; and if the task to be processed is monitored, executing the task to be processed through the target worker.
In one embodiment, the computer program when executed by the processor further performs the steps of: and after the target worker completes the task to be processed, adding the target worker into the work queue again.
In one embodiment, the scheduler is provided with a pointer to the work queue; the computer program when executed by the processor further realizes the steps of: and distributing the task to be processed to the target worker in the work queue through the pointer of the work queue.
In one embodiment, the target worker is provided with a pointer to the work queue; the computer program when executed by the processor further realizes the steps of: and adding the target worker into the work queue again through the pointer of the work queue.
In one embodiment, the computer program when executed by the processor further performs the steps of: when the application program exits, the scheduler exits through an exit object and notifies the worker in the occupied state to prepare to exit based on a signaling mechanism of the context object, so that the worker in the occupied state exits after completing the current task.
In one embodiment, the scheduler is in a componentized form and the scheduler defines the tasks in the first task queue as abstract interfaces.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that several variations and improvements can be made by those of ordinary skill in the art without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for task scheduling, the method comprising:
listening, by a dual queue based scheduler, for a first task queue, the dual queue comprising the first task queue and a work queue;
if the to-be-processed task exists in the first task queue, acquiring the to-be-processed task from the first task queue through the scheduler according to a first-in first-out sequence;
distributing the task to be processed to target workers in the work queue according to the number of the workers in the work queue, wherein the target workers are idle workers in the work queue;
executing, by the target worker, the task to be processed.
2. The method of claim 1, wherein the number of workers is a maximum number of concurrencies; the allocating the task to be processed to the target workers in the work queue according to the number of workers in the work queue comprises:
comparing the number of workers in the occupied state to the maximum concurrency number;
if the number of the workers in the occupied state is smaller than the maximum concurrent number, distributing the tasks to be processed to target workers in the work queue;
and if the number of the workers in the occupied state is equal to the maximum concurrent number, waiting for the workers in the occupied state to be released, and distributing the tasks to be processed to the target workers in the work queue after the workers in the occupied state are released.
3. The method of claim 1, wherein the worker is provided with a second task queue; prior to the performing, by the target worker, the pending task, the method further comprises:
monitoring whether the task to be processed exists in a second task queue of the target worker through the target worker;
the executing, by the target worker, the task to be processed includes:
and if the task to be processed is monitored, executing the task to be processed through the target worker.
4. The method of claim 3, wherein after the performing of the pending task by the target worker, the method further comprises:
and after the target worker completes the task to be processed, adding the target worker into the work queue again.
5. The method of claim 1, wherein the scheduler is provided with a pointer to the work queue, and wherein the target worker is provided with a pointer to the work queue; the allocating the pending task to a target worker in the work queue comprises:
distributing the task to be processed to a target worker in the work queue through a pointer of the work queue;
the rejoining the target worker to the work queue includes:
and adding the target worker into the work queue again through the pointer of the work queue.
6. The method of any one of claims 1 to 5, wherein after the performance of the pending task by the target worker, the method further comprises:
when the application program exits, the scheduler exits through an exit object and notifies the worker in the occupied state to prepare to exit based on a signaling mechanism of the context object, so that the worker in the occupied state exits after completing the current task.
7. A task scheduling apparatus, characterized in that the apparatus comprises:
the task monitoring module is used for monitoring a first task queue through a scheduler based on a double queue, wherein the double queue comprises the first task queue and a work queue;
a task obtaining module, configured to obtain a to-be-processed task from the first task queue through the scheduler according to a first-in first-out order if it is monitored that the to-be-processed task exists in the first task queue;
the task allocation module is used for allocating the tasks to be processed to target workers in the work queue according to the number of the workers in the work queue, and the target workers are idle workers in the work queue;
and the task execution module is used for executing the task to be processed through the target worker.
8. The task scheduling device is applied to a scheduler, wherein the scheduler is provided with a double queue, and the double queue comprises a first task queue and a work queue; the device comprises:
the monitoring module is used for monitoring the first task queue;
the acquisition module is used for acquiring the tasks to be processed from the first task queue according to a first-in first-out sequence if the tasks to be processed exist in the first task queue;
and the distribution module is used for distributing the tasks to be processed to target workers in the work queue according to the number of the workers in the work queue, and the target workers are idle workers in the work queue.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 6.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN202011058257.4A 2020-09-30 2020-09-30 Task scheduling method and device, computer equipment and storage medium Pending CN112181622A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011058257.4A CN112181622A (en) 2020-09-30 2020-09-30 Task scheduling method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112181622A (en) 2021-01-05

Family

ID=73946141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011058257.4A Pending CN112181622A (en) 2020-09-30 2020-09-30 Task scheduling method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112181622A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110515710A (en) * 2019-08-06 2019-11-29 深圳市随手科技有限公司 Asynchronous task scheduling method, apparatus, computer equipment and storage medium
CN111562922A (en) * 2020-04-29 2020-08-21 北京中大唯信科技有限公司 Method, system and electronic equipment for modularizing command line program and cloud-end method and system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210105