CN116048756A - Queue scheduling method and device and related equipment


Info

Publication number: CN116048756A
Application number: CN202310007056.9A
Authority: CN (China)
Prior art keywords: queue, thread, processed, tasks, task
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 郭小源
Current Assignee: Shenzhen Huawei Cloud Computing Technology Co., Ltd.
Original Assignee: Shenzhen Huawei Cloud Computing Technology Co., Ltd.
Application filed by Shenzhen Huawei Cloud Computing Technology Co., Ltd.

Classifications

    • G06F9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/5027: Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F2209/5011: Indexing scheme relating to G06F9/50; Pool
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a queue scheduling method for improving the utilization rate of threads without affecting the normal execution of tasks to be processed. The queue scheduling method is used for scheduling queues corresponding to threads in a thread pool. The thread pool comprises a plurality of threads in an execution state, each thread in the plurality of threads corresponds to at least two queues, and the plurality of threads comprise a first thread and a second thread. The method comprises the following steps: decoupling a first queue from the first thread, wherein the first queue is a standby queue of the first thread and comprises a plurality of tasks to be processed; and adjusting the thread associated with the first queue to be the second thread, wherein the total number of tasks to be processed in a plurality of queues corresponding to the second thread is smaller than or equal to a first threshold value. The application also provides corresponding apparatus, computing device clusters, chips, computer-readable storage media, and computer program products.

Description

Queue scheduling method and device and related equipment
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a queue scheduling method, apparatus, and related devices.
Background
Queues are a common data structure with first-in, first-out characteristics. Queue scheduling refers to scheduling elements in a queue among multiple queues. In a multi-threaded application scenario, tasks may be processed through multiple queues. Specifically, each thread of the plurality of threads may independently process a task to be processed, and each thread may correspond to a queue. After a new task to be processed is acquired, the task to be processed may be added to a queue. The threads process tasks in the queue in order. In this way, different threads can process multiple tasks to be processed in sequence.
As threads run, the tasks to be processed may become unevenly distributed across the queues. For example, one thread may consume the tasks in its queue quickly, while other threads consume the tasks in their queues relatively slowly. As a result, the queue corresponding to the faster thread holds relatively few tasks to be processed, while the queues corresponding to the slower threads hold relatively many. Part of the tasks to be processed therefore wait a long time to be processed, which affects the normal processing of those tasks.
To solve the above problem, tasks to be processed may be scheduled among a plurality of queues. Specifically, tasks may be transferred from a queue holding many tasks to be processed to a queue holding few. In this way, the number of tasks to be processed can be balanced across the different queues. However, there may be a dependency between a task to be processed in a queue and the corresponding thread. As such, moving a task to be processed from one queue to another may prevent the task from being processed normally.
Disclosure of Invention
In view of this, the present application provides a queue scheduling method for improving the utilization rate of threads on the basis that normal execution of a task to be processed is not affected. The application also provides corresponding apparatus, computing device clusters, chips, computer-readable storage media, and computer program products.
In a first aspect, the present application provides a method for scheduling a queue, where the method may be applied to a queue scheduling device and is used to schedule queues corresponding to threads in a thread pool. The thread pool comprises a plurality of threads in an execution state, and each thread in the plurality of threads corresponds to at least two queues. The at least two queues include a main queue and at least one standby queue. During thread running, a task to be processed can be taken out of the main queue and processed. The plurality of threads includes a first thread and a second thread. Specifically, when executing the queue scheduling method provided in the present application, the queue scheduling device may first decouple the first queue from the first thread. The first queue is a standby queue of the first thread, and the first queue comprises a plurality of tasks to be processed. After decoupling the first queue from the first thread, the queue scheduling apparatus may adjust the thread associated with the first queue to the second thread. The total number of tasks to be processed in the queues corresponding to the second thread is smaller than or equal to a first threshold value. If the total number of tasks to be processed in the plurality of queues corresponding to the second thread is determined to be smaller than or equal to the first threshold value, the second thread may idle, which would waste computing resources. The queue scheduling means associates the first queue with the second thread, and the second thread can obtain tasks to be processed from the first queue and process them. Because it has enough tasks to be processed, the second thread is not in an idle state, and the waste of computing resources is avoided. On the other hand, even if a task to be processed added to the first queue generates a dependency relationship with the first queue, the above method associates the entire first queue with the second thread and can therefore maintain the dependency relationship between the tasks to be processed in the first queue and the first queue. That is, by decoupling the first queue from the first thread and adjusting the thread associated with the first queue, the present application migrates the first queue and all pending tasks in the first queue, as a whole, under the second thread. Therefore, on one hand, normal execution of tasks to be processed in the queue can be guaranteed, and on the other hand, thread idling can be avoided, improving the utilization rate of computing resources.
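To make the first aspect concrete, the following is a minimal sketch of migrating a queue between threads as a whole object. All names (Queue, Worker, migrate_queue), the choice of Python, and the locking scheme are illustrative assumptions, not details taken from the application itself.

```python
import threading
from collections import deque

class Queue:
    """FIFO queue of pending tasks. A task may depend on the Queue
    object it was added to, so a queue must move between threads as
    a whole."""
    def __init__(self):
        self.tasks = deque()
        self._lock = threading.Lock()

    def __len__(self):
        with self._lock:
            return len(self.tasks)

    def push(self, task):
        with self._lock:
            self.tasks.append(task)

    def pop(self):
        with self._lock:
            return self.tasks.popleft() if self.tasks else None

class Worker:
    """One pool thread: a main queue plus standby queues."""
    def __init__(self):
        self.primary = Queue()
        self.standbys = []
        self.lock = threading.RLock()

    def pending(self):
        """Total number of tasks across all queues of this thread."""
        with self.lock:
            return len(self.primary) + sum(len(q) for q in self.standbys)

def migrate_queue(src, dst, q):
    """Decouple standby queue q from src and associate it with dst.
    The queue and all of its tasks move together, so any task-to-queue
    dependency is preserved."""
    with src.lock:
        src.standbys.remove(q)                # decouple from the first thread
    with dst.lock:
        if len(dst.primary) > 0:
            dst.standbys.append(q)            # dst busy: q joins as a standby
        else:
            dst.standbys.append(dst.primary)  # demote the empty main queue
            dst.primary = q                   # q becomes the new main queue
```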
In some possible embodiments, the queue scheduling apparatus may determine that the total number of tasks to be processed in the plurality of queues corresponding to the second thread is less than or equal to the first threshold. Specifically, the queue scheduling device may periodically or aperiodically acquire the total number of tasks to be processed in the plurality of queues corresponding to each thread in the thread pool, and determine whether the total number of tasks to be processed in the plurality of queues corresponding to each thread is less than or equal to the first threshold. After determining that the total number of tasks to be processed in the plurality of queues corresponding to the second thread is less than or equal to the first threshold, the queue scheduling device may begin decoupling the first queue and the first thread.
In some possible embodiments, the queue scheduling means may also start scheduling new queues for a thread according to the thread's working state. For example, after decoupling the first queue from the first thread and scheduling the first queue to the second thread, the queue scheduling device may obtain the working state of each thread in the thread pool and determine whether that working state is an idle state. If a thread, for example the second thread, is in an idle state, the queue scheduling means may schedule a queue in which tasks to be processed exist for that thread.
In some possible implementations, the queue scheduling means may set the first queue as the primary queue of the second thread. For example, if no task to be processed exists in the multiple queues corresponding to the second thread, the queue scheduling device may set the first queue as a main queue of the second thread, so that the second thread processes the task to be processed in the first queue. Or if the main queue corresponding to the second thread does not have the task to be processed, the queue scheduling device can set the first queue as the main queue of the second thread.
In some possible implementations, the queue scheduling means may also set the first queue as a standby queue for the second thread. For example, if the primary queue corresponding to the second thread has a task to be processed, the queue scheduling device may set the first queue as a standby queue of the second thread.
In some possible embodiments, the queue scheduling device may switch the roles of the plurality of queues corresponding to a thread, so that the thread processes the tasks to be processed in different queues. For example, assume that the first thread also corresponds to a second queue and a third queue. The second queue is the main queue of the first thread, the third queue is a standby queue of the first thread, and the third queue comprises at least one task to be processed. The first thread may first process the tasks to be processed in the second queue. If the second queue has no task to be processed, the queue scheduling device can adjust the second queue to be a standby queue of the first thread and adjust the third queue to be the main queue of the first thread. Thus, the first thread can process the tasks to be processed in the third queue. Alternatively, if the first queue is still a standby queue of the first thread, the queue scheduling device may also adjust the first queue to be the main queue of the first thread.
In some possible embodiments, the queue scheduling means may add a task to be processed to the main queue or a standby queue of a thread according to the number of tasks to be processed in the thread's main queue. Specifically, taking a first task to be processed to be added to a target thread as an example, after receiving a task adding request for adding the first task to be processed to the target thread, the queue scheduling device may determine whether the number of tasks to be processed in the main queue of the target thread is less than a second threshold. If so, the queue scheduling device may add the first task to be processed to the main queue of the target thread. If the number of tasks to be processed in the main queue of the target thread is not less than the second threshold, the queue scheduling device may add the first task to be processed to a standby queue of the target thread.
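A rough sketch of this task-add path, reusing the hypothetical Worker and Queue types from the earlier sketch (the threshold value is an arbitrary assumption):

```python
SECOND_THRESHOLD = 8  # assumed capacity bound for a main queue

def add_task(target, task):
    """Add a task to the target thread's main queue while it is below
    the second threshold, otherwise to one of its standby queues."""
    with target.lock:
        if len(target.primary) < SECOND_THRESHOLD:
            target.primary.push(task)
        else:
            # one possible policy: pick the least-loaded standby queue
            # (assumes each thread has at least one standby queue)
            min(target.standbys, key=len).push(task)
```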
In a second aspect, the present application provides a queue scheduling apparatus, where the apparatus is configured to schedule queues corresponding to threads in a thread pool, the thread pool includes a plurality of threads in an executing state, each thread in the plurality of threads corresponds to at least two queues, and the plurality of threads includes a first thread and a second thread. The apparatus includes: a decoupling module, configured to decouple a first queue from the first thread, where the first queue is a standby queue of the first thread and comprises a plurality of tasks to be processed; and an association module, configured to adjust the thread associated with the first queue to the second thread, where the total number of tasks to be processed in the queues corresponding to the second thread is smaller than or equal to a first threshold value.
In some possible embodiments, the apparatus further includes a first scheduling initiation module, where the first scheduling initiation module is configured to obtain a total number of tasks to be processed in a plurality of queues corresponding to each thread in the plurality of threads; and determining that the total number of the tasks to be processed in the plurality of queues corresponding to the second thread is smaller than or equal to a first threshold according to the total number of the tasks to be processed in the plurality of queues corresponding to each thread.
In some possible implementations, the apparatus further includes a second schedule start module that determines that the second thread is in an idle state.
In some possible embodiments, the total number of tasks to be processed in the plurality of queues corresponding to the second thread is zero; the association module is specifically configured to set the first queue as a main queue corresponding to the second thread.
In some possible implementations, the first thread further corresponds to a second queue and a third queue, the second queue is the main queue of the first thread, the third queue is a standby queue of the first thread, the third queue includes at least one task to be processed, and the apparatus further includes a queue adjustment module configured to set the third queue as the main queue of the first thread in response to the tasks to be processed in the second queue being completed by the first thread.
In some possible implementations, the apparatus further includes an acquisition module and a queue joining module; the acquisition module is used for acquiring a task adding request, wherein the task adding request is used for adding a first task to be processed to a queue corresponding to a target thread; the queue adding module is configured to add the first task to be processed to the main queue of the target thread if the number of tasks to be processed in the main queue of the target thread is less than a second threshold; and if the number of the tasks to be processed in the main queue of the target thread is not less than a second threshold value, adding the first tasks to be processed into the standby queue of the target thread.
In a third aspect, the present application provides a cluster of computing devices, the cluster comprising at least one computing device, the at least one computing device comprising at least one processor and at least one memory; the at least one memory is configured to store instructions that the at least one processor executes to cause the cluster of computing devices to perform the queue scheduling method in the first aspect or any one of the possible implementations of the first aspect. It should be noted that the memory may be integrated into the processor or may be independent of the processor. The at least one computing device may also include a bus. The processor is connected with the memory through the bus. The memory may include a read-only memory and a random access memory, among others.
In a fourth aspect, the present application provides a computer readable storage medium having instructions stored therein which, when run on at least one computing device, cause the at least one computing device to perform the queue scheduling method of the first aspect or any implementation of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising instructions which, when run on at least one computing device, cause the at least one computing device to perform the queue scheduling method of the first aspect or any implementation of the first aspect.
Drawings
Fig. 1 is a schematic diagram of an application scenario of a queue scheduling device provided in an embodiment of the present application;
fig. 2 is a signaling interaction diagram of a queue scheduling method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a computing device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a computing device cluster according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings in the present application.
The terms "first", "second", and the like in the description, claims, and drawings of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or chronological order. It is to be understood that terms so used are interchangeable under appropriate circumstances and merely distinguish objects of the same nature in the description of the embodiments of the application.
In a multi-threaded application scenario, each thread may process one task to be processed at a time. Thus, with a plurality of threads executing in parallel, a plurality of tasks to be processed can be processed at the same time. Considering that the system may receive multiple pending tasks in a short period of time, one queue is currently set for each thread. After a task to be processed is received, it can be added to the tail of the queue corresponding to one thread; for example, it may be added to the tail of the queue holding the fewest tasks to be processed. After finishing one task to be processed, each thread takes a new task to be processed from the head of its corresponding queue for processing. Thus, even if a plurality of tasks to be processed are received in a short time, they can be processed in time by using the queues as buffers.
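For contrast with what follows, a minimal sketch of this conventional one-queue-per-thread scheme might look as follows; all names and values are illustrative assumptions:

```python
import threading
import time
from collections import deque

queues = [deque() for _ in range(3)]  # one FIFO queue per thread
lock = threading.Lock()

def submit(task):
    """Append a new task to the tail of the shortest queue."""
    with lock:
        min(queues, key=len).append(task)

def worker_loop(q):
    """Each thread repeatedly takes the task at the head of its own
    queue and processes it."""
    while True:
        with lock:
            task = q.popleft() if q else None
        if task is not None:
            task()                    # process the task
        else:
            time.sleep(0.01)          # queue empty: the thread idles

# Example wiring: one worker thread per queue.
# for q in queues:
#     threading.Thread(target=worker_loop, args=(q,), daemon=True).start()
```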
With the execution of the threads, the queues corresponding to some threads may not have tasks to be processed, while the queues corresponding to other threads may have more tasks to be processed. Thus, threads with no tasks to be processed in the queue are in an idle state, and threads with a large number of tasks to be processed in the queue are in a busy state. And the idle thread occupies the computing resources of the computing device (or the computing device cluster), resulting in resource waste.
To solve this problem, scheduling of tasks to be processed can currently be performed between queues. For example, in some possible implementations, the number of tasks to be processed in each queue may be counted. If no task to be processed exists in the queue corresponding to a certain thread or the number of the tasks to be processed in a certain queue is less than a preset threshold value, the tasks to be processed in other queues can be transferred to the queue. In this way, the thread may be prevented from being in an idle state to increase the utilization of computing resources.
For example, assume that thread A, thread B, and thread C execute in parallel, and that thread A corresponds to queue a, thread B corresponds to queue B, and thread C corresponds to queue C. With the running of the thread, the task to be processed in the queue a is rapidly consumed by the thread a until the number of tasks to be processed in the queue a is smaller than a preset threshold. The scheduler may count the number of tasks to be processed in each queue. Since the number of tasks to be processed in the queue a is smaller than the preset threshold, the scheduler may take the tasks to be processed in the queue b (or the queue c) out of the queue b (or the queue c) and add the tasks to the queue a. In this way, thread A can be avoided from being in an idle state, thereby avoiding waste of computing resources.
The scheduling method is to transfer the task to be processed in different queues so as to adjust the thread for processing the task to be processed. However, in some possible application scenarios, the task to be processed may have an association relationship with the thread. Specifically, after a task to be processed is added to a certain queue, there may be an association relationship between the task to be processed and a thread corresponding to the queue. Thus, if the task to be processed is added to another queue, the task to be processed is processed by another thread, which may result in that the task to be processed cannot be processed normally.
The description continues with the above example. Assume that a task to be processed X is added to queue b; a context dependency may be generated between task X and queue b. This context dependency between task X and queue b is, in turn, necessary for processing task X. Thus, if the number of tasks to be processed in queue a is smaller than the preset threshold and the scheduler moves task X from queue b to queue a, then, because the context dependency exists between task X and queue b and not between task X and queue a, task X fails to be processed for lack of its context dependency when thread A processes it.
That is, the conventional queue scheduling method can be used only in a scenario where there is no association between a queue and a thread. If there is an association between the queues and threads, the pending tasks may not be smoothly adjusted between the queues. As such, there may still be situations where threads are idling, affecting the utilization of computing resources.
Based on this, embodiments of the present application provide a queue scheduling method that may be performed by a computing device or a cluster of computing devices. In particular, the method may be performed by a queue scheduling apparatus running on a computing device or a cluster of computing devices to increase the utilization of the computing resources of the computing device (or the cluster of computing devices). Specifically, for a first thread and a second thread that execute in parallel, a plurality of queues may be set for the first thread and the second thread, respectively. That is, the first thread corresponds to a plurality of queues and the second thread corresponds to a plurality of queues. One queue among the queues corresponding to each thread is a main queue, and the other queues are standby queues. The thread can take tasks to be processed from the main queue for processing. If the queue scheduling device determines that the total number of tasks to be processed in the plurality of queues corresponding to the second thread is smaller than or equal to the first threshold value, the second thread may idle, which would waste computing resources. In this case, the queue scheduler may decouple the first thread from a certain standby queue (hereinafter the first queue) of the first thread. The first queue comprises a plurality of tasks to be processed. The queue scheduling means may then adjust the thread associated with the first queue to the second thread; for example, the main queue or a standby queue of the second thread may be set to the first queue. Thus, the second thread can acquire tasks to be processed from the first queue and process them. Because it has enough tasks to be processed, the second thread is not in an idle state, and the waste of computing resources is avoided. On the other hand, even if a task to be processed added to the first queue generates a dependency relationship with the first queue, the above method associates the entire first queue with the second thread and can maintain the dependency relationship between the tasks to be processed in the first queue and the first queue. That is, by decoupling the first queue from the first thread and adjusting the thread associated with the first queue, the present application migrates the first queue and all pending tasks in the first queue, as a whole, under the second thread. Therefore, on one hand, normal execution of tasks to be processed in the queue can be guaranteed, and on the other hand, thread idling can be avoided, improving the utilization rate of computing resources.
As an example, the queue scheduling apparatus may be disposed on a computing device, for example, on a computer or a server. As another example, the queue scheduling apparatus described above may be deployed on multiple computing devices, such as one or more servers in a distributed processing system. Alternatively, if the queue scheduling apparatus is deployed to multiple computing devices, different computing devices may be used to perform different steps. Alternatively, the computing device or cluster of computing devices deployed with the queue scheduling apparatus and the computing device or cluster of computing devices deployed with the parallel threads (e.g., the aforementioned first thread and second thread) may be the same or different.
For example, in the application scenario shown in fig. 1, the queue scheduling apparatus 100 may specifically include a decoupling module 110 and an association module 120. Wherein the decoupling module 110 is connected with the association module 120. The queue scheduler 100 is in turn coupled to a thread pool 200. At least a first thread 210 and a second thread 220 are included in the thread pool 200. Specifically, the decoupling module 110 may be configured to decouple the first queue from the first thread 210. The association module 120 may be configured to adjust the thread associated with the first queue to the second thread 220. In practical application, the queue scheduling apparatus 100 may be implemented by software or may be implemented by hardware.
As an example of a software functional unit, queue scheduling apparatus 100 may include code running on a computing instance. The computing instance may include at least one of a physical host (computing device), a virtual machine, and a container, among others. Further, there may be one or more such computing instances. For example, the queue scheduling apparatus 100 may include code running on multiple hosts/virtual machines/containers. It should be noted that the multiple hosts/virtual machines/containers for running the code may be distributed in the same region or in different regions. Further, the multiple hosts/virtual machines/containers for running the code may be distributed in the same availability zone (AZ) or in different AZs, each AZ comprising one data center or multiple geographically close data centers. Typically, a region may comprise a plurality of AZs.
Also, multiple hosts/virtual machines/containers for running the code may be distributed in the same virtual private cloud (virtual private cloud, VPC) or in multiple VPCs. In general, one VPC is disposed in one region, and a communication gateway is disposed in each VPC for implementing inter-connection between VPCs in the same region and between VPCs in different regions.
As an example of a hardware functional unit, the queue scheduling apparatus 100 may include at least one computing device, such as a server. Alternatively, the queue scheduling apparatus 100 may be a device implemented using an application-specific integrated circuit (ASIC) or a programmable logic device (PLD). The PLD may be implemented as a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The plurality of computing devices included in the queue scheduling apparatus 100 may be distributed in the same region or may be distributed in different regions. The plurality of computing devices included in the queue scheduling apparatus 100 may be distributed in the same AZ or may be distributed in different AZ. Likewise, the plurality of computing devices included in the queue scheduling apparatus 100 may be distributed in the same VPC or may be distributed in a plurality of VPCs. Wherein the plurality of computing devices may be any combination of computing devices such as servers, ASIC, PLD, CPLD, FPGA, and GAL.
Next, various non-limiting embodiments of the queue scheduling process are described in detail.
Referring to fig. 2, a flow chart of a queue scheduling method in an embodiment of the present application is shown. The method can be applied to the application scenario shown in fig. 1, or to other applicable application scenarios. The following description takes the application scenario shown in fig. 1 as an example. It should be noted that, in the application scenario shown in fig. 1, the queue scheduling apparatus 100 includes not only the decoupling module 110 and the association module 120 but also an obtaining module 130, a queue joining module 140, and a scheduling initiation module 150. Since the obtaining module 130, the queue joining module 140, and the scheduling initiation module 150 are optional, they are represented by dashed lines in fig. 1 and 2 (if present). The function of each module is described in detail in the following embodiments.
The queue scheduling method shown in fig. 2 specifically may include:
s201: the acquisition module 130 acquires a task addition request.
In the embodiment of the present application, if there is a new task to be processed, the obtaining module 130 in the queue scheduling apparatus 100 may obtain the task adding request. The task adding request is used for adding the task to be processed to the threads of the thread pool. In this embodiment, the new task to be processed may be referred to as a first task to be processed. The first task to be processed may be processed by any of the threads in thread pool 200. Alternatively, the first task to be processed may be processed by a specified thread. For example, the task addition request may include an identification of the first thread for adding the first task to be processed to a queue corresponding to the first thread.
Alternatively, the first task to be processed, and other tasks to be processed described later, may be coroutines. Coroutines may also be referred to as lightweight threads (Light Weight Thread, LWT).
After acquiring the first task to be processed, the acquisition module 130 may send a task addition request to the queue joining module 140.
S202: queue entry module 140 entries the first task to be processed into the first queue.
After the first task to be processed is obtained, the queue adding module 140 may add the first task to be processed to a queue corresponding to a thread in the thread pool 200. Wherein the queue to which the first task to be processed joins is referred to as a first target queue.
For convenience of explanation, the queue joining module 140 joins the first task to be processed to the first queue in this embodiment. In step S202, the first queue is a queue corresponding to the first thread 210 in the thread pool 200, for example, may be a main queue of the first thread 210 or may be a standby queue of the first thread 210.
The primary and backup queues are described below.
Specifically, each thread in thread pool 200 may correspond to at least two queues. One of the at least two queues corresponding to each thread is called the main queue, and the others are called standby queues. During thread operation, the thread takes tasks to be processed out of the main queue for processing and does not take tasks out of the standby queues. If no task to be processed exists in the main queue of a certain thread, the queue scheduling device 100 or a control module in the thread pool 200 may adjust the main queue without tasks to be processed to be a standby queue, and adjust a standby queue of the thread that holds tasks to be processed to be the main queue, so that the thread takes tasks to be processed out of the new main queue and processes them.
For example, assume that the first thread 210 corresponds to a first queue, a second queue, and a third queue. The first queue is a standby queue of the first thread 210, the second queue is the main queue of the first thread 210, and the third queue is a standby queue of the first thread 210. During the running of the first thread 210, the first thread 210 may take tasks to be processed out of the second queue, thereby processing the tasks to be processed in the second queue.
With the running of the first thread, the task to be processed in the second queue is gradually consumed until no task to be processed exists in the second queue. If the second queue does not have any tasks to be processed, the queue scheduling apparatus 100 (or the control module in the thread pool 200) may adjust the second queue to be a standby queue of the first thread 210, and adjust a standby queue of the first thread 210, except the second queue, where tasks to be processed are present to be a main queue of the first thread 210. For example, if there are pending tasks in the third queue, the third queue may be adjusted to be the primary queue of the first thread 210. Alternatively, if the third queue has no task to be processed, the first queue has a task to be processed, and the first queue is still a standby queue of the first thread 210, the first queue may be adjusted to be a main queue of the first thread 210. After determining the new active queue, the first thread 210 may fetch the task to be processed from the newly determined active queue, thereby processing the task to be processed. Therefore, by adjusting the role relation between the main queue and the standby queue, the task to be processed in the main queue and the task to be processed in the standby queue can be ensured to be processed on the basis of not changing the queue to which the task to be processed belongs.
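A sketch of this role switch, again using the hypothetical Worker and Queue types from the earlier sketch:

```python
def rotate_primary(worker):
    """If the main queue is empty, demote it to a standby queue and
    promote a standby queue that still holds tasks to be processed."""
    with worker.lock:
        if len(worker.primary) > 0:
            return                        # main queue still has work
        for q in worker.standbys:
            if len(q) > 0:
                worker.standbys.remove(q)
                worker.standbys.append(worker.primary)
                worker.primary = q        # q is the new main queue
                return
```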
Alternatively, the above steps of adjusting the roles of the primary queue and the standby queue may be performed by a queue adjustment module (not shown in fig. 1 and 2) in the queue scheduling apparatus 100, or may be performed by a control module in the thread pool 200.
The main queue and the standby queue are described above, and the principle that the queue joining module 140 joins the first task to be processed to the first target queue is described below.
When adding the first task to be processed to the queue, the queue adding module 140 may first select a first target thread from a plurality of threads in the thread pool 200, and then select the first target queue from a plurality of queues corresponding to the first target thread. In the embodiment corresponding to fig. 2, the first target thread is the first thread 210, and the first target queue is the first queue of the first thread 210.
In some possible implementations, when selecting the first target thread, the queue joining module 140 may count the total number of tasks to be processed in the plurality of queues corresponding to each thread in the thread pool 200, so as to determine the first target thread according to these totals. The total number of tasks to be processed in the queues corresponding to a thread may be simply referred to as the thread's total number of tasks to be processed, and refers to the sum of the numbers of tasks to be processed in all queues corresponding to the thread. For example, if the first thread 210 corresponds to the first queue, the second queue, and the third queue, then the total number of tasks to be processed of the first thread 210 is the sum of the number of tasks to be processed in the first queue, the number in the second queue, and the number in the third queue. Alternatively, the queue joining module 140 may determine the thread with the smallest total number of tasks to be processed as the first target thread. That is, in the implementation corresponding to FIG. 2, the first thread 210 is the thread in the thread pool 200 with the smallest total number of tasks to be processed.
After determining the first thread 210 as the first target thread, the queue joining module 140 may select one queue from a plurality of queues corresponding to the first thread 210 as the first target queue.
For example, in some possible implementations, the queue joining module 140 may determine the first target queue based on the number of tasks to be processed in the main queue of the first thread 210. For example, if the number of tasks to be processed in the main queue of the first thread is less than a second threshold, the queue joining module 140 may add the first task to be processed to the main queue of the first thread 210. Alternatively, if the number of tasks to be processed in the main queue of the first thread is not less than the second threshold, the queue joining module 140 may add the first task to be processed to a standby queue of the first thread 210. The second threshold may be preset and limits the number of tasks to be processed that the main queue of the first thread can accommodate.
Alternatively, in some other possible implementations, the queue joining module 140 may select, as the first target queue, a queue with the smallest number of tasks to be processed from a plurality of queues corresponding to the first thread 210.
Alternatively, in some other possible implementations, the queue joining module 140 may also select, as the first target queue, a queue having a number of tasks to be processed close to an average number of tasks to be processed from a plurality of queues corresponding to the first thread 210.
Alternatively, in some other possible implementations, the queue joining module 140 may also select, as the first target queue, a queue with the smallest number of tasks to be processed from at least one standby queue corresponding to the first thread 210.
Alternatively, in some other possible implementations, the queue joining module 140 may also determine the primary queue of the first thread 210 as the first target queue. In different application scenarios, the queue joining module 140 may determine the first target queue according to different rules, which will not be described herein.
Alternatively, in some other possible implementations, the queue joining module 140 may also determine a queue in which no pending task exists as the first target queue. Thus, the number of tasks to be processed can be balanced across different queues, and the speed at which tasks to be processed are handled is improved.
It will be appreciated that the six implementations described above are by way of example only and illustrate the manner in which the queue joining module 140 determines the first target queue. In some other possible implementations, the first target queue may also be determined in other ways, which are not described herein.
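By way of illustration only, the sketch below combines two of the policies above: choose the thread with the smallest pending total as the first target thread, then its least-loaded queue as the first target queue (hypothetical names, continuing the earlier sketch):

```python
def pick_target(pool):
    """Return (thread, queue) for a new task: the thread with the
    smallest total number of tasks to be processed, then its
    least-loaded queue."""
    target = min(pool, key=lambda w: w.pending())
    with target.lock:
        queue = min([target.primary] + target.standbys, key=len)
    return target, queue
```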
The above steps S201 and S202 describe the process of adding a new task to be processed to a queue corresponding to a thread in the thread pool 200. As a thread runs, the tasks to be processed in the thread's main queue are gradually consumed. If the rate at which tasks to be processed in a queue are consumed is greater than the rate at which tasks are added to that queue, the number of tasks to be processed in the queue gradually decreases. If the numbers of tasks to be processed in a thread's queues keep decreasing, the thread's total number of tasks to be processed may fall to or below the first threshold. In this case, the queue scheduling apparatus 100 may schedule queues of other threads to the thread whose total number of tasks to be processed is less than or equal to the first threshold.
The above procedure is described below taking the example that the total number of tasks to be processed in the plurality of queues corresponding to the second thread 220 is less than or equal to the first threshold, and the queue scheduling apparatus 100 schedules the first queue of the first thread 210 to the second thread 220.
S203: the dispatch initiation module 150 determines that the total number of tasks to be processed in the plurality of queues corresponding to the second thread 220 is less than or equal to the first threshold.
The schedule initiation module 150 in the queue scheduling apparatus 100 may monitor whether the total number of tasks to be processed of each of the plurality of threads in the thread pool 200 is less than or equal to the first threshold. If the schedule initiation module 150 determines that the total number of tasks to be processed of a thread in the thread pool 200 is less than or equal to the first threshold, the schedule initiation module 150 may trigger the decoupling module 110 and the association module 120 in the queue scheduling apparatus 100 to begin scheduling the queues of the threads in the thread pool 200, thereby providing new tasks to be processed for the thread whose total is less than or equal to the first threshold and avoiding that thread idling. In the embodiment corresponding to fig. 2, the second thread 220 is the thread whose total number of tasks to be processed is less than or equal to the first threshold.
The first threshold may be a preset minimum value of the total number of tasks to be processed that triggers queue scheduling. If the total number of tasks to be processed of a thread falls to or below the first threshold, the queue scheduling device 100 schedules a queue that corresponds to another thread and holds tasks to be processed to that thread. Alternatively, the first threshold may be 0.
In some possible implementations, the dispatch initiation module 150 may periodically (or aperiodically) obtain the total number of tasks to be processed for each thread in the thread pool 200 and determine whether the total number of tasks to be processed for each thread is less than or equal to the first threshold, respectively, so as to schedule a new queue for the second thread 220 when the total number of tasks to be processed for the second thread 220 is less than or equal to the first threshold.
Alternatively, in some other possible implementations, the threads in the thread pool 200 may actively count the total number of tasks to be processed in the queues corresponding to the threads, and send a notification message to the scheduling initiation module 150 when the total number of tasks to be processed in the queues corresponding to the threads is less than or equal to the first threshold, so that the scheduling initiation module 150 controls other modules in the queue scheduling apparatus 100 to schedule a new queue in which the tasks to be processed exist for the threads.
Alternatively, in some other possible implementations, the dispatch initiation module 150 may also monitor the execution status of the threads in the thread pool 200. If the dispatch initiation module 150 determines that the second thread 220 is in an idle state, the dispatch initiation module 150 may control other modules in the queue scheduling apparatus 100 to schedule a new queue in which tasks to be processed exist for the second thread 220. In this implementation, the first threshold is equal to 0.
Alternatively, in some other possible implementations, the schedule initiation module 150 may also monitor the number of tasks to be processed in the standby queues of each thread in the thread pool 200. If the schedule initiation module 150 determines that no pending task exists in any of the standby queues of the second thread 220, the schedule initiation module 150 may control other modules in the queue scheduling apparatus 100 to schedule a new queue in which tasks to be processed exist for the second thread 220.
It will be appreciated that the four implementations described above are merely examples, and illustrate the manner in which the schedule initiation module 150 controls other modules in the queue scheduling apparatus 100 to schedule a new queue for the second thread 220 for which there are pending tasks. In some other possible implementations, it may also be determined by other manners that the total number of tasks to be processed of the second thread 220 is less than or equal to the first threshold, which is not described herein. In addition, the queue scheduling apparatus 100 may employ any two or more of the above four implementations simultaneously. For example, a plurality of schedule enabling modules, such as a first schedule enabling module and a second schedule enabling module, may be included in the queue scheduling apparatus 100. Different scheduling initiation modules may control other modules in the queue scheduling apparatus 100 to schedule a new queue for the second thread 220 for which there are tasks to be processed according to different rules.
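As one illustration of the first (periodic polling) variant, a monitor loop might look like the sketch below; the threshold and interval values are assumptions:

```python
import time

FIRST_THRESHOLD = 0  # assumed: reschedule only when a thread runs dry

def monitor(pool, reschedule, interval=0.1):
    """Periodically poll each thread's total number of tasks to be
    processed and trigger queue scheduling for any thread at or below
    the first threshold."""
    while True:
        for w in pool:
            if w.pending() <= FIRST_THRESHOLD:
                reschedule(w)        # decouple + associate (S204/S205)
        time.sleep(interval)
```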
After determining that the total number of tasks to be processed by the second thread 220 is less than or equal to the first threshold, the dispatch initiation module 150 may send a first notification message to the decoupling module 110 to notify the decoupling module 110 to decouple the first queue from the first thread 210.
S204: the decoupling module 110 decouples the first queue from the first thread 210.
To avoid wasting computing resources by the second thread 220 idling, the second thread 220 may be scheduled with a queue in which pending tasks are present. It will be appreciated that the queue scheduled to the second thread 220 is a queue corresponding to a thread other than the second thread 220 before being scheduled to the second thread 220. That is, the queue scheduling apparatus 100 may schedule the queues of the threads other than the second thread 220 to the second thread 220. For this purpose, the decoupling module 110 in the queue scheduling apparatus 100 may first decouple the queue to be scheduled to the second thread 220 from its corresponding original thread. In the embodiment corresponding to fig. 2, the queue to be scheduled to the second thread 220 is the first queue. The first queue corresponds to the first thread 210 before being dispatched to the second thread 220.
Before decoupling the first queue from the first thread 210, the queue scheduling apparatus 100 determines the queue to be scheduled to the second thread 220. Specifically, the queue scheduling apparatus 100 may select a queue that does not belong to the second thread 220 and has at least one task to be processed from the plurality of queues corresponding to the plurality of threads in the thread pool 200. To do so, the queue scheduling apparatus 100 may determine a second target thread from the thread pool and then determine, from the plurality of queues corresponding to the second target thread, the second target queue to be scheduled to the second thread 220. In the embodiment corresponding to fig. 2, the second target thread is the first thread 210 and the second target queue is the first queue.
A method of determining the second target thread and the second target queue is described below.
If, after a target queue holding tasks to be processed is scheduled to the second thread 220, the total number of tasks to be processed of the second target thread were less than or equal to the first threshold, the queue scheduling apparatus 100 would have to perform queue scheduling among the plurality of threads again. Thus, to avoid repeated scheduling of queues, the total number of tasks to be processed of the second target thread should be relatively large. Alternatively, the queue scheduling apparatus 100 may determine a thread whose total number of tasks to be processed is higher than a third threshold as the second target thread, where the third threshold is greater than the first threshold.
After determining that the first thread 210 is the second target thread, the queue scheduling apparatus 100 may select one queue from the plurality of queues corresponding to the first thread 210 as the second target queue to schedule to the second thread 220. The second target queue comprises at least one task to be processed. Optionally, the second target queue may be a standby queue of the first thread 210, so that tasks to be processed continue to be handled in order. That is, the queue scheduling apparatus 100 may schedule a standby queue of the first thread 210 that includes at least one task to be processed to the second thread 220 as a standby queue.
Alternatively, in order to reduce the number of scheduling times, the queue scheduling apparatus 100 may schedule a queue in which there are more tasks to be processed as the second target queue to the second thread 220. Thus, since there are more tasks to be processed in the newly added queue of the second thread 220, the probability of the second thread 220 entering idle is smaller.
In addition, to avoid repeated scheduling of queues, the total number of tasks to be processed by the first thread 210 is still greater than the first threshold after the second target queue is scheduled to the second thread 220.
Continuing the foregoing example, suppose the first thread 210 corresponds to the first queue, the second queue, and the third queue, where the first queue has 2 tasks to be processed, the second queue has 1 task to be processed, the third queue has 4 tasks to be processed, and the first threshold is 4. If the third queue were scheduled to the second thread 220, the total number of tasks to be processed of the first thread 210 after scheduling would be 3, which is smaller than the first threshold 4, so the queue scheduling apparatus 100 would need to schedule queues of other threads to the first thread 210 in turn. If instead the first queue is scheduled to the second thread 220, the total number of tasks to be processed of the first thread 210 after scheduling is 5, which is greater than the first threshold 4, and the queue scheduling apparatus 100 does not need to schedule queues of other threads to the first thread 210. Therefore, the first queue may be determined as the second target queue.
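The arithmetic above amounts to a guard on the donor thread's remaining total. A sketch, with the first threshold taken from the example and helper names assumed:

```python
FIRST_THRESHOLD = 4  # the value used in the example above

def pick_donor_queue(donor):
    """Return a standby queue holding tasks whose removal still leaves
    donor.pending() above the first threshold, or None if no standby
    queue qualifies."""
    with donor.lock:
        for q in donor.standbys:
            if len(q) > 0 and donor.pending() - len(q) > FIRST_THRESHOLD:
                return q
    return None
```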
It will be appreciated that if the number of tasks to be processed in the thread pool 200 is relatively small, there may not be a second target thread for which the total number of tasks to be processed after the second target queue has been scheduled is greater than the first threshold. To this end, the queue scheduling apparatus 100 may select a thread for which a standby queue exists for a task to be processed from the thread pool 200 as the second target thread. For example, the thread with the largest number of tasks to be processed in the standby queue may be selected as the second target thread, or the thread with the largest number of tasks to be processed in the main queue and the tasks to be processed in the standby queue may be selected as the second target thread.
After determining the first queue as the second target queue, the decoupling module 110 may decouple the first queue from the first thread 210 so that the first queue can be re-associated with the second thread 220. Decoupling the first queue from the first thread 210 means releasing the association between the first queue and the first thread 210. After the first queue is decoupled from the first thread 210, the first thread 210 can no longer obtain tasks to be processed from the first queue.
After decoupling the first queue from the first thread 210, the decoupling module 110 may send a second notification message to the association module 120 to inform the association module 120 to adjust the thread associated with the first queue to the second thread 220.
S205: the association module 120 adjusts the thread associated with the first queue to the second thread 220.
After the first queue is decoupled from the first thread 210, the association module 120 may adjust the thread associated with the first queue to the second thread 220. Optionally, if there are tasks to be processed in the main queue of the second thread 220, the association module 120 may set the first queue as a standby queue of the second thread 220. If there is no task to be processed in the main queue of the second thread 220, the association module 120 may set the first queue as the main queue of the second thread 220. It will be appreciated that if no pending task exists in any of the queues corresponding to the second thread 220, the association module 120 sets the first queue as the main queue of the second thread 220.
Thus, since the first queue is associated with the second thread 220, the second thread 220 can fetch the task to be processed from the first queue and process the task, thereby avoiding the second thread 220 being in an idle state. On the other hand, even if the task to be processed added to the first queue generates a dependency relationship with the first queue, the above method associates the entire first queue with the second thread, and can maintain the dependency relationship between the task to be processed in the first queue and the first queue. That is, the present application enables the first queue and all pending tasks in the first queue to be integrally migrated under the second thread by decoupling the first queue from the first thread and adjusting the threads associated with the first queue. Therefore, on one hand, normal execution of tasks to be processed in the queue can be guaranteed, on the other hand, thread idling can be avoided, and the utilization rate of computing resources is improved.
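Tying S203 through S205 together, a hypothetical reschedule routine built from the helpers sketched earlier might read:

```python
def reschedule(idle_worker, pool):
    """Find a donor thread with a sparable standby queue and migrate
    that queue, tasks and all, to the idle-prone thread."""
    for donor in pool:
        if donor is idle_worker:
            continue
        q = pick_donor_queue(donor)               # choose second target queue
        if q is not None:
            migrate_queue(donor, idle_worker, q)  # S204 + S205
            return

# Example wiring with the monitor sketched earlier:
#   monitor(pool, lambda w: reschedule(w, pool))
```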
In this embodiment, the division of modules in the queue scheduling apparatus and the description of their functions are only an example. In other embodiments, the decoupling module 110 may be configured to perform any step of the above queue scheduling method; likewise, the association module 120, the obtaining module 130, the queue joining module 140, and the scheduling initiation module 150 may each be configured to perform any step of the method. That is, the steps that each of these modules is responsible for implementing can be specified as needed, and the full functionality of the queue scheduling apparatus is realized by having the modules implement different steps of the queue scheduling method respectively.
In the embodiment shown in fig. 2, the queue scheduling apparatus (including the decoupling module 110, the association module 120, the obtaining module 130, the queue joining module 140, and the scheduling start module 150) involved in the queue scheduling process may be software configured on a computing device or a cluster of computing devices; by running this software, the computing device or cluster implements the functions of the queue scheduling apparatus. The queue scheduling apparatus involved in the queue scheduling process is described in detail below from the perspective of the hardware implementation.
Fig. 3 shows a schematic structural diagram of a computing device on which the queue scheduling apparatus may be deployed. The computing device may be a computing device in a cloud environment (such as a server), a computing device in an edge environment, or a terminal device, and may specifically be configured to implement the functions of the decoupling module 110, the association module 120, the acquisition module 130, the queue joining module 140, and the scheduling initiation module 150 in the embodiment shown in fig. 2.
As shown in fig. 3, computing device 300 includes a processor 310, a memory 320, a communication interface 330, and a bus 340. The processor 310, the memory 320, and the communication interface 330 communicate with one another through the bus 340. The bus 340 may be a peripheral component interconnect standard (peripheral component interconnect, PCI) bus or an extended industry standard architecture (extended industry standard architecture, EISA) bus, among others. Buses may be classified into address buses, data buses, control buses, and so on. For ease of illustration, only one thick line is shown in fig. 3, but this does not mean that there is only one bus or only one type of bus. The communication interface 330 is used for communication with the outside, for example, for acquiring a task addition request.
The processor 310 may be a central processing unit (central processing unit, CPU), an application-specific integrated circuit (application specific integrated circuit, ASIC), a graphics processing unit (graphics processing unit, GPU), or one or more integrated circuits. The processor 310 may also be an integrated circuit chip with signal processing capability. In implementation, the functions of the various modules in the queue scheduling apparatus may be performed by integrated logic circuits of hardware in the processor 310 or by instructions in the form of software. The processor 310 may also be a general-purpose processor, a digital signal processor (digital signal processing, DSP), a field programmable gate array (field programmable gate array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logical blocks disclosed in the embodiments of the present application. The methods disclosed in the embodiments of the present application may be directly embodied as being performed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or electrically erasable programmable memory, or a register. The storage medium is located in the memory 320, and the processor 310 reads the information in the memory 320 and, in combination with its hardware, performs some or all of the functions of the queue scheduling apparatus.
The memory 320 may include volatile memory (volatile memory), such as random access memory (random access memory, RAM). The memory 320 may also include non-volatile memory (non-volatile memory), such as read-only memory (read-only memory, ROM), flash memory, an HDD, or an SSD.
The memory 320 has stored therein executable code that the processor 310 executes to perform the methods performed by the queue scheduling apparatus described above.
Specifically, in the case where the embodiment shown in fig. 2 is implemented and the decoupling module 110, the association module 120, the acquisition module 130, the queue joining module 140, and the scheduling start module 150 described in that embodiment are implemented by software, the software or program code required to perform the functions of these modules is stored in the memory 320. The interaction of the acquisition module 130 with other devices is implemented through the communication interface 330, and the processor 310 executes the instructions in the memory 320 to perform the method executed by the queue scheduling apparatus.
Fig. 4 illustrates a schematic diagram of a computing device cluster. The computing device cluster 40 shown in fig. 4 includes a plurality of computing devices, and the queue scheduling apparatus may be deployed in a distributed manner on the plurality of computing devices in the computing device cluster 40. As shown in fig. 4, the computing device cluster 40 includes a plurality of computing devices 400, each computing device 400 including a memory 420, a processor 410, a communication interface 430, and a bus 440, where the memory 420, the processor 410, and the communication interface 430 are communicatively connected to one another through the bus 440.
The processor 410 may be a CPU, a GPU, an ASIC, or one or more integrated circuits. The processor 410 may also be an integrated circuit chip with signal processing capability. In implementation, some of the functions of the queue scheduling apparatus may be performed by integrated logic circuits of hardware in the processor 410 or by instructions in the form of software. The processor 410 may also be a DSP, an FPGA, a general-purpose processor, another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform some of the methods, steps, and logical blocks disclosed in the embodiments of the present application. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being performed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or electrically erasable programmable memory, or a register. The storage medium is located in the memory 420; in each computing device 400, the processor 410 reads the information in the memory 420 and, in combination with its hardware, may perform part of the functions of the queue scheduling apparatus.
The memory 420 may include ROM, RAM, a static storage device, a dynamic storage device, or a hard disk (e.g., an SSD or HDD). The memory 420 may store program code, for example, part or all of the program code for implementing the decoupling module 110, the association module 120, the acquisition module 130, the queue joining module 140, or the scheduling initiation module 150. For each computing device 400, when the program code stored in the memory 420 is executed by the processor 410, the processor 410 performs, based on the communication interface 430, a portion of the method executed by the queue scheduling apparatus; for example, some of the computing devices 400 may be used to execute the methods performed by the decoupling module 110, the association module 120, and the scheduling start module 150 described above, while other computing devices 400 may be used to execute the methods performed by the acquisition module 130 and the queue joining module 140 described above. The memory 420 may also store data, such as intermediate data or result data generated by the processor 410 during execution, for example, the identification of the first target queue and the identification of the second target queue described above.
The communication interface 430 in each computing device 400 is used to communicate with the outside, for example, to interact with other computing devices 400.
The bus 440 may be a peripheral component interconnect standard bus, an extended industry standard architecture bus, or the like. For ease of illustration, the bus 440 within each computing device 400 in fig. 4 is represented by only one thick line, but this does not mean that there is only one bus or only one type of bus.
Communication paths are established between the plurality of computing devices 400 through a communication network to realize the functions of the queue scheduling apparatus. Any computing device may be a computing device in a cloud environment (e.g., a server), or a computing device in an edge environment, or a terminal device.
In addition, an embodiment of the present application further provides a computer-readable storage medium having instructions stored therein which, when run on one or more computing devices, cause the one or more computing devices to execute the methods performed by the modules of the queue scheduling apparatus of the foregoing embodiment.
Further, an embodiment of the present application provides a computer program product which, when executed by one or more computing devices, causes the computing devices to perform any of the foregoing queue scheduling methods. The computer program product may be a software installation package, which can be downloaded and executed on a computer whenever any of the foregoing queue scheduling methods needs to be used.
It should be further noted that the above apparatus embodiments are merely illustrative. The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the solutions of the embodiments. In addition, in the drawings of the apparatus embodiments provided in this application, the connection relationships between modules indicate that they have communication connections with one another, which may be specifically implemented as one or more communication buses or signal lines.
From the description of the above embodiments, it will be apparent to those skilled in the art that the present application may be implemented by software plus the necessary general-purpose hardware, or, of course, by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components, and the like. In general, any function performed by a computer program can easily be implemented by corresponding hardware, and the specific hardware structure used to implement the same function can take many forms, such as an analog circuit, a digital circuit, or a dedicated circuit. For the present application, however, a software program implementation is in many cases the preferred embodiment. Based on such an understanding, the technical solution of the present application, or the part of it that contributes to the prior art, may be embodied in the form of a software product stored in a readable storage medium, such as a floppy disk, a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, and including several instructions for causing a computer device (which may be a personal computer, a training device, a network device, or the like) to perform the methods described in the embodiments of the present application.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, training device, or data center to another website, computer, training device, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device, such as a training device or data center, integrating one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state disks (solid state disk, SSD)), among others.

Claims (15)

1. A method for scheduling queues corresponding to threads in a thread pool, wherein the thread pool includes a plurality of threads in an execution state, each thread in the plurality of threads corresponds to at least two queues, and the plurality of threads includes a first thread and a second thread, the method comprising:
decoupling a first queue from the first thread, wherein the first queue is a standby queue of the first thread, and the first queue comprises a plurality of tasks to be processed;
and adjusting the thread associated with the first queue to be the second thread, wherein the total number of tasks to be processed in a plurality of queues corresponding to the second thread is smaller than or equal to a first threshold value.
2. The method of claim 1, wherein prior to decoupling the first queue from the first thread, the method further comprises:
acquiring the total number of tasks to be processed in a plurality of queues corresponding to each thread in the plurality of threads;
and determining that the total number of the tasks to be processed in the plurality of queues corresponding to the second thread is smaller than or equal to a first threshold according to the total number of the tasks to be processed in the plurality of queues corresponding to each thread.
3. The method of claim 1, wherein prior to decoupling the first queue from the first thread, the method further comprises:
determining that the second thread is in an idle state.
4. A method according to any one of claims 1-3, wherein the total number of tasks to be processed in the plurality of queues corresponding to the second thread is zero;
the adjusting the thread associated with the first queue to the second thread includes:
and setting the first queue as a main queue corresponding to the second thread.
5. The method of any of claims 1-4, wherein the main queue of the first thread is a second queue, the standby queue of the first thread further comprises a third queue, the third queue comprises at least one task to be processed, and the method further comprises:
and setting the third queue as the main queue of the first thread in response to the tasks to be processed in the second queue having been processed by the first thread.
6. The method according to any one of claims 1-5, further comprising:
acquiring a task adding request, wherein the task adding request is used for adding a first task to be processed to a queue corresponding to a target thread;
if the number of tasks to be processed in the main queue of the target thread is smaller than a second threshold, adding the first task to be processed to the main queue of the target thread;
and if the number of tasks to be processed in the main queue of the target thread is not smaller than the second threshold, adding the first task to be processed to the standby queue of the target thread.
7. A queue scheduling apparatus, wherein the apparatus is configured to schedule queues corresponding to threads in a thread pool, the thread pool includes a plurality of threads in an execution state, each thread in the plurality of threads corresponds to at least two queues, and the plurality of threads includes a first thread and a second thread, the apparatus comprising:
the decoupling module is used for decoupling a first queue and the first thread, wherein the first queue is a standby queue of the first thread, and the first queue comprises a plurality of tasks to be processed;
and the association module is used for adjusting the thread associated with the first queue into the second thread, and the total number of the tasks to be processed in the queues corresponding to the second thread is smaller than or equal to a first threshold value.
8. The apparatus of claim 7, further comprising a first scheduling start module,
the first scheduling start module being configured to acquire the total number of tasks to be processed in the plurality of queues corresponding to each thread in the plurality of threads, and to determine, according to the total number of tasks to be processed in the plurality of queues corresponding to each thread, that the total number of tasks to be processed in the plurality of queues corresponding to the second thread is smaller than or equal to a first threshold.
9. The apparatus of claim 7, further comprising a second scheduling start module,
the second scheduling start module being configured to determine that the second thread is in an idle state.
10. The apparatus according to any one of claims 7-9, wherein a total number of tasks to be processed in the plurality of queues corresponding to the second thread is zero;
the association module is specifically configured to set the first queue as a main queue corresponding to the second thread.
11. The apparatus of any of claims 7-10, wherein the first thread further corresponds to a second queue and a third queue, the second queue being the main queue of the first thread, the third queue being a standby queue of the first thread, the third queue including at least one task to be processed, the apparatus further comprising a queue adjustment module,
the queue adjustment module being configured to set the third queue as the main queue of the first thread in response to the tasks to be processed in the second queue having been processed by the first thread.
12. The apparatus according to any one of claims 7-11, further comprising an acquisition module and a queue joining module;
the acquisition module is used for acquiring a task adding request, wherein the task adding request is used for adding a first task to be processed to a queue corresponding to a target thread;
the queue joining module being configured to add the first task to be processed to the main queue of the target thread if the number of tasks to be processed in the main queue of the target thread is smaller than a second threshold, and to add the first task to be processed to the standby queue of the target thread if the number of tasks to be processed in the main queue of the target thread is not smaller than the second threshold.
13. A cluster of computing devices, the cluster of computing devices comprising at least one computing device, each computing device comprising a processor and memory:
the memory is used for storing instructions;
the processor is configured to cause the cluster of computing devices to perform the method of any of claims 1-6 in accordance with the instructions.
14. A computer readable storage medium having instructions stored therein which, when run on a computing device, cause the computing device to perform the method of any of claims 1-6.
15. A computer program product containing instructions which, when run on a computing device, cause the computing device to perform the method of any of claims 1-6.
CN202310007056.9A 2023-01-03 2023-01-03 Queue scheduling method and device and related equipment Pending CN116048756A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310007056.9A CN116048756A (en) 2023-01-03 2023-01-03 Queue scheduling method and device and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310007056.9A CN116048756A (en) 2023-01-03 2023-01-03 Queue scheduling method and device and related equipment

Publications (1)

Publication Number Publication Date
CN116048756A true CN116048756A (en) 2023-05-02

Family

ID=86121378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310007056.9A Pending CN116048756A (en) 2023-01-03 2023-01-03 Queue scheduling method and device and related equipment

Country Status (1)

Country Link
CN (1) CN116048756A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117614906A (en) * 2024-01-23 2024-02-27 珠海星云智联科技有限公司 Method, computer device and medium for multi-thread multi-representation oral package
CN117614906B (en) * 2024-01-23 2024-04-19 珠海星云智联科技有限公司 Method, computer device and medium for multi-thread multi-representation oral package


Similar Documents

Publication Publication Date Title
US11941434B2 (en) Task processing method, processing apparatus, and computer system
US20190377604A1 (en) Scalable function as a service platform
US8826291B2 (en) Processing system
US10733019B2 (en) Apparatus and method for data processing
US8924977B2 (en) Sequential cooperation between map and reduce phases to improve data locality
US9772879B2 (en) System and method for isolating I/O execution via compiler and OS support
US20160378570A1 (en) Techniques for Offloading Computational Tasks between Nodes
US20170068574A1 (en) Multiple pools in a multi-core system
CN109697122B (en) Task processing method, device and computer storage medium
US20050080962A1 (en) Hardware management of JAVA threads
WO2018108001A1 (en) System and method to handle events using historical data in serverless systems
WO2022068697A1 (en) Task scheduling method and apparatus
US20200218453A1 (en) Upgrade management method and scheduling node, and storage system
US20120297216A1 (en) Dynamically selecting active polling or timed waits
CN107515781B (en) Deterministic task scheduling and load balancing system based on multiple processors
US10545890B2 (en) Information processing device, information processing method, and program
CN115167996A (en) Scheduling method and device, chip, electronic equipment and storage medium
CN111679900B (en) Task processing method and device
US20240054021A1 (en) Resource scheduling method and server
WO2021212967A1 (en) Task scheduling for distributed data processing
CN103823712A (en) Data flow processing method and device for multi-CPU virtual machine system
CN109766168B (en) Task scheduling method and device, storage medium and computing equipment
CN116048756A (en) Queue scheduling method and device and related equipment
US11301255B2 (en) Method, apparatus, device, and storage medium for performing processing task
US9483317B1 (en) Using multiple central processing unit cores for packet forwarding in virtualized networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination