CN113986497B - Queue scheduling method, device and system based on multi-tenant technology - Google Patents


Info

Publication number
CN113986497B
CN113986497B
Authority
CN
China
Prior art keywords
task
task queue
queue
processed
tenant
Prior art date
Legal status
Active
Application number
CN202111255682.7A
Other languages
Chinese (zh)
Other versions
CN113986497A (en)
Inventor
高升洋
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111255682.7A priority Critical patent/CN113986497B/en
Publication of CN113986497A publication Critical patent/CN113986497A/en
Application granted granted Critical
Publication of CN113986497B publication Critical patent/CN113986497B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Abstract

The present disclosure provides a queue scheduling method, device and system based on multi-tenant technology, relating to the field of big data in computer technology. The method is applied to a scheduling device in a multi-tenant system, the multi-tenant system including the scheduling device and at least one computer cluster, and the method includes the following steps: in response to a task calling request initiated by a target tenant, where the task calling request instructs that a to-be-processed task be processed, determining a task queue corresponding to the to-be-processed task from a preset task queue list, the task queue list being determined according to queue state information of each task queue in the multi-tenant system; and sending a task processing request to the computer cluster corresponding to the target tenant, where the task processing request indicates the task queue corresponding to the to-be-processed task and instructs that the to-be-processed task be processed with the resources of that task queue. The rationality and sufficiency of queue scheduling are thereby improved.

Description

Queue scheduling method, device and system based on multi-tenant technology
Technical Field
The present disclosure relates to the field of big data technology in computer technologies, and in particular, to a queue scheduling method, apparatus, and system based on a multi-tenant technology.
Background
Multi-tenant technology can provide services for a plurality of tenants and involves computer clusters and a scheduling device. In multi-tenant technology, a plurality of task queues are configured for each tenant in advance, and resources are provided for each task queue; therefore, the scheduling device can directly invoke a task queue for a processing task initiated by a tenant, and the processing task is processed based on the resources of that task queue.
In the prior art, because a plurality of task queues are configured for each tenant in advance, when the scheduling device receives a processing task initiated by a tenant, it randomly schedules that task to one of the tenant's task queues; the computer cluster then executes the tasks in that queue to process the task initiated by the tenant.
However, with this random scheduling method, the queues of some tenants become highly loaded while the queues of other tenants sit idle, so the queues cannot be fully utilized; this results in low output efficiency of queue scheduling and prevents to-be-processed tasks from being executed in time.
Disclosure of Invention
The disclosure provides a queue scheduling method, device and system based on a multi-tenant technology for improving resource utilization rate.
According to a first aspect of the present disclosure, a queue scheduling method based on a multi-tenant technology is provided, and the method is applied to a scheduling device in a multi-tenant system, where the multi-tenant system includes the scheduling device and at least one computer cluster, and the method includes:
responding to a task calling request initiated by a target tenant, wherein the task calling request is used for indicating to process a task to be processed, and determining a task queue corresponding to the task to be processed from a preset task queue list; the task queue list comprises at least one task queue, and the task queue list is determined according to queue state information of each task queue in the multi-tenant system;
and sending a task processing request to a computer cluster corresponding to the target tenant, wherein the task processing request is used for indicating a task queue corresponding to the task to be processed, and the task processing request is used for indicating that the task to be processed is processed by adopting resources of the task queue corresponding to the task to be processed.
According to a second aspect of the present disclosure, there is provided a queue scheduling apparatus based on a multi-tenant technology, the apparatus being applied to a scheduling device in a multi-tenant system, the multi-tenant system including the scheduling device and at least one computer cluster, the apparatus including:
a first determining unit, configured to respond to a task scheduling request initiated by a target tenant, where the task scheduling request is used to instruct that a to-be-processed task be processed, and determine a task queue corresponding to the to-be-processed task from a preset task queue list; the task queue list comprises at least one task queue, and the task queue list is determined according to queue state information of each task queue in the multi-tenant system;
and a second sending unit, configured to send a task processing request to a computer cluster corresponding to the target tenant, where the task processing request is used to indicate a task queue corresponding to the to-be-processed task, and the task processing request is used to indicate that resources of the task queue corresponding to the to-be-processed task are used to process the to-be-processed task.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising: a computer program, stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program, execution of the computer program by the at least one processor causing the electronic device to perform the method of the first aspect.
According to a sixth aspect of the present disclosure, there is provided a scheduling apparatus comprising the apparatus according to the second aspect.
According to a seventh aspect of the present disclosure, there is provided a multi-tenant system based on multi-tenant technology, including: a scheduling device and at least one computer cluster, the scheduling device applying the apparatus according to the second aspect.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a scene diagram of a queue scheduling method based on multi-tenant technology, in which an embodiment of the present disclosure may be implemented;
FIG. 2 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 3 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 4 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 5 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 6 is a schematic illustration according to a fifth embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a scheduling apparatus according to an embodiment of the present disclosure;
fig. 8 is a block diagram of an electronic device for implementing a queue scheduling method based on a multi-tenant technology according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Multi-tenant technology involves a computer cluster and a scheduling device, which together provide services for each tenant, for example query services such as a service for querying order data.
The computer clusters may be computer clusters distributed in different regions, such as computer clusters distributed in different cities, or computer clusters distributed in different districts of the same city.
The computer cluster and scheduling device based on multi-tenant technology may be referred to as a multi-tenant system 110. As shown in fig. 1, the multi-tenant system 110 includes computer clusters 1 through N, each of which establishes communication with the scheduling device 120. Wherein N is a positive integer greater than 1.
As shown in fig. 1, if a tenant needs to read data stored in the data platform 130, the data can be implemented based on interaction between the user device 140 of the tenant and the multi-tenant system 110, and interaction between the multi-tenant system 110 and the data platform 130.
For example, the tenant sends a task scheduling request to the multi-tenant system 110 through the user device 140, the task scheduling request instructing that data stored in the data platform 130 be read.
Accordingly, the scheduling device 120 in the multi-tenant system 110 receives the task scheduling request sent by the user device 140, determines the task queue corresponding to the to-be-processed task, and sends a task processing request to the computer cluster corresponding to the tenant, for example to the computer cluster 1, so that the computer cluster 1 acquires the stored data corresponding to the task scheduling request from the data platform 130 and the multi-tenant system 110 feeds the acquired data back to the user device 140.
It should be understood that the above application scenario is only an exemplary illustration and is not to be construed as a limitation on the application scenarios to which the queue scheduling method based on multi-tenant technology of this embodiment may be applied.
For example, in other examples, the data platform may be a component in a multi-tenant system, such as a storage unit in the multi-tenant system.
In the multi-tenant technology, a multi-tenant system configures a plurality of task queues for each tenant in advance, and the number of the task queues can be determined based on the request of the tenant and the resources of the computer cluster.
In the related art, however, as the number of tenants increases while the resources of a computer cluster remain relatively fixed, how to improve the task output efficiency of a multi-tenant system becomes an urgent problem to be solved.
In the prior art, a queue scheduling method based on a multi-tenant technology is a random scheduling method, and includes: the scheduling device randomly schedules the task processing request initiated by the user device to a certain task queue of the tenant, and the corresponding computer cluster executes the operation corresponding to the task processing request, so that the tenant can obtain the data corresponding to the task processing request.
However, by using the random scheduling method, the load of the task queue of a part of tenants is usually high, and the task queues of other parts of tenants are idle, so that the task queues cannot be fully utilized, thereby resulting in low output efficiency of task scheduling and causing the task to be processed not to be executed in time.
In order to avoid at least one of the above technical problems, the inventors of the present disclosure arrived at the following inventive concept: determine a task queue list by combining the queue state information of the task queues of the target tenant initiating the to-be-processed task with the queue state information of the task queues of other tenants, determine the task queue corresponding to the to-be-processed task based on the task queue list, and have the corresponding computer cluster process the to-be-processed task based on the resources of the determined task queue.
Based on this inventive concept, the present disclosure provides a queue scheduling method, device and system based on multi-tenant technology, applied to the field of big data in computer technology, so as to improve resource utilization.
Fig. 2 is a schematic diagram according to a first embodiment of the present disclosure. As shown in fig. 2, the queue scheduling method based on multi-tenant technology provided in this embodiment of the present disclosure includes:
s201: responding to a task calling request initiated by a target tenant, wherein the task calling request is used for indicating to process a task to be processed, and determining a task queue corresponding to the task to be processed from a preset task queue list. The task queue list comprises at least one task queue, and the task queue list is determined according to queue state information of each task queue in the multi-tenant system.
The queue scheduling method based on multi-tenant technology is applied to a scheduling device in a multi-tenant system, and the multi-tenant system includes the scheduling device and at least one computer cluster. The scheduling device may be a processor or a chip, which is not limited in this embodiment.
In connection with the application scenario shown in fig. 1, the multi-tenant system based on the multi-tenant technology may include a scheduling device and a plurality of computer clusters (e.g., N shown in fig. 1).
A task queue may contain tasks that need to be processed, or it may be empty, i.e., contain no tasks to be processed. The queue state information of a task queue refers to information related to the task processing of that queue, such as the load of tasks to be processed by the queue, the quota information of the queue, the priority of the queue, the processor (CPU) utilization of the queue, and so on, which are not exhaustively listed here. That is, from the queue state information of a task queue, it is possible to determine the situation of the tasks the queue still needs to process (e.g., resource consumption), as well as the queue's capacity to take on further tasks (e.g., the number of additional tasks it can process).
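As an illustrative sketch only (not part of the claimed disclosure), the queue state information described above could be modeled as a small per-queue record; all field names below are assumptions chosen for the example:

```python
from dataclasses import dataclass

@dataclass
class QueueState:
    """Hypothetical snapshot of one task queue's state; field names are illustrative."""
    queue_id: str
    tenant_id: str
    pending_load: int       # number of tasks currently waiting in the queue
    quota: int              # quota information: maximum number of concurrent tasks
    priority: int           # scheduling priority of the queue
    cpu_utilization: float  # fraction of the queue's CPU resources in use (0.0 to 1.0)

    def remaining_capacity(self) -> int:
        # How many additional tasks this queue could still accept under its quota.
        return max(self.quota - self.pending_load, 0)
```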
In one example, the scheduling device may construct the task queue list according to the queue status information of each task queue in the multi-tenant system after receiving the task scheduling request initiated by the target tenant.
In another example, the scheduling device may also construct the task queue list according to the queue status information of each task queue in the multi-tenant system based on the preset time interval.
In another example, the scheduling device may also construct the task queue list in real time according to the queue status information of each task queue in the multi-tenant system.
In another example, after the multi-tenant system is initialized, the scheduling device constructs a task queue list according to queue state information of each task queue in the multi-tenant system, and then updates the task queue list based on a time interval or in real time.
It should be noted that, in this embodiment, the determination of the task queue list combines the queue state information of the task queue of the target tenant and the queue state information of the task queues of other tenants, so that the task queues of all tenants are considered comprehensively, and the task queues are fully utilized.
S202: and sending a task processing request to a computer cluster corresponding to the target tenant. The task processing request is used for indicating a task queue corresponding to the task to be processed, and the task processing request is used for indicating that the task to be processed is processed by adopting resources of the task queue corresponding to the task to be processed.
In this embodiment, the computer cluster corresponding to the target tenant may be determined by the scheduling device based on the deployment location of each computer cluster, the load of each computer cluster, and the like; of course, the scheduling device may also allocate a computer cluster, or even a specific computer in a computer cluster, to each tenant in the multi-tenant system in advance, which is not limited in this embodiment.
For example, in the application scenario shown in fig. 1, the scheduling device may determine, based on the deployment locations of the computer clusters 1 to N, that the computer cluster corresponding to the target tenant is computer cluster 1, and send a task processing request to computer cluster 1. Accordingly, computer cluster 1 obtains the data corresponding to the task scheduling request from the data platform based on the task processing request.
Based on the above analysis, an embodiment of the present disclosure provides a queue scheduling method based on multi-tenant technology. The method is applied to a scheduling device in a multi-tenant system, the multi-tenant system includes the scheduling device and at least one computer cluster, and the method includes: responding to a task calling request initiated by a target tenant, where the task calling request instructs that a to-be-processed task be processed, and determining a task queue corresponding to the to-be-processed task from a preset task queue list, where the task queue list includes at least one task queue and is determined according to the queue state information of each task queue in the multi-tenant system; and sending a task processing request to the computer cluster corresponding to the target tenant, where the task processing request indicates the task queue corresponding to the to-be-processed task and instructs that the to-be-processed task be processed with the resources of that task queue. In this embodiment, the technical features of determining the task queue list according to the queue state information of each task queue in the multi-tenant system, determining the task queue corresponding to the to-be-processed task from that list, and processing the to-be-processed task with the resources of the determined task queue allow the task queues of all tenants to be considered comprehensively, so that queue resources are fully and reasonably utilized.
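Purely as a hedged illustration of steps S201-S202 (the helper objects, dictionary keys and the submit() call are assumptions, not the patented interface), the dispatch flow might be sketched as:

```python
def dispatch(task, tenant_id, task_queue_list, clusters):
    """Sketch of S201-S202: pick a queue from the pre-built task queue list,
    then ask the target tenant's computer cluster to process the task with
    that queue's resources. Keys and objects here are illustrative."""
    # S201: the task queue list was built from the queue state information of
    # every queue in the multi-tenant system; prefer a queue of the requesting
    # tenant if the list contains one, otherwise fall back to the first entry.
    own = [q for q in task_queue_list if q["tenant_id"] == tenant_id]
    chosen = own[0] if own else task_queue_list[0]

    # S202: the task processing request names the chosen queue so the cluster
    # processes the task using that queue's resources.
    clusters[tenant_id].submit({"task": task, "queue_id": chosen["queue_id"]})
    return chosen["queue_id"]
```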
Fig. 3 is a schematic diagram according to a second embodiment of the present disclosure, and as shown in fig. 3, the method for queue scheduling based on multi-tenant technology provided in the embodiment of the present disclosure includes:
s301: and responding to a task calling request initiated by a target tenant, acquiring queue state information of a task queue corresponding to the target tenant, and acquiring queue state information of the task queue of each other tenant except the target tenant in the multi-tenant system.
The queue scheduling method based on multi-tenant technology is applied to a scheduling device in a multi-tenant system, and the multi-tenant system includes the scheduling device and at least one computer cluster.
For content in this embodiment that is the same as in the above embodiments, such as the execution body of this embodiment and the queue state information of the task queues in this step, details are not repeated here.
The multi-tenant system may include a plurality of tenants, and the plurality of tenants includes a target tenant and other tenants, and the number of the other tenants may be one or more, which is not limited in this embodiment.
If the number of other tenants is one, the step can be understood as follows: the queue state information of the task queue corresponding to the target tenant is obtained, and the queue state information of the task queue of the other tenant is obtained.
If the number of other tenants is multiple, the step can be understood as follows: the queue state information of the task queue corresponding to the target tenant is obtained, and the queue state information of the task queue of each other tenant in the plurality of other tenants is also obtained.
S302: and determining candidate task queues from the task queue corresponding to the target tenant and the task queues corresponding to other tenants according to the queue state information of the task queue of the target tenant and the queue state information of the task queues of other tenants.
The candidate task queues may include a task queue corresponding to the target tenant; they may also include task queues corresponding to one or more other tenants; or they may include both a task queue corresponding to the target tenant and task queues corresponding to one or more other tenants.
In some embodiments, S302 may include the steps of:
the first step is as follows: and judging whether the queue state information of the task queue of the target tenant meets the resource requirement for processing the task to be processed, if so, executing the second step, and if not, executing the third step.
The second step: and determining candidate task queues from the task queues corresponding to the target tenants.
The third step: and determining candidate task queues from the task queues corresponding to other tenants.
That is, a candidate task queue is preferentially determined from the task queues of the target tenant; only when the target tenant's task queues cannot complete the task scheduling request, or would complete it inefficiently, are candidate task queues determined from the task queues of other tenants. This gives priority to the target tenant's own resources, avoids occupying the resources of other tenants, satisfies the task scheduling requirements of the target tenant and of other tenants as far as possible, avoids interference during task scheduling, and improves the reliability of the task scheduling process.
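As a non-limiting sketch of the three steps above (the predicate and the queue representation are assumed for illustration):

```python
def pick_candidate_queues(target_queues, other_queues, meets_requirement):
    """Sketch: prefer the target tenant's own task queues when their queue state
    meets the resource requirement of the pending task; otherwise fall back to
    other tenants' queues. `meets_requirement` is an assumed predicate."""
    own = [q for q in target_queues if meets_requirement(q)]
    if own:
        # Second step: candidates come from the target tenant's queues,
        # so other tenants' resources are not occupied unnecessarily.
        return own
    # Third step: borrow candidates from other tenants' task queues.
    return list(other_queues)
```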
In some embodiments, the second step may comprise the sub-steps of:
the first substep: and determining the resource utilization rate of each task queue in the task queues corresponding to the target tenants.
The second sub-step: and determining candidate task queues from the task queues corresponding to the target tenants according to the utilization rate of each resource.
For example, the highest value among the resource utilization rates (which may include one or more of CPU utilization, memory utilization and application utilization) is determined, and the task queue with the highest resource utilization rate is determined as the candidate task queue.
For another example, the task queues corresponding to the target tenant are sorted in descending order of resource utilization, and the task queues whose resource utilization exceeds a preset threshold (which may be set based on demand, history, experiments, etc.) are determined as candidate task queues.
In this embodiment, determining candidate task queues based on resource utilization can improve resource utilization and improve the reliability of queue scheduling.
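A minimal sketch of the utilization-based sub-steps, assuming each queue carries a single 'utilization' value and an illustrative 0.8 threshold:

```python
def candidates_by_utilization(target_queues, threshold=0.8):
    """Sketch: rank the target tenant's queues by resource utilization and keep
    those above a preset threshold; if none exceeds it, keep the single queue
    with the highest utilization (mirroring the first example above)."""
    ranked = sorted(target_queues, key=lambda q: q["utilization"], reverse=True)
    above = [q for q in ranked if q["utilization"] > threshold]
    return above or ranked[:1]
```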
Based on the above analysis, candidate task queues may be determined from the dimension of resource utilization. In other embodiments, the task queues may be divided into common task queues and core task queues, where the core task queues process tasks more efficiently than the common task queues. In that case, the second step may include the following sub-steps:
the first sub-step: and acquiring the grade attribute information of the task to be processed.
The level attribute information of the to-be-processed task represents the importance of the task, that is, the priority with which it needs to be processed. Generally, the higher the priority represented by the level attribute information, the more important the to-be-processed task and the more likely it needs to be processed first.
In other words, tasks may be divided into common tasks and core tasks: the level attribute information of a common task represents a lower priority than that of a core task. A common task can be processed according to the ordinary processing schedule, while a core task generally needs to be processed quickly and efficiently.
The second sub-step: and if the level attribute information represents that the task to be processed is a task with high priority, determining a candidate task queue from the core task queue corresponding to the target tenant.
The third substep: and if the level attribute information represents that the task to be processed is a task with low priority, determining a candidate task queue from the common task queue corresponding to the target tenant.
For example, if the task to be processed is a normal task, a candidate task queue may be determined from a normal task queue corresponding to the target tenant, and if the task to be processed is a core task, a candidate task queue may be determined from a core task queue corresponding to the target tenant.
If the number of the ordinary task queues and/or the core task queues is multiple, candidate task queues can be determined from the ordinary task queues and/or the core task queues based on resource utilization rate and the like.
Conversely, for example, if the number of core task queues is one, the one core task queue may be determined as a candidate task queue.
It should be noted that, in this embodiment, determining candidate task queues by combining the level attribute information of the to-be-processed task with the core task queues and the common task queues allows tasks with different level attribute information to be handled flexibly, improves the flexibility and diversity of task scheduling, and meets the processing-efficiency requirements of tasks with different level attribute information.
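A brief sketch of routing by the task's level attribute information (the 'high'/'low' labels are illustrative):

```python
def candidates_by_task_level(task_level, core_queues, common_queues):
    """Sketch: high-priority (core) tasks draw candidates from the core task
    queues, low-priority (common) tasks from the common task queues."""
    pool = core_queues if task_level == "high" else common_queues
    # With several queues in the pool, the resource-utilization rule above can
    # narrow the choice; with a single queue it is used directly.
    return list(pool)
```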
In other embodiments, the task queues may be divided into common task queues and core task queues, and each core task queue has a corresponding time interval. That is, for any core task queue, the times at which it processes tasks are relatively concentrated; this period is referred to as the time interval during which the core task queue processes tasks, and at other times outside this interval the core task queue is generally in an idle state.
In this embodiment, the task scheduling request includes a scheduling time, so as to determine a candidate task queue by combining the scheduling time and each time interval. For example:
the coverage relation between the scheduling time and each time interval is determined. If the scheduling time is covered by the time interval of a certain core task queue, i.e., the scheduling time falls within that interval, then because the core task queue usually needs to process its own tasks during that interval, executing the task scheduling request on that queue is not considered.
If the scheduling time is not covered by the time interval of a certain core task queue, i.e., the scheduling time falls outside that interval, then because the core task queue does not need to process tasks outside its interval and is in an idle state, that core task queue can be determined as a candidate task queue.
It should be noted that, in this embodiment, determining candidate task queues by combining the scheduling time with each time interval avoids the situation in which core task queues sit idle while other task queues are congested, and improves the reasonable utilization of resources.
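A hedged sketch of the time-interval rule, assuming each core task queue carries a (start, end) pair of datetimes for its busy interval:

```python
from datetime import datetime

def borrowable_core_queues(core_queues, scheduling_time: datetime):
    """Sketch: a core task queue is a candidate only when the scheduling time
    falls outside the interval in which it processes its own tasks."""
    candidates = []
    for q in core_queues:
        start, end = q["busy_interval"]
        if not (start <= scheduling_time <= end):
            # Outside its concentrated processing window the core queue is
            # normally idle, so it may take the pending task.
            candidates.append(q)
    return candidates
```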
S303: and constructing a task queue list according to the candidate task queues.
As can be seen from the above analysis, the task queue list may include both the task queue corresponding to the target tenant and the task queues corresponding to one or more other tenants. Therefore, the task queues of the target tenant and of other tenants can be scheduled across tenants, which gives queue scheduling flexibility and diversity and improves the efficiency with which task scheduling requests are processed based on the task queue list.
S304: and determining a task queue corresponding to the task to be processed from the task queue list according to the inclusion relationship between the task queue list and each tenant. The inclusion relation represents whether the task queue corresponding to each tenant exists in the task queue list.
Based on the above analysis, the task queue list is determined based on the task queues of all tenants in the multi-tenant system, and therefore, in this embodiment, the task queue corresponding to the task to be processed may be determined from the task queue list based on the inclusion relationship.
Because the task queue list is determined based on the task queues of all the tenants under the multi-tenant system, the determined task queue corresponding to the task to be processed has wider selectivity, and the technical effect of fully utilizing queue resources can be improved.
In some embodiments, S304 may be implemented by different embodiments, as in the first embodiment, S304 may include the steps of:
the first step is as follows: and judging whether the task queue list comprises a queue of a target tenant, if so, executing the second step, and if not, executing the third step.
The second step: and determining the task queue corresponding to the target tenant as the task queue corresponding to the task to be processed.
Similarly to the above, the task queue corresponding to the target tenant in the task queue list is preferentially determined as the task queue corresponding to the to-be-processed task, which avoids occupying the resources of other tenants, satisfies the task scheduling requirements of the target tenant and of other tenants as far as possible, avoids interference during task scheduling, and improves the reliability of task scheduling.
In some embodiments, if the task queue list contains multiple task queues corresponding to the target tenant, then, as can be seen from the above analysis, these task queues may be sorted in the list based on resource utilization. If the sorting is descending, the first task queue corresponding to the target tenant in the list may be determined as the task queue corresponding to the to-be-processed task; if the sorting is ascending, the last such task queue in the list may be determined as the task queue corresponding to the to-be-processed task.
The third step: and if the task queue list comprises the task queue corresponding to at least one other tenant, acquiring the information of the task to be executed of each task queue in the task queue list.
The fourth step: And if, according to the information of each task to be executed, the task queue list contains a task queue with no task to be processed, determining that task queue as the task queue corresponding to the task to be processed.
The to-be-executed task information of each task queue in the task queue list can be understood as follows: any task queue in the list may contain tasks that need to be executed, or it may be an idle task queue, i.e., one with no tasks to execute.
In this embodiment, using the to-be-executed task information to determine a task queue with no tasks to process as the task queue corresponding to the to-be-processed task improves the efficiency of processing the to-be-processed task.
In some embodiments, the fourth step may comprise the sub-steps of:
the first substep: and judging whether the queue task in the task queue list is a task queue of the task being processed or not according to the information of each task to be executed, if so, executing the second substep, and otherwise, executing the third substep.
The second substep: and acquiring the residual resources of each task queue in the task queues for processing the tasks, and determining the task queue corresponding to the largest residual resource as the task queue corresponding to the task to be processed.
Determining the task queue with the largest remaining resources as the task queue corresponding to the to-be-processed task satisfies, as far as possible, the efficiency requirement for processing the to-be-processed task.
As can be known from the above analysis, in some embodiments, when the task queue list is created, the task queue list may be created in an ascending manner or a descending manner, and if the task queues in the task queue list are sorted in the ascending manner based on the remaining resources, a first task queue in the task queue list may be determined as a task queue corresponding to the task to be processed; if the task queues in the task queue list are sorted based on the descending manner of the remaining resources, the last task queue in the task queue list may be determined as the task queue corresponding to the task to be processed.
The third substep: and determining the task queue without the task being processed as the task queue corresponding to the task to be processed.
If the number of the task queues without the task being processed is multiple, the remaining resources of each task queue without the task being processed can be determined, and the task queue without the task being processed, which has the largest remaining resources, is determined as the task queue corresponding to the task to be processed.
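The selection logic of this first implementation of S304 can be sketched as follows (the queue keys are assumptions and the list is assumed non-empty):

```python
def select_from_queue_list(task_queue_list, target_tenant):
    """Sketch: prefer a queue of the target tenant in the list; otherwise an idle
    queue; otherwise the busy queue with the largest remaining resources."""
    own = [q for q in task_queue_list if q["tenant_id"] == target_tenant]
    if own:
        return own[0]
    idle = [q for q in task_queue_list if not q["pending_tasks"]]
    if idle:
        # Among several idle queues, take the one with the most remaining resources.
        return max(idle, key=lambda q: q["remaining"])
    # Every queue is processing tasks: take the largest remaining resources.
    return max(task_queue_list, key=lambda q: q["remaining"])
```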
In some embodiments, when the task queue determined for the to-be-processed task is a task queue of another tenant, the to-be-processed task may be marked, for example its priority may be downgraded, and the marked task is then scheduled to the determined task queue. This avoids reducing the efficiency with which the determined task queue processes the tasks of its own tenant, and improves the stable and reliable operation of the multi-tenant system.
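A small sketch of this cross-tenant safeguard (the field names and the downgrade rule are illustrative only):

```python
def schedule_on_foreign_queue(task, queue, target_tenant):
    """Sketch: when the chosen queue belongs to another tenant, mark the task and
    lower its priority so the owning tenant's own tasks are not slowed down."""
    if queue["tenant_id"] != target_tenant:
        task = dict(task, borrowed=True, priority=max(task["priority"] - 1, 0))
    queue.setdefault("pending_tasks", []).append(task)
    return task
```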
In other embodiments, in the second implementation of S304, and in combination with the above analysis, the task queues may be divided into common task queues and core task queues; accordingly, S304 may include the following steps:
the first step is as follows: and judging whether the task queue list comprises a core task queue, if so, executing the second step, and if not, executing the sixth step.
The second step is as follows: and acquiring the information of the tasks to be executed of each core task queue in the task queue list.
The third step: and judging whether each core task queue of the task queue list comprises a core task queue which does not need to process the task or not according to the information of each task to be executed, if so, executing the fourth step, and if not, executing the fifth step.
The fourth step: and determining the core task queue which does not need to process the task as a task queue corresponding to the task to be processed.
The fifth step: and acquiring the residual resources of each task queue in the core task queues of the processing tasks, and determining the core task queue corresponding to the maximum residual resources as the core task queue for processing the tasks to be processed.
Similarly, determining the core task queue with the largest remaining resources as the task queue corresponding to the to-be-processed task satisfies, as far as possible, the efficiency requirement for processing the to-be-processed task.
A sixth step: and determining a task queue corresponding to the task to be processed from the common task queue in the task queue list.
The implementation principle of the sixth step may refer to the foregoing embodiments, and details are not described here.
It should be noted that, in this embodiment, preferentially determining the task queue corresponding to the to-be-processed task from the core task queues in the task queue list improves the processing efficiency of the to-be-processed task.
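The core-first selection of this second implementation of S304 might look roughly like this (the queue keys are assumptions; see the earlier sketch for the common-queue fallback):

```python
def select_core_first(task_queue_list):
    """Sketch: if the list holds core task queues, use an idle one, or the busy
    one with the most remaining resources; otherwise fall back to the common
    task queues (sixth step)."""
    core = [q for q in task_queue_list if q["is_core"]]
    if core:
        idle = [q for q in core if not q["pending_tasks"]]
        if idle:
            return idle[0]
        return max(core, key=lambda q: q["remaining"])
    # Sixth step: no core queue in the list, choose among the common queues,
    # e.g. by remaining resources as in the first implementation.
    common = [q for q in task_queue_list if not q["is_core"]]
    return max(common, key=lambda q: q["remaining"])
```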
In other embodiments, the two implementations of S304 described above may be combined into one. For example:
after determining, based on the first implementation of S304, that the first task queue in the task queue list is the task queue corresponding to the to-be-processed task, the second implementation of S304 may be further applied, such as judging whether that first task queue is a core task queue.
And in some embodiments, the first and second implementations of S304 may be combined by means of a preprocessing method. For example:
through the first implementation of S304, the first task queue in the task queue list is determined as the task queue corresponding to the to-be-processed task, the load of that first task queue is obtained, and it is judged whether the load reaches a preset load threshold (e.g., 90%). If the load of the first task queue does not reach the threshold, the to-be-processed task is scheduled to the first task queue; if the load reaches the threshold, it is judged whether the first task queue is a core task queue, and the subsequent operations are executed.
The above preprocessing based on the load of the first task queue only illustrates one possible preprocessing manner and should not be understood as a limitation. Preprocessing may also be performed based on the frequency and number of task scheduling requests submitted by the target tenant, and the like.
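Combining the two implementations via the load-based preprocessing could be sketched as below, reusing the two selection sketches given earlier (the 0.9 threshold and the 'load' key are assumptions):

```python
def preprocess_then_select(task_queue_list, target_tenant, load_threshold=0.9):
    """Sketch: take the queue the first implementation would choose; if its load
    is below the threshold, schedule there; otherwise re-select core-first."""
    first_choice = select_from_queue_list(task_queue_list, target_tenant)
    if first_choice["load"] < load_threshold:
        return first_choice
    # The preferred queue is near saturation, so apply the second implementation.
    return select_core_first(task_queue_list)
```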
S305: and sending a task processing request to a computer cluster corresponding to the target tenant. The task processing request is used for indicating a task queue corresponding to the task to be processed, and the task processing request is used for indicating that the task to be processed is processed by adopting resources of the task queue corresponding to the task to be processed.
In some embodiments, because the number of to-be-processed tasks submitted by each tenant differs, the way each tenant uses its task queues also differs. To prevent a single tenant from submitting a large number of to-be-processed tasks that occupy an excessive share of queue resources and block the processing of other to-be-processed tasks (particularly tasks whose level attribute information represents a high priority), the scheduling device can control the task-queue resources available to each tenant's to-be-processed tasks. For example, the scheduling device may limit the number of to-be-processed tasks a tenant may submit; it may limit the number of common tasks running in the task queues; and it may downgrade a to-be-processed task with a large task amount, i.e., process it after to-be-processed tasks with relatively small task amounts.
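One possible, purely illustrative form of such per-tenant control (all limits, keys and the size cutoff are assumptions):

```python
def admit_request(tenant_stats, tenant_id, task, max_requests=100, max_running=10):
    """Sketch of per-tenant flow control: cap the number of submitted requests
    and running tasks, and downgrade very large tasks so smaller ones run first."""
    stats = tenant_stats.setdefault(tenant_id, {"submitted": 0, "running": 0})
    if stats["submitted"] >= max_requests or stats["running"] >= max_running:
        return False, task                 # reject or delay the request
    if task.get("task_amount", 0) > 1_000_000:
        task = dict(task, priority=0)      # degrade tasks with a large task amount
    stats["submitted"] += 1
    return True, task
```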
Fig. 4 is a schematic diagram according to a third embodiment of the present disclosure, and as shown in fig. 4, a queue scheduling apparatus 400 based on a multi-tenant technology provided in the embodiment of the present disclosure is applied to a scheduling device in a multi-tenant system, where the multi-tenant system includes the scheduling device and at least one computer cluster, and includes:
a first determining unit 401, configured to respond to a task invoking request initiated by a target tenant, where the task invoking request is used to instruct to process a to-be-processed task, and determine a task queue corresponding to the to-be-processed task from a preset task queue list; the task queue list comprises at least one task queue, and the task queue list is determined according to queue state information of each task queue in the multi-tenant system.
A second sending unit 402, configured to send a task processing request to a computer cluster corresponding to a target tenant, where the task processing request is used to indicate a task queue corresponding to a to-be-processed task, and the task processing request is used to indicate that a resource of the task queue corresponding to the to-be-processed task is used to process the to-be-processed task.
Fig. 5 is a schematic diagram according to a fourth embodiment of the present disclosure, and as shown in fig. 5, a queue scheduling apparatus 500 based on a multi-tenant technology provided in the embodiment of the present disclosure is applied to a scheduling device in a multi-tenant system, where the multi-tenant system includes the scheduling device and at least one computer cluster, and includes:
a first obtaining unit 501, configured to obtain queue state information of a task queue corresponding to a target tenant in response to a task invoking request initiated by the target tenant.
A second obtaining unit 502, configured to obtain queue state information of a task queue of each tenant other than the target tenant in the multi-tenant system.
A second determining unit 503, configured to determine a candidate task queue from the task queue corresponding to the target tenant and the task queues corresponding to other tenants according to the queue state information of the task queue of the target tenant and the queue state information of the task queues of other tenants.
As can be seen in fig. 5, in some embodiments, the second determining unit 503 includes:
a sixth determining sub-unit 5031, configured to determine, if it is determined that the queue state information of the task queue of the target tenant meets the resource requirement for processing the task to be processed, a candidate task queue from the task queue corresponding to the target tenant.
In some embodiments, if it is determined that the queue state information of the task queue of the target tenant meets the resource requirement for processing the task to be processed, the sixth determining sub-unit 5031 includes:
the first determining module is used for determining the resource utilization rate of each task queue in the task queues corresponding to the target tenant if the queue state information of the task queues of the target tenant is determined to meet the resource requirement for processing the tasks to be processed.
And the second determining module is used for determining candidate task queues from the task queues corresponding to the target tenants according to the resource utilization rates.
In some embodiments, the task queue corresponding to each tenant includes a common task queue and a core task queue, and the efficiency of processing tasks by the core task queue is higher than that of the common task queue; if it is determined that the queue state information of the task queue of the target tenant meets the resource requirement for processing the task to be processed, the sixth determining subunit 5031 includes:
and the acquisition module is used for acquiring the grade attribute information of the task to be processed.
And the third determining module is used for determining a candidate task queue from the core task queue corresponding to the target tenant if the level attribute information represents that the task to be processed is a task with high priority.
And the fourth determining module is used for determining a candidate task queue from the common task queue corresponding to the target tenant if the level attribute information represents that the task to be processed is a low-priority task.
A seventh determining sub-unit 5032, configured to determine, if it is determined that the queue state information of the task queue of the target tenant does not meet the resource requirement for processing the to-be-processed task, a candidate task queue from task queues corresponding to other tenants.
As can be seen from fig. 5, in some embodiments, the task queue corresponding to each tenant includes a common task queue and a core task queue, and the efficiency of processing tasks by the core task queue is higher than that of the common task queue; the task scheduling request also includes a scheduling time; the second determining unit 503 includes:
the fifth obtaining sub-unit 5033 is configured to obtain a time interval when each core task queue processes a task.
An eighth determining unit 5034, configured to determine, if the scheduling time is not in the at least one time interval, a candidate core task queue from the core task queues corresponding to the at least one time interval.
The constructing unit 504 is configured to construct a task queue list according to each candidate task queue.
A first determining unit 505, configured to respond to a task invoking request initiated by a target tenant, where the task invoking request is used to instruct to process a to-be-processed task, and determine a task queue corresponding to the to-be-processed task from a preset task queue list; the task queue list comprises at least one task queue, and the task queue list is determined according to queue state information of each task queue in the multi-tenant system.
In some embodiments, the first determining unit 505 is configured to determine the task queue corresponding to the task to be processed from the task queue list according to an inclusion relationship between the task queue list and each tenant, where the inclusion relationship indicates whether the task queue corresponding to each tenant exists in the task queue list.
As can be seen in fig. 5, in some embodiments, the first determining unit 505 includes:
the first obtaining sub-unit 5051 is configured to obtain to-be-executed task information of each task queue in the task queue list if the task queue list includes a task queue corresponding to at least one other tenant and does not include a task queue corresponding to a target tenant.
The first determining subunit 5052 is configured to determine, if the task queue of the task queue list includes a task queue of a task that does not need to be processed according to the information of each task to be executed, that the task queue of the task that does not need to be processed is a task queue corresponding to the task to be processed.
The second obtaining sub-unit 5053 is configured to obtain remaining resources of each task queue in the task queue of the processing task if it is determined that all the queue tasks in the task queue list are task queues of the processing task according to the information of each task to be executed.
A second determining sub-unit 5054 is configured to determine the task queue corresponding to the largest remaining resource as the task queue corresponding to the task to be processed.
In some embodiments, the first determining unit 505 is configured to determine, if the task queue list includes a task queue corresponding to a target tenant, that the task queue corresponding to the target tenant is a task queue corresponding to a to-be-processed task.
In some embodiments, the task queues in the task queue list include a common task queue and a core task queue, and the efficiency of processing tasks by the core task queue is higher than that of the common task queue; as can be seen from fig. 5, the first determining unit 505 includes:
the third obtaining sub-unit 5055 is configured to obtain to-be-executed task information of each core task queue in the task queue list if the task queue list has a core task queue.
A third determining subunit 5056 is configured to determine, if each core task queue of the task queue list includes a core task queue that does not need to process a task according to the information of each to-be-executed task, that the core task queue that does not need to process a task is a task queue corresponding to the to-be-processed task.
A fourth determining subunit 5057 is configured to determine, if the task queue list does not have a core task queue, a task queue corresponding to the task to be processed based on the common task queue in the task queue list.
The fourth obtaining subunit 5058 is configured to obtain the remaining resources of each task queue in the core task queues of the processing task, if it is determined that the core task queues of the task queue list are all the core task queues of the processing task according to the information of each to-be-executed task.
A fifth determining subunit 5059 is configured to determine the core task queue corresponding to the largest remaining resource as the core task queue for processing the task to be processed.
A second sending unit 506, configured to send a task processing request to the computer cluster corresponding to the target tenant, where the task processing request is used to indicate a task queue corresponding to the to-be-processed task, and the task processing request is used to indicate that the to-be-processed task is processed by using resources of the task queue corresponding to the to-be-processed task.
Fig. 6 is a schematic diagram according to a fifth embodiment of the present disclosure, and as shown in fig. 6, an electronic device 600 in the present disclosure may include: a processor 601 and a memory 602.
A memory 602 for storing programs; the Memory 602 may include a volatile Memory (RAM), such as a Static Random Access Memory (SRAM), a Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), and the like; the memory may also comprise a non-volatile memory, such as a flash memory. The memory 602 is used to store computer programs (e.g., application programs, functional modules, etc. that implement the methods described above), computer instructions, etc., which may be stored in partitions in one or more of the memories 602. And the above-described computer programs, computer instructions, data, etc. may be called by the processor 601.
A processor 601, configured to execute the computer program stored in the memory 602 to implement the steps in the method according to the foregoing embodiments.
Reference may be made in particular to the description relating to the preceding method embodiment.
The processor 601 and the memory 602 may be separate structures or may be an integrated structure integrated together. When the processor 601 and the memory 602 are separate structures, the memory 602 and the processor 601 may be coupled by a bus 603.
The electronic device of this embodiment may execute the technical solution in the method, and the specific implementation process and the technical principle are the same, which are not described herein again.
According to another aspect of the embodiments of the present disclosure, there is provided a scheduling apparatus, including: the queue scheduling apparatus based on multi-tenant technology as in any one of the above embodiments.
In some embodiments, as shown in fig. 7, the scheduling apparatus 700 includes:
the flow control module 701 is configured to receive task scheduling requests initiated by each tenant, and perform flow control processing on each task scheduling request.
Wherein the flow control processing includes, as shown in fig. 7: request number limiting, run number limiting, and task type limiting. Request number limiting is the process of limiting the number of task scheduling requests from a tenant. Run number limiting is the process of limiting the number of the tenant's running tasks. Task type limiting refers to the downgrading of tasks with a relatively large task amount.
A scheduling policy module 702, configured to incorporate, as shown in fig. 7: real-time resource amount, task queue priority, cross-tenant task queue calling, time-period task queue calling, and task queue list construction.
The real-time resource amount may be the resource usage rate in the above embodiments. The task queue priority may be the priority of the common task queue and the core task queue in the above embodiments. The cross-tenant task queue calling may be constructing the task queue list based on the queue state information of the task queues of the other tenants, as in the above embodiments. The time-period task queue calling may be constructing the task queue list for each time interval based on the scheduling time of the task to be processed, as in the above embodiments.
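For illustration, the following Python sketch combines these factors into a simple task queue list construction; the QueueState fields, the usage threshold, and the hour-based time intervals are assumptions for the example rather than the disclosed implementation.

    # Minimal sketch of how the scheduling policy module might build a task queue
    # list from queue state information, mirroring the factors listed above.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class QueueState:
        queue_id: str
        tenant_id: str
        is_core: bool                 # core task queue vs. common task queue
        resource_usage: float         # real-time resource usage rate, 0.0 - 1.0
        busy_intervals: List[range]   # hours of day in which the queue processes tasks

    def build_task_queue_list(target_tenant: str, high_priority: bool,
                              schedule_hour: int, states: List[QueueState],
                              usage_threshold: float = 0.8) -> List[str]:
        """Prefer the target tenant's own queues; fall back to other tenants'
        queues (cross-tenant calling) when the tenant's own queues are overloaded,
        honouring task priority and the time-period constraint."""
        def usable(s: QueueState) -> bool:
            if s.resource_usage >= usage_threshold:
                return False                       # not enough remaining resources
            if s.is_core != high_priority:
                return False                       # high priority -> core queues only
            # time-period calling: skip queues that are busy in this hour
            return not any(schedule_hour in iv for iv in s.busy_intervals)

        own = [s.queue_id for s in states if s.tenant_id == target_tenant and usable(s)]
        if own:
            return own
        return [s.queue_id for s in states if s.tenant_id != target_tenant and usable(s)]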
In some embodiments, a computer cluster queue pool may be built on a per-computer-cluster basis. As shown in fig. 7, the computer cluster queue pool 703 includes a task queue list corresponding to each computer cluster. For example, the task queue list of computer cluster A includes task queues A1, A2, ..., An; the task queue list of computer cluster B includes task queues B1, B2, ..., Bn; and the task queue list of computer cluster C includes task queues C1, C2, ..., Cn, where n is a positive integer greater than 1.
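The per-cluster queue pool can be pictured with a small Python sketch such as the one below; the ClusterQueuePool class and its method names are illustrative only.

    # Minimal sketch of a per-cluster queue pool as in fig. 7: each computer
    # cluster keeps its own task queue list. Names are illustrative only.
    from collections import defaultdict

    class ClusterQueuePool:
        def __init__(self):
            # cluster id -> list of task queue ids belonging to that cluster
            self._pool = defaultdict(list)

        def register(self, cluster_id: str, queue_id: str) -> None:
            self._pool[cluster_id].append(queue_id)

        def queues_of(self, cluster_id: str) -> list:
            return list(self._pool[cluster_id])

    pool = ClusterQueuePool()
    for i in range(1, 4):                 # e.g. task queues A1..A3 of cluster A
        pool.register("A", f"A{i}")
    print(pool.queues_of("A"))            # ['A1', 'A2', 'A3']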
It should be noted that the above division of modules is only an exemplary illustration of the scheduling apparatus and should not be understood as limiting it. For example, as shown in fig. 7, in other embodiments the scheduling apparatus may further include a fetching module 704 configured to obtain queue state information of each task queue, such as load, CPU utilization, and memory utilization, so that the scheduling policy module can construct the task queue list based on the queue state information.
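As an illustration of the fetching module's role, the following Python sketch polls queue state information and returns it for the scheduling policy module to consume; the metrics_source callable and the metric keys (load, cpu, mem) are assumptions for the example.

    # Minimal sketch of the fetching module: periodically collect queue state
    # information (load, CPU utilization, memory utilization) for the
    # scheduling policy module. The metrics source is an assumed interface.
    import time
    from typing import Callable, Dict

    def fetch_queue_status(metrics_source: Callable[[str], Dict[str, float]],
                           queue_ids: list, interval_s: float = 30.0,
                           rounds: int = 1) -> Dict[str, Dict[str, float]]:
        """Poll each task queue's metrics; the returned mapping can be used by
        the scheduling policy module to (re)build the task queue list."""
        latest: Dict[str, Dict[str, float]] = {}
        for _ in range(rounds):
            for qid in queue_ids:
                latest[qid] = metrics_source(qid)  # e.g. {"load": 0.4, "cpu": 0.6, "mem": 0.5}
            if rounds > 1:
                time.sleep(interval_s)
        return latest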
According to another aspect of the embodiments of the present disclosure, a multi-tenant system based on a multi-tenant technology is provided, including: a scheduling device and at least one computer cluster.
The computer cluster comprises at least one computer, and the computer executes tasks in the task queue allocated to the computer.
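For illustration, a minimal Python sketch of a computer draining its allocated task queue is shown below; the use of Python's standard queue and threading modules is an assumption for the example, not the disclosed implementation.

    # Minimal sketch of a computer in the cluster executing tasks from the task
    # queue allocated to it. The queue and task objects are illustrative only.
    import queue
    import threading

    def worker(task_queue: "queue.Queue", stop: threading.Event) -> None:
        """Drain the allocated task queue; each item is a callable task."""
        while not stop.is_set():
            try:
                task = task_queue.get(timeout=1.0)
            except queue.Empty:
                continue
            try:
                task()                  # process the task with this computer's resources
            finally:
                task_queue.task_done()

    # Usage: allocate a queue to the computer and run the worker in a thread.
    q, stop = queue.Queue(), threading.Event()
    q.put(lambda: print("processing task"))
    t = threading.Thread(target=worker, args=(q, stop), daemon=True)
    t.start()
    q.join()
    stop.set()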
According to embodiments of the present disclosure, the present disclosure further provides an electronic device, a readable storage medium, and a computer program product.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program, stored in a readable storage medium, from which at least one processor of the electronic device can read the computer program, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any of the embodiments described above.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 801 performs the various methods and processes described above, such as the queue scheduling method based on multi-tenant technology. For example, in some embodiments, the queue scheduling method based on multi-tenant technology may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the queue scheduling method based on multi-tenant technology described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the queue scheduling method based on multi-tenant technology by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service scalability in traditional physical host and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (26)

1. A queue scheduling method based on multi-tenant technology is applied to a scheduling device in a multi-tenant system, the multi-tenant system comprises the scheduling device and at least one computer cluster, and the method comprises the following steps:
responding to a task calling request initiated by a target tenant, wherein the task calling request is used for indicating to process a task to be processed, and determining a task queue corresponding to the task to be processed from a preset task queue list; the task queue list comprises at least one task queue, and the task queue list is determined according to queue state information of each task queue in the multi-tenant system;
sending a task processing request to a computer cluster corresponding to the target tenant, wherein the task processing request is used for indicating a task queue corresponding to the task to be processed, and the task processing request is used for indicating that the task to be processed is processed by adopting resources of the task queue corresponding to the task to be processed;
before determining the task queue corresponding to the task to be processed from a preset task queue list, the method further includes:
responding to a task calling request initiated by a target tenant, acquiring queue state information of a task queue corresponding to the target tenant, and acquiring queue state information of a task queue of each tenant other than the target tenant in the multi-tenant system;
determining candidate task queues from the task queue corresponding to the target tenant and the task queues corresponding to other tenants according to the queue state information of the task queue of the target tenant and the queue state information of the task queues of other tenants;
and constructing the task queue list according to each candidate task queue.
2. The method of claim 1, wherein determining a task queue corresponding to the task to be processed from a preset task queue list comprises:
and determining the task queue corresponding to the task to be processed from the task queue list according to the inclusion relationship between the task queue list and each tenant, wherein the inclusion relationship represents whether the task queue corresponding to each tenant exists in the task queue list.
3. The method according to claim 2, wherein determining a task queue corresponding to the task to be processed from the task queue list according to an inclusion relationship between the task queue list and each tenant comprises:
if the task queue list comprises at least one task queue corresponding to other tenants and does not comprise the task queue corresponding to the target tenant, acquiring information of the task to be executed of each task queue in the task queue list;
and if it is determined, according to the information of each task to be executed, that the task queues of the task queue list include a task queue that does not need to process a task, determining the task queue that does not need to process a task as the task queue corresponding to the task to be processed.
4. The method of claim 3, further comprising:
if it is determined, according to the information of each task to be executed, that all the task queues in the task queue list are task queues that are processing tasks, acquiring the remaining resources of each of the task queues that are processing tasks, and determining the task queue corresponding to the largest remaining resources as the task queue corresponding to the task to be processed.
5. The method according to claim 2, wherein determining the task queue corresponding to the task to be processed from the task queue list according to an inclusion relationship between the task queue list and each tenant comprises:
and if the task queue list comprises the task queue corresponding to the target tenant, determining the task queue corresponding to the target tenant as the task queue corresponding to the task to be processed.
6. The method according to claim 2, wherein the task queues in the task queue list include a common task queue and a core task queue, and the efficiency of processing tasks by the core task queue is higher than that of the common task queue; determining a task queue corresponding to the task to be processed from a preset task queue list, wherein the determining comprises the following steps:
if the task queue list has a core task queue, acquiring to-be-executed task information of each core task queue in the task queue list, and if the core task queues of the task queue list include a core task queue which does not need to process a task according to the to-be-executed task information, determining the core task queue which does not need to process the task as a task queue corresponding to the to-be-processed task;
and if the task queue list does not have a core task queue, determining a task queue corresponding to the task to be processed based on a common task queue in the task queue list.
7. The method of claim 6, further comprising:
and if it is determined, according to the information of each task to be executed, that all the core task queues of the task queue list are core task queues that are processing tasks, acquiring the remaining resources of each of the core task queues that are processing tasks, and determining the core task queue corresponding to the largest remaining resources as the core task queue for processing the task to be processed.
8. The method according to any one of claims 1 to 7, wherein determining a candidate task queue from the task queue corresponding to the target tenant and the task queue corresponding to each of the other tenants according to the queue status information of the task queue of the target tenant and the queue status information of the task queue of each of the other tenants comprises:
if the queue state information of the task queue of the target tenant is determined to meet the resource requirement for processing the task to be processed, determining a candidate task queue from the task queue corresponding to the target tenant;
and if the queue state information of the task queue of the target tenant is determined not to meet the resource requirement for processing the task to be processed, determining a candidate task queue from the task queues corresponding to the other tenants.
9. The method of claim 8, wherein if it is determined that the queue status information of the task queue of the target tenant meets the resource requirement for processing the task to be processed, determining a candidate task queue from the task queue corresponding to the target tenant comprises:
if the queue state information of the task queue of the target tenant is determined to meet the resource requirement for processing the task to be processed, determining the resource utilization rate of each task queue in the task queue corresponding to the target tenant, and determining a candidate task queue from the task queue corresponding to the target tenant according to each resource utilization rate.
10. The method according to claim 8, wherein the task queue corresponding to each tenant comprises a common task queue and a core task queue, and the efficiency of processing tasks by the core task queue is higher than that of the common task queue; if it is determined that the queue state information of the task queue of the target tenant meets the resource requirement for processing the task to be processed, determining a candidate task queue from the task queue corresponding to the target tenant, including:
acquiring the grade attribute information of the task to be processed;
if the level attribute information represents that the task to be processed is a high-priority task, determining a candidate task queue from a core task queue corresponding to the target tenant;
and if the level attribute information represents that the task to be processed is a low-priority task, determining a candidate task queue from common task queues corresponding to the target tenant.
11. The method according to claim 9 or 10, wherein the task queue corresponding to each tenant comprises a common task queue and a core task queue, and the efficiency of processing tasks by the core task queue is higher than that of the common task queue; the task calling request further comprises a scheduling time; the method further comprises:
acquiring a time interval for processing tasks by each core task queue;
and if the scheduling time is not in at least one time interval, determining a candidate core task queue from the core task queues corresponding to the at least one time interval.
12. A queue scheduling apparatus based on multi-tenant technology, the apparatus being applied to a scheduling device in a multi-tenant system, the multi-tenant system comprising the scheduling device and at least one computer cluster, the apparatus comprising:
the task scheduling method comprises a first determining unit, a second determining unit and a processing unit, wherein the first determining unit is used for responding to a task scheduling request initiated by a target tenant, the task scheduling request is used for indicating to process a task to be processed, and a task queue corresponding to the task to be processed is determined from a preset task queue list; the task queue list comprises at least one task queue, and the task queue list is determined according to queue state information of each task queue in the multi-tenant system;
a second sending unit, configured to send a task processing request to a computer cluster corresponding to the target tenant, where the task processing request is used to indicate a task queue corresponding to the to-be-processed task, and the task processing request is used to indicate that resources of the task queue corresponding to the to-be-processed task are used to process the to-be-processed task;
the first acquisition unit is used for responding to a task calling request initiated by a target tenant and acquiring queue state information of a task queue corresponding to the target tenant;
the second acquisition unit is used for acquiring the queue state information of the task queue of each tenant other than the target tenant in the multi-tenant system;
a second determining unit, configured to determine candidate task queues from the task queue corresponding to the target tenant and the task queue corresponding to each of the other tenants according to the queue state information of the task queue of the target tenant and the queue state information of the task queue of each of the other tenants;
and the constructing unit is used for constructing the task queue list according to each candidate task queue.
13. The apparatus according to claim 12, wherein the first determining unit is configured to determine the task queue corresponding to the to-be-processed task from the task queue list according to an inclusion relationship between the task queue list and each tenant, where the inclusion relationship indicates whether the task queue corresponding to each tenant exists in the task queue list.
14. The apparatus of claim 13, wherein the first determining unit comprises:
the first acquiring subunit is configured to acquire to-be-executed task information of each task queue in the task queue list if the task queue list includes a task queue corresponding to at least one other tenant and does not include a task queue corresponding to the target tenant;
and the first determining subunit is configured to determine that the task queue of the task that does not need to be processed is the task queue corresponding to the task to be processed if it is determined that the task queue of the task queue list includes the task queue of the task that does not need to be processed according to the information of each task to be executed.
15. The apparatus of claim 14, wherein the first determining unit further comprises:
the second acquiring subunit is configured to acquire the remaining resources of each of the task queues that are processing tasks if it is determined, according to the information of each to-be-executed task, that all the task queues in the task queue list are task queues that are processing tasks;
and the second determining subunit is configured to determine that the task queue corresponding to the maximum remaining resource is the task queue corresponding to the to-be-processed task.
16. The apparatus according to claim 13, wherein the first determining unit is configured to determine, if the task queue list includes the task queue corresponding to the target tenant, that the task queue corresponding to the target tenant is the task queue corresponding to the to-be-processed task.
17. The apparatus according to claim 13, wherein the task queues in the task queue list include a normal task queue and a core task queue, and the efficiency of processing tasks by the core task queue is higher than that of the normal task queue; the first determination unit includes:
a third obtaining subunit, configured to obtain to-be-executed task information of each core task queue in the task queue list if the task queue list has a core task queue;
a third determining subunit, configured to determine, if it is determined that a core task queue that does not need to process a task is included in each core task queue of the task queue list according to each to-be-executed task information, that the core task queue that does not need to process a task is a task queue corresponding to the to-be-processed task;
and a fourth determining subunit, configured to determine, if there is no core task queue in the task queue list, a task queue corresponding to the to-be-processed task based on a common task queue in the task queue list.
18. The apparatus of claim 17, wherein the first determining unit further comprises:
a fourth obtaining subunit, configured to acquire, if it is determined according to the information of each to-be-executed task that all the core task queues of the task queue list are core task queues that are processing tasks, the remaining resources of each of the core task queues that are processing tasks;
and the fifth determining subunit is configured to determine that the core task queue corresponding to the maximum remaining resource is the core task queue for processing the to-be-processed task.
19. The apparatus according to any one of claims 12-18, wherein the second determining unit comprises:
a sixth determining subunit, configured to determine a candidate task queue from the task queue corresponding to the target tenant if it is determined that the queue state information of the task queue of the target tenant meets a resource requirement for processing the task to be processed;
and a seventh determining subunit, configured to determine, if it is determined that the queue state information of the task queue of the target tenant does not meet the resource requirement for processing the task to be processed, a candidate task queue from task queues corresponding to the other tenants.
20. The apparatus of claim 19, wherein if it is determined that queue status information of the task queue of the target tenant meets resource requirements for processing the pending task, the sixth determining subunit includes:
the first determining module is used for determining the resource utilization rate of each task queue in the task queues corresponding to the target tenant if the queue state information of the task queues of the target tenant is determined to meet the resource requirement for processing the tasks to be processed;
and the second determining module is used for determining candidate task queues from the task queues corresponding to the target tenants according to the utilization rates of all the resources.
21. The apparatus according to claim 19, wherein the task queue corresponding to each tenant includes a normal task queue and a core task queue, and the efficiency of processing tasks by the core task queue is higher than that of the normal task queue; if it is determined that the queue state information of the task queue of the target tenant meets the resource requirement for processing the task to be processed, the sixth determining subunit includes:
the acquisition module is used for acquiring the grade attribute information of the task to be processed;
a third determining module, configured to determine a candidate task queue from a core task queue corresponding to the target tenant if the level attribute information indicates that the to-be-processed task is a high-priority task;
and the fourth determining module is used for determining a candidate task queue from the common task queue corresponding to the target tenant if the level attribute information represents that the task to be processed is a task with low priority.
22. The apparatus according to claim 20 or 21, wherein the task queue corresponding to each tenant includes a common task queue and a core task queue, and the efficiency of processing tasks by the core task queue is higher than that of the common task queue; the task scheduling request further includes a scheduling time; the second determining unit further includes:
the fifth acquiring subunit is used for acquiring the time interval of processing the task by each core task queue;
an eighth determining unit, configured to determine a candidate core task queue from the core task queues corresponding to the at least one time interval if the scheduling time is not in the at least one time interval.
23. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
24. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-11.
25. A scheduling apparatus, comprising: the apparatus of any one of claims 12-22.
26. A multi-tenant system based on multi-tenant technology, comprising:
the scheduling apparatus of claim 25;
at least one cluster of computers.
CN202111255682.7A 2021-10-27 2021-10-27 Queue scheduling method, device and system based on multi-tenant technology Active CN113986497B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111255682.7A CN113986497B (en) 2021-10-27 2021-10-27 Queue scheduling method, device and system based on multi-tenant technology

Publications (2)

Publication Number Publication Date
CN113986497A CN113986497A (en) 2022-01-28
CN113986497B true CN113986497B (en) 2022-11-22

Family

ID=79742536

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111255682.7A Active CN113986497B (en) 2021-10-27 2021-10-27 Queue scheduling method, device and system based on multi-tenant technology

Country Status (1)

Country Link
CN (1) CN113986497B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115858133B (en) * 2023-03-01 2023-05-02 北京仁科互动网络技术有限公司 Batch data processing method and device, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9847950B1 (en) * 2017-03-16 2017-12-19 Flexera Software Llc Messaging system thread pool
CN109034396A (en) * 2018-07-11 2018-12-18 北京百度网讯科技有限公司 Method and apparatus for handling the deep learning operation in distributed type assemblies
CN111488224A (en) * 2020-03-30 2020-08-04 武汉时波网络技术有限公司 Distributed metering charging method and system
CN111679900A (en) * 2020-06-15 2020-09-18 杭州海康威视数字技术股份有限公司 Task processing method and device
CN111866045A (en) * 2019-04-29 2020-10-30 京东数字科技控股有限公司 Information processing method and device, computer system and computer readable medium
CN113204433A (en) * 2021-07-02 2021-08-03 上海钐昆网络科技有限公司 Dynamic allocation method, device, equipment and storage medium for cluster resources
CN113424152A (en) * 2019-08-27 2021-09-21 微软技术许可有限责任公司 Workflow-based scheduling and batching in a multi-tenant distributed system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8584124B2 (en) * 2010-04-20 2013-11-12 Salesforce.Com, Inc. Methods and systems for batch processing in an on-demand service environment
US11579932B2 (en) * 2016-08-29 2023-02-14 Vmware, Inc. Tiered backup archival in multi-tenant cloud computing system
US10728166B2 (en) * 2017-06-27 2020-07-28 Microsoft Technology Licensing, Llc Throttling queue for a request scheduling and processing system
CN111831420B (en) * 2020-07-20 2023-08-08 北京百度网讯科技有限公司 Method for task scheduling, related device and computer program product

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Pankaj Saha; Angel Beltre; Madhusudhan Govindaraju. Tromino: Demand and DRF Aware Multi-Tenant Queue Manager for Apache Mesos Cluster. IEEE Xplore, 2019, full text. *
Zhao Zhanping, Wang Wei, Zhang Wenbo, Pu Wei, Fan Guochuang. An application-level concurrency control method for multi-tenant middleware. Application Research of Computers, Vol. 29, No. 8, 2012-08-31, full text. *
Sun Peng, Yin Jianwei. A QoS guarantee method for data access in SaaS applications. Computer Applications and Software, Vol. 28, No. 5, 2011-05-31, full text. *

Also Published As

Publication number Publication date
CN113986497A (en) 2022-01-28

Similar Documents

Publication Publication Date Title
US9329901B2 (en) Resource health based scheduling of workload tasks
US10506024B2 (en) System and method for equitable processing of asynchronous messages in a multi-tenant platform
CN113849312B (en) Data processing task allocation method and device, electronic equipment and storage medium
CN113641457A (en) Container creation method, device, apparatus, medium, and program product
CN112783659B (en) Resource allocation method and device, computer equipment and storage medium
CN112559182A (en) Resource allocation method, device, equipment and storage medium
CN114911598A (en) Task scheduling method, device, equipment and storage medium
CN114936173B (en) Read-write method, device, equipment and storage medium of eMMC device
CN114461393A (en) Multitask scheduling method, multitask scheduling device, electronic equipment, multitask scheduling system and automatic driving vehicle
CN112860974A (en) Computing resource scheduling method and device, electronic equipment and storage medium
CN112905314A (en) Asynchronous processing method and device, electronic equipment, storage medium and road side equipment
CN114356547B (en) Low-priority blocking method and device based on processor virtualization environment
CN113986497B (en) Queue scheduling method, device and system based on multi-tenant technology
CN112887407B (en) Job flow control method and device for distributed cluster
CN113590329A (en) Resource processing method and device
CN112860401A (en) Task scheduling method and device, electronic equipment and storage medium
CN112527509A (en) Resource allocation method and device, electronic equipment and storage medium
CN115952054A (en) Simulation task resource management method, device, equipment and medium
CN114217977B (en) Resource allocation method, device, equipment and storage medium
CN113032092B (en) Distributed computing method, device and platform
CN115328612A (en) Resource allocation method, device, equipment and storage medium
CN114265692A (en) Service scheduling method, device, equipment and storage medium
CN114327897A (en) Resource allocation method and device and electronic equipment
CN113867920A (en) Task processing method and device, electronic equipment and medium
CN113971082A (en) Task scheduling method, device, equipment, medium and product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant