CN114265699A - Task scheduling method and device, electronic device and readable storage medium - Google Patents

Task scheduling method and device, electronic device and readable storage medium

Info

Publication number
CN114265699A
Authority
CN
China
Prior art keywords
task
instance
task instance
computing resource
execution queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111676892.3A
Other languages
Chinese (zh)
Inventor
黄练纲
方君虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CCI China Co Ltd
Original Assignee
CCI China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by CCI China Co Ltd
Priority to CN202111676892.3A
Publication of CN114265699A
Legal status: Pending

Abstract

The application provides a task scheduling method comprising the following steps: acquiring at least one computing resource, creating at least one execution queue for each computing resource, and allocating to each execution queue a resource occupation ratio of the corresponding computing resource; acquiring at least one task instance and the task information of each task instance, and assigning each task instance to a specified execution queue of a specified computing resource according to the task information; obtaining the selection probability of each candidate task instance according to the resource occupation ratios of all available execution queues within the same computing resource; and selecting, according to the selection probability of each candidate task instance, one candidate task instance from all available execution queues corresponding to the same computing resource to run. The method determines priority through the resource occupation ratio of each execution queue and adopts a polling mechanism that selects a task from one of the execution queues in each round, ensuring that all task instances are executed in order of the configured priorities while tasks in low-priority queues still have an opportunity to be executed first.

Description

Task scheduling method and device, electronic device and readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a task scheduling method and apparatus, an electronic apparatus, and a readable storage medium.
Background
With the rapid development of computer and internet technologies, users can easily access the internet and submit tasks to a server, and the server provides the corresponding services by executing the submitted tasks. However, when a large number of users access and use the server within the same time period, the server receives a continuous stream of task request instructions. Typically, all task request instructions are stored in a message queue, tasks accumulate there, and a scheduling policy then repeatedly selects a task from the message queue for execution.
Patent application publication No. CN107423120A discloses a task scheduling method and device in which a selection probability is set for each message queue and one message queue is chosen at random according to these probabilities, after which all tasks in that queue are executed in order. The selection probability of a message queue is positively correlated with the queue's priority and with the number of tasks it currently contains, and the priority of a message queue is determined by the priority of the tasks it contains. That is, the higher the priority of the tasks in a message queue and the more tasks it contains, the more likely the queue is to be selected. The disadvantage is that tasks in low-priority message queues are rarely reached, and tasks at the tail of a low-priority queue are reached even more rarely. Such a task scheduling method therefore considers only a single factor and executes inefficiently, and it tends to neglect tasks in low-priority message queues, degrading the user experience.
Disclosure of Invention
The embodiments of the application provide a task scheduling method that determines priority through the resource occupation ratio of each execution queue and introduces a task polling mechanism in which a task is selected from one of the execution queues in each round.
In a first aspect, an embodiment of the present application provides a task scheduling method, including the following steps:
acquiring at least one computing resource, creating at least one execution queue corresponding to each computing resource, and distributing the resource proportion of the corresponding computing resource to each execution queue;
acquiring at least one task instance and task information of each task instance, and allocating each task instance to a specified execution queue of specified computing resources according to the task information;
acquiring the selection probability of each candidate task instance according to the resource occupation ratio of all available execution queues in the same computing resource, wherein the candidate task instance is a task instance positioned at the first position in each available execution queue, and the available execution queue is an execution queue in an enabled state;
and selecting one candidate task instance from all the available execution queues corresponding to the same computing resource to run according to the selection probability of each candidate task instance.
In some embodiments, obtaining the selection probability of each candidate task instance according to the resource occupation ratios of all available execution queues in the same computing resource comprises: obtaining a first resource ratio, namely the resource occupation ratio of the available execution queue in which each candidate task instance is located; obtaining a second resource ratio, namely the sum of the resource occupation ratios of all available execution queues of the specified computing resource of the candidate task instance; and obtaining the selection probability of the candidate task instance as the ratio of the first resource ratio to the second resource ratio.
In some embodiments, the task information of each task instance comprises: a unique identifier of the task instance, a submission time, a unique identifier of the submitter, specified computing resource information and specified execution queue information; after "acquiring at least one computing resource", the method comprises the following step: setting at least one limiting parameter for each computing resource, wherein the limiting parameter is any one of a time period during which task submission is allowed, a task termination time point, a concurrency upper limit, a single-person task queuing upper limit and a single-person task running upper limit.
In some embodiments, assigning each task instance to the specified execution queue of its specified computing resource according to the task information includes: judging whether each task instance meets the queuing condition according to its task information, and placing each task instance that meets the queuing condition at the end of the specified execution queue within its specified computing resource.
In some embodiments of the application, "determining whether the task instance meets the queuing condition" includes:
acquiring a first number of queued task instances corresponding to the unique identifier of the submitter of the task instance within the specified computing resource of the task instance, wherein a queued task instance is a task instance queued in any one of the execution queues;
if the first number has reached the single-person task queuing upper limit set by the specified computing resource of the task instance, the task instance does not meet the queuing condition;
when the first number is smaller than the single-person task queuing upper limit set by the specified computing resource of the task instance, acquiring a second number of all running task instances corresponding to the unique identifier of the submitter of the task instance and a third number of all running task instances within the specified computing resource of the task instance, wherein a running task instance is a task instance currently running within the specified computing resource of the task instance;
and if the second number has not reached the single-person task running upper limit set by the specified computing resource of the task instance and the third number has not reached the concurrency upper limit set by the specified computing resource of the task instance, the task instance meets the queuing condition.
In some embodiments, before obtaining the first number of queued task instances corresponding to the unique identifier of the submitter of the task instance, the method includes: if the submission time of the task instance does not fall within the time period during which task submission is allowed, set by the specified computing resource of the task instance, the task instance does not meet the queuing condition and is terminated.
In some embodiments, after "polling and selecting one of the candidate task instances from all available execution queues corresponding to the same computing resource" to run, the method includes: if any task instance in any available execution queue has not been run before the task termination time point set by the computing resource corresponding to that available execution queue, all task instances in that available execution queue are terminated.
In a second aspect, an embodiment of the present application provides a task scheduling apparatus, configured to implement the task scheduling method in the first aspect, where the apparatus includes the following modules:
the initialization module is used for acquiring at least one computing resource, creating at least one execution queue corresponding to each computing resource, and distributing the resource occupation ratio of the corresponding computing resource to each execution queue;
the task allocation module is used for acquiring at least one task instance and task information of each task instance and allocating each task instance to a specified execution queue of specified computing resources according to the task information;
the priority adjusting module is used for acquiring the selection probability of each candidate task instance according to the resource occupation ratio of all available execution queues in the same computing resource, wherein the candidate task instance is a task instance positioned at the head in each available execution queue, and the available execution queues are execution queues in an enabling state;
and the task scheduling module is used for selecting one candidate task instance from all the available execution queues corresponding to the same computing resource to run according to the selection probability of each candidate task instance.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to perform the task scheduling method according to any of the embodiments of the present application.
In a fourth aspect, the present application provides a computer program product, which includes software code portions for performing the task scheduling method according to any one of the above application embodiments when the computer program product is run on a computer.
In a fifth aspect, the present application provides a readable storage medium, in which a computer program is stored, where the computer program includes a program code for controlling a process to execute a process, and the process includes a task scheduling method according to any of the above application embodiments.
The main contributions and innovation points of the embodiments of the application are as follows: the task scheduling method provided by the embodiments determines priority through the resource occupation ratio of each execution queue and introduces a task polling mechanism in which a task is selected from one of the execution queues in each round. The higher the resource occupation ratio of an execution queue, the higher its priority, and the more likely the task at its head is to be selected for execution; at the same time, a task in a low-priority execution queue still has a certain probability of being executed first even when higher-priority execution queues are non-empty. This ensures that all task instances are executed in order of the configured priorities while low-priority tasks still have an opportunity to be executed preferentially.
In some embodiments, besides the influence of the resource occupation ratio of the execution queue, various limiting parameters, including global parameters and per-user parameters, are introduced to influence whether a task instance is executed preferentially, which avoids the unfairness of relying on a single factor.
In some embodiments, submitter-related parameters are introduced so that tasks submitted by a single user cannot occupy excessive computing resources, ensuring fair use of the computing resources and improving the user experience of the task scheduling platform.
In some embodiments, a task waiting area is set up, which spares the user the tedious operation of repeatedly resubmitting tasks while still ensuring that a single user does not occupy excessive computing resources.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow chart of a task scheduling method according to an embodiment of the present application;
fig. 2 is a block diagram of a task scheduling apparatus according to an embodiment of the present application;
fig. 3 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of one or more embodiments of the specification, as detailed in the claims which follow.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described herein. In some other embodiments, the method may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
Example one
This embodiment provides a task scheduling method that determines priority through the resource occupation ratio of each execution queue and introduces a task polling mechanism in which a task is selected from one of the execution queues in each round. The higher the resource occupation ratio of an execution queue, the higher its priority, and the more likely the task at its head is to be selected for execution; at the same time, a task in a low-priority execution queue still has a certain probability of being executed first even when higher-priority execution queues are non-empty. This both keeps the priority definition of the execution queues effective and takes the low-priority execution queues into account, ensuring that task instances are executed in order and efficiently.
Referring specifically to fig. 1, the task scheduling method includes steps S1-S4:
step S1: acquiring at least one computing resource, creating at least one execution queue corresponding to each computing resource, and distributing the resource proportion of the corresponding computing resource to each execution queue;
step S2: acquiring at least one task instance and task information of each task instance, and allocating each task instance to a specified execution queue of specified computing resources according to the task information;
step S3: acquiring the selection probability of each candidate task instance according to the resource occupation ratio of all available execution queues in the same computing resource, wherein the candidate task instance is a task instance positioned at the first position in each available execution queue, and the available execution queue is an execution queue in an enabled state;
step S4: and selecting one candidate task instance from all the available execution queues corresponding to the same computing resource to run according to the selection probability of each candidate task instance.
It should be noted that, in this embodiment, a task scheduling platform is constructed according to the task scheduling method, and the task scheduling platform is used to provide task scheduling policies for various users.
In step S1, a resource ratio is set for the execution queue corresponding to each computing resource.
First, various types of computing resources are registered with the task scheduling platform, such as big data computing libraries, various servers, and the like. That is, different computing resources may come from different big data computing libraries or servers, so corresponding execution queues need to be created for each computing resource.
Specifically, when the execution queues are created, a unique identifier is generated for each execution queue and its resource occupation ratio is set. The sum of the resource occupation ratios of all execution queues corresponding to the same computing resource cannot exceed 100%, and the resource occupation ratio of each execution queue must be greater than 0% and less than 100%. For example, if 3 execution queues are created for a certain computing resource, the 3 execution queues may be set to occupy 25%, 30%, and 45% of the computing resource, respectively.
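As an illustration of step S1 only, the following minimal Python sketch shows how execution queues with resource occupation ratios might be represented and validated; the class and field names (ExecutionQueue, ComputingResource, ratio) are assumptions made for this example rather than terms defined by the embodiment.

```python
from collections import deque
from dataclasses import dataclass, field
import uuid


@dataclass
class ExecutionQueue:
    ratio: float                 # resource occupation ratio, 0 < ratio < 1
    enabled: bool = True         # only enabled queues take part in scheduling
    queue_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    tasks: deque = field(default_factory=deque)   # queued task instances, head = candidate


@dataclass
class ComputingResource:
    name: str
    queues: list = field(default_factory=list)

    def add_queue(self, ratio: float) -> ExecutionQueue:
        # Each ratio must lie strictly between 0% and 100%, and the total
        # across all queues of this resource must not exceed 100%.
        if not 0 < ratio < 1:
            raise ValueError("resource occupation ratio must be between 0% and 100%")
        if sum(q.ratio for q in self.queues) + ratio > 1.0 + 1e-9:
            raise ValueError("total resource occupation ratio would exceed 100%")
        queue = ExecutionQueue(ratio=ratio)
        self.queues.append(queue)
        return queue


# Example from the text: three queues occupying 25%, 30% and 45% of one resource.
resource = ComputingResource("compute-resource-1")
for r in (0.25, 0.30, 0.45):
    resource.add_queue(r)
```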
In step S2, the acquired task instance is assigned to the corresponding execution queue.
First, at least one task instance and task information of each task instance are obtained.
Specifically, task instances come mainly from two sources: task instances generated by periodic scheduling of a task plan, and one-off task instances submitted immediately by a user (submitter). The task scheduling platform provided by this embodiment may decide whether to generate a task instance according to the period configuration of the task plan. If a new task instance is created, its task information is acquired, and the task instance is then assigned to the specified execution queue of the specified computing resource according to its task information.
Since the specified computing resource information and the specified execution queue information are already included in the task information, the specified computing resource of the task instance can be determined according to the specified computing resource information, and the specified execution queue of the task instance can be determined according to the specified execution queue information.
Specifically, whether a task instance meets the queuing condition is judged according to its task information, and each task instance that meets the queuing condition is placed at the end of the specified execution queue within its specified computing resource.
In particular, in some embodiments, a number of limiting parameters are set for each computing resource, which allows the task scheduling policy to be adjusted based on the task information of the task instances. Specifically, the task information of a task instance mainly comprises a unique identifier of the task instance, a submission time, a unique identifier of the submitter, specified computing resource information, specified execution queue information and the like; after "acquiring at least one computing resource", the method comprises the following step: setting at least one limiting parameter for each computing resource, wherein the limiting parameter is any one of a time period during which task submission is allowed, a task termination time point, a concurrency upper limit, a single-person task queuing upper limit and a single-person task running upper limit.
The time period during which task submission is allowed restricts when task instances may be submitted; if a newly created task instance is submitted outside this time period, it cannot enter the specified execution queue of the task scheduling platform.
The task termination time point handles the case in which a task instance has waited in the specified execution queue for a long time without being executed, which indicates that too many task instances may have accumulated in the queue; a time point is therefore set at which all queued task instances in the execution queue are terminated, mainly to prevent an excessive backlog of queued task instances from affecting the normal operation of the computing resource.
The concurrency upper limit is the maximum number of tasks that may run concurrently on the computing resource, and is mainly used to prevent the computing resource from being overloaded and failing.
The single-person task queuing upper limit and the single-person task running upper limit are designed per user or submitter, so that no single user or submitter can occupy excessive computing resources and degrade the experience of other users or submitters. The single-person task queuing upper limit is the maximum number of queued task instances allowed for the same submitter within the same computing resource, and the single-person task running upper limit is the maximum number of running task instances allowed for the same submitter within the same computing resource.
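Purely for illustration, the five limiting parameters described above could be grouped into a configuration object along the following lines; ResourceLimits and its field names are hypothetical and chosen for this sketch.

```python
from dataclasses import dataclass
from datetime import datetime, time


@dataclass
class ResourceLimits:
    submit_window: tuple              # (start, end) time period during which task submission is allowed
    terminate_after: datetime         # task termination time point for queued instances
    max_concurrency: int              # concurrency upper limit for the whole computing resource
    max_queued_per_submitter: int     # single-person task queuing upper limit
    max_running_per_submitter: int    # single-person task running upper limit


limits = ResourceLimits(
    submit_window=(time(8, 0), time(22, 0)),
    terminate_after=datetime(2022, 1, 1, 23, 59),
    max_concurrency=50,
    max_queued_per_submitter=5,
    max_running_per_submitter=2,
)
```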
Whether a task instance meets the queuing condition for entering the specified execution queue is then judged according to the task information of each task instance and the limiting parameters set by its specified computing resource.
In some embodiments, the method for determining whether the task instance meets the queuing condition includes steps S21-S24:
step S21: acquiring a first number of queued task instances corresponding to the unique identifier of the submitter of the task instance within the specified computing resource of the task instance, wherein a queued task instance is a task instance queued in any one of the execution queues;
step S22: if the first number has reached the single-person task queuing upper limit set by the specified computing resource of the task instance, the task instance does not meet the queuing condition and is placed in the task waiting area;
step S23: when the first number is smaller than the single-person task queuing upper limit set by the specified computing resource of the task instance, acquiring a second number of all running task instances corresponding to the unique identifier of the submitter of the task instance and a third number of all running task instances within the specified computing resource of the task instance, wherein a running task instance is a task instance currently running within the specified computing resource of the task instance;
step S24: if the second number has not reached the single-person task running upper limit set by the specified computing resource of the task instance and the third number has not reached the concurrency upper limit set by the specified computing resource of the task instance, the task instance meets the queuing condition.
Steps S21 and S22 limit the maximum number of task instances queued by the same submitter within the task instance's specified computing resource, which sets the single-person task queuing upper limit. That is, if the first number has reached the single-person task queuing upper limit, task instances submitted by that submitter can no longer enter the specified execution queue and can only be sent to the task waiting area. The task waiting area spares the user the tedious operation of repeatedly resubmitting tasks while still ensuring that a single user does not occupy excessive computing resources.
Before step S21, the submission time of the task instance may also be restricted. For example, the specified computing resource of the task instance sets a time period during which task submission is allowed; the task instance can be allocated only if its submission time falls within this time period, otherwise the task instance is terminated directly. Thus, in some embodiments, before obtaining the first number of queued task instances corresponding to the unique identifier of the submitter of the task instance, the method includes: if the submission time of the task instance does not fall within the time period during which task submission is allowed, set by the specified computing resource of the task instance, the task instance does not meet the queuing condition and is terminated.
Steps S23 and S24 set the single-person task running upper limit to avoid too many running task instances from the same submitter, and set the concurrency upper limit to avoid exhausting the specified computing resource of the task instance. That is, if the second number has not reached the single-person task running upper limit set by the specified computing resource of the task instance, and the third number has not reached the concurrency upper limit set by that computing resource, the task instance meets the queuing condition, enters the specified execution queue, and is placed at the end of that queue. The sketch after this paragraph illustrates one possible form of this check.
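A minimal sketch of this check, assuming the hypothetical ResourceLimits object above and invented counting helpers (count_queued, count_running, count_running_total) on the computing resource; the handling of a failed step S24 is also an assumption, since the text does not spell it out.

```python
def meets_queuing_condition(resource, limits, task) -> str:
    """Return 'queue', 'wait' or 'terminate' for a newly submitted task instance.

    `resource` is assumed to expose count_queued(submitter_id),
    count_running(submitter_id) and count_running_total(); `task` is assumed
    to carry submitter_id and submit_time. All of these names are illustrative.
    """
    # Submission-window check performed before step S21: a task submitted
    # outside the allowed time period is terminated directly.
    start, end = limits.submit_window
    if not (start <= task.submit_time.time() <= end):
        return "terminate"

    # S21/S22: per-submitter queuing limit; at or over the limit, the task
    # instance goes to the task waiting area instead of an execution queue.
    first_number = resource.count_queued(task.submitter_id)
    if first_number >= limits.max_queued_per_submitter:
        return "wait"

    # S23/S24: per-submitter running limit and resource-wide concurrency limit.
    second_number = resource.count_running(task.submitter_id)
    third_number = resource.count_running_total()
    if (second_number < limits.max_running_per_submitter
            and third_number < limits.max_concurrency):
        return "queue"   # append to the end of the specified execution queue
    # The text does not say what happens when S24 fails; sending the task to
    # the waiting area is an assumption made for this sketch.
    return "wait"
```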
In step S3, the selection probability of each candidate task instance is obtained according to the resource occupation ratio of all available execution queues in the same computing resource.
A candidate task instance is the task instance at the head of an available execution queue, and an available execution queue is an execution queue in the enabled state; task instances cannot be assigned to an execution queue that is not enabled, and the computing resources it occupies cannot be used or released. For example, suppose a certain computing resource corresponds to execution queue A, execution queue B and execution queue C, with resource occupation ratios of 25%, 30% and 45% respectively, but only execution queue B and execution queue C are enabled. Then the candidate task instance in execution queue B has a selection probability of 30%/(30% + 45%) = 0.4, and the candidate task instance in execution queue C has a selection probability of 45%/(30% + 45%) = 0.6.
Thus, "obtaining the chosen probability for each candidate task instance based on the resource occupancy of all available execution queues in the same computing resource" includes: the method comprises the steps of obtaining a first resource proportion of an available execution queue where each candidate task instance is located, obtaining a second resource proportion of the sum of the resource proportions of all available execution queues corresponding to appointed computing resources of each candidate task instance, and obtaining the selection probability of the candidate task according to the ratio of the first resource proportion and the second resource proportion corresponding to the same candidate task instance.
In step S4, a candidate task instance is selected from all the available execution queues corresponding to the same computing resource according to the selection probability of each candidate task instance.
That is, one candidate task instance is randomly selected, according to these probabilities, from among all available execution queues corresponding to the same computing resource and is run; a candidate task instance with a higher selection probability is simply more likely to be executed first. After the selected candidate task instance has been executed, it is removed from its execution queue, and the task instance queued behind it becomes the next candidate task instance.
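A sketch of this weighted selection step under the same assumptions, reusing selection_probabilities from the previous sketch; the use of random.choices is an implementation choice, not something prescribed by the embodiment.

```python
import random


def schedule_once(queues):
    """Pick one candidate task instance by weighted random choice and dequeue it."""
    probs = selection_probabilities(queues)
    if not probs:
        return None                     # nothing runnable in this round
    by_id = {q.queue_id: q for q in queues}
    chosen_id = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
    chosen_queue = by_id[chosen_id]
    # The head task runs; the task behind it becomes the next candidate.
    return chosen_queue.tasks.popleft()
```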
In addition, in some embodiments, to prevent too many queued task instances in an execution queue from affecting the normal use of the computing resource, the computing resource sets a task termination time point. After "polling and selecting one candidate task instance from all available execution queues corresponding to the same computing resource" to run, the method includes: if any task instance in any available execution queue has not been run before the task termination time point set by the computing resource corresponding to that available execution queue, all task instances in that available execution queue are terminated.
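As one possible illustration of this termination rule, assuming the queue objects from the earlier sketches and a per-resource terminate_after time point:

```python
from datetime import datetime


def terminate_overdue_queues(queues, terminate_after: datetime, now: datetime):
    """Terminate every queued task instance once the termination time point has passed."""
    terminated = []
    if now >= terminate_after:
        for q in queues:
            if q.enabled and q.tasks:   # queues still holding unrun task instances
                terminated.extend(q.tasks)
                q.tasks.clear()
    return terminated
```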
Example two
Based on the same concept, the present embodiment further provides a task scheduling apparatus for implementing the task scheduling method described in the first embodiment, and with specific reference to fig. 2, the apparatus includes the following modules:
the initialization module is used for acquiring at least one computing resource, creating at least one execution queue corresponding to each computing resource, and distributing the resource occupation ratio of the corresponding computing resource to each execution queue;
the task allocation module is used for acquiring at least one task instance and task information of each task instance and allocating each task instance to a specified execution queue of specified computing resources according to the task information;
the priority adjusting module is used for acquiring the selection probability of each candidate task instance according to the resource occupation ratio of all available execution queues in the same computing resource, wherein the candidate task instance is a task instance positioned at the head in each available execution queue, and the available execution queues are execution queues in an enabling state;
and the task scheduling module is used for selecting one candidate task instance from all the available execution queues corresponding to the same computing resource to run according to the selection probability of each candidate task instance.
EXAMPLE III
The present embodiment also provides an electronic device, referring to fig. 3, comprising a memory 404 and a processor 402, wherein the memory 404 stores a computer program, and the processor 402 is configured to execute the computer program to perform the steps of any one of the task scheduling methods in the above embodiments.
Specifically, the processor 402 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 404 may include mass storage for data or instructions. By way of example, and not limitation, memory 404 may include a hard disk drive (HDD), a floppy disk drive, a solid state drive (SSD), flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory 404 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 404 is non-volatile memory. In particular embodiments, memory 404 includes read-only memory (ROM) and random access memory (RAM). The ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these, where appropriate. The RAM may be static random-access memory (SRAM) or dynamic random-access memory (DRAM), where the DRAM may be fast page mode DRAM (FPM DRAM), extended data out DRAM (EDO DRAM), synchronous DRAM (SDRAM), and the like.
Memory 404 may be used to store or cache various data files for processing and/or communication use, as well as possibly computer program instructions for execution by processor 402.
The processor 402 may implement any of the task scheduling methods in the above embodiments by reading and executing computer program instructions stored in the memory 404.
Optionally, the electronic apparatus may further include a transmission device 406 and an input/output device 408, where the transmission device 406 is connected to the processor 402, and the input/output device 408 is connected to the processor 402.
The transmitting device 406 may be used to receive or transmit data via a network. Specific examples of the network described above may include wired or wireless networks provided by communication providers of the electronic devices. In one example, the transmission device includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmitting device 406 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The input and output devices 408 are used to input or output information. In this embodiment, the input information may be a current data table such as an epidemic situation trend document, feature data, a template table, and the like, and the output information may be a feature fingerprint, a fingerprint template, text classification recommendation information, a file template configuration mapping table, a file template configuration information table, and the like.
Optionally, in this embodiment, the processor 402 may be configured to execute the following steps by a computer program:
acquiring at least one computing resource, creating at least one execution queue corresponding to each computing resource, and distributing the resource proportion of the corresponding computing resource to each execution queue;
acquiring at least one task instance and task information of each task instance, and allocating each task instance to a specified execution queue of specified computing resources according to the task information;
acquiring the selection probability of each candidate task instance according to the resource occupation ratio of all available execution queues in the same computing resource, wherein the candidate task instance is a task instance positioned at the first position in each available execution queue, and the available execution queue is an execution queue in an enabled state;
and selecting one candidate task instance from all the available execution queues corresponding to the same computing resource to run according to the selection probability of each candidate task instance.
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In addition, with reference to any one of the task scheduling methods in the first embodiment, the embodiments of the present application may be implemented by a computer program product. The computer program product comprises software code portions for performing a method for scheduling tasks implementing any of the above embodiments when the computer program product is run on a computer.
In addition, in combination with any one of the task scheduling methods in the first embodiment, the embodiment of the present application may provide a readable storage medium to implement the task scheduling method. The readable storage medium having stored thereon a computer program; the computer program, when executed by a processor, implements any of the task scheduling methods in the above embodiments.
In general, the various embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects of the invention may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
Embodiments of the invention may be implemented by computer software executable by a data processor of the mobile device, such as in a processor entity, or by hardware, or by a combination of software and hardware. Computer software or programs (also referred to as program products) including software routines, applets and/or macros can be stored in any device-readable data storage medium and they include program instructions for performing particular tasks. The computer program product may comprise one or more computer-executable components configured to perform embodiments when the program is run. The one or more computer-executable components may be at least one software code or a portion thereof. Further in this regard it should be noted that any block of the logic flow as in the figures may represent a program step, or an interconnected logic circuit, block and function, or a combination of a program step and a logic circuit, block and function. The software may be stored on physical media such as memory chips or memory blocks implemented within the processor, magnetic media such as hard or floppy disks, and optical media such as, for example, DVDs and data variants thereof, CDs. The physical medium is a non-transitory medium.
It should be understood by those skilled in the art that various features of the above embodiments can be combined arbitrarily, and for the sake of brevity, all possible combinations of the features in the above embodiments are not described, but should be considered as within the scope of the present disclosure as long as there is no contradiction between the combinations of the features.
The above examples are merely illustrative of several embodiments of the present application, and the description is more specific and detailed, but not to be construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. The task scheduling method is characterized by comprising the following steps:
acquiring at least one computing resource, creating at least one execution queue corresponding to each computing resource, and distributing the resource proportion of the corresponding computing resource to each execution queue;
acquiring at least one task instance and task information of each task instance, and allocating each task instance to a specified execution queue of specified computing resources according to the task information;
acquiring the selection probability of each candidate task instance according to the resource occupation ratio of all available execution queues in the same computing resource, wherein the candidate task instance is a task instance positioned at the first position in each available execution queue, and the available execution queue is an execution queue in an enabled state;
and selecting one candidate task instance from all the available execution queues corresponding to the same computing resource to run according to the selection probability of each candidate task instance.
2. The task scheduling method according to claim 1, wherein obtaining the selection probability of each candidate task instance according to the resource occupation ratios of all available execution queues in the same computing resource comprises: obtaining a first resource ratio, namely the resource occupation ratio of the available execution queue in which each candidate task instance is located; obtaining a second resource ratio, namely the sum of the resource occupation ratios of all available execution queues of the specified computing resource of the candidate task instance; and obtaining the selection probability of the candidate task instance as the ratio of the first resource ratio to the second resource ratio.
3. The task scheduling method according to claim 1, wherein the task information of each task instance comprises: a unique identifier of the task instance, a submission time, a unique identifier of the submitter, specified computing resource information and specified execution queue information; after "acquiring at least one computing resource", the method comprises: setting at least one limiting parameter for each computing resource, wherein the limiting parameter is any one of a time period during which task submission is allowed, a task termination time point, a concurrency upper limit, a single-person task queuing upper limit and a single-person task running upper limit.
4. The task scheduling method according to claim 3, wherein assigning each of the task instances to the specified execution queue of the corresponding specified computing resource according to the task information comprises: judging whether each task instance meets the queuing condition according to its task information, and placing each task instance that meets the queuing condition at the end of the specified execution queue within its specified computing resource.
5. The task scheduling method according to claim 4, wherein the method for determining whether the task instance meets the queuing condition comprises:
acquiring a first number of queued task instances corresponding to the unique identifier of the submitter of the task instance within the specified computing resource of the task instance, wherein a queued task instance is a task instance queued in any one of the execution queues;
if the first number has reached the single-person task queuing upper limit set by the specified computing resource of the task instance, the task instance does not meet the queuing condition;
when the first number is smaller than the single-person task queuing upper limit set by the specified computing resource of the task instance, acquiring a second number of all running task instances corresponding to the unique identifier of the submitter of the task instance and a third number of all running task instances within the specified computing resource of the task instance, wherein a running task instance is a task instance currently running within the specified computing resource of the task instance;
and if the second number has not reached the single-person task running upper limit set by the specified computing resource of the task instance and the third number has not reached the concurrency upper limit set by the specified computing resource of the task instance, the task instance meets the queuing condition.
6. The task scheduling method according to claim 5, wherein before obtaining the first number of queued task instances corresponding to the unique identifier of the submitter of the task instance, the method comprises: if the submission time of the task instance does not fall within the time period during which task submission is allowed, set by the specified computing resource of the task instance, the task instance does not meet the queuing condition and the task instance is terminated.
7. The task scheduling method according to claim 3, wherein after "polling and selecting one of the candidate task instances from all available execution queues corresponding to the same computing resource" to run, the method comprises: if any task instance in any available execution queue has not been run before the task termination time point set by the computing resource corresponding to that available execution queue, all task instances in that available execution queue are terminated.
8. The task scheduling device is characterized by comprising the following modules:
the initialization module is used for acquiring at least one computing resource, creating at least one execution queue corresponding to each computing resource, and distributing the resource occupation ratio of the corresponding computing resource to each execution queue;
the task allocation module is used for acquiring at least one task instance and task information of each task instance and allocating each task instance to a specified execution queue of specified computing resources according to the task information;
the priority adjusting module is used for obtaining the selection probability of each candidate task instance according to the resource occupation ratios of all available execution queues in the same computing resource, wherein the candidate task instance is a task instance located at the head of each available execution queue, and an available execution queue is an execution queue in the enabled state;
and the task scheduling module is used for selecting one candidate task instance from all the available execution queues corresponding to the same computing resource to run according to the selection probability of each candidate task instance.
9. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the task scheduling method of any one of claims 1 to 7.
10. A readable storage medium, characterized in that a computer program is stored in the readable storage medium, the computer program comprising program code for controlling a process to execute a process, the process comprising a task scheduling method according to any one of claims 1 to 7.
CN202111676892.3A 2021-12-31 2021-12-31 Task scheduling method and device, electronic device and readable storage medium Pending CN114265699A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111676892.3A CN114265699A (en) 2021-12-31 2021-12-31 Task scheduling method and device, electronic device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111676892.3A CN114265699A (en) 2021-12-31 2021-12-31 Task scheduling method and device, electronic device and readable storage medium

Publications (1)

Publication Number Publication Date
CN114265699A true CN114265699A (en) 2022-04-01

Family

ID=80832414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111676892.3A Pending CN114265699A (en) 2021-12-31 2021-12-31 Task scheduling method and device, electronic device and readable storage medium

Country Status (1)

Country Link
CN (1) CN114265699A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination