CN114706663A - Computing resource scheduling method, medium and computing device - Google Patents


Info

Publication number
CN114706663A
CN114706663A (application CN202210412767.XA)
Authority
CN
China
Prior art keywords: work coroutine, scheduling, computing resources, group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210412767.XA
Other languages
Chinese (zh)
Inventor
李卓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202210412767.XA priority Critical patent/CN114706663A/en
Publication of CN114706663A publication Critical patent/CN114706663A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The embodiments disclosed in this specification provide a computing resource scheduling method, a medium, and a computing device. A process may perform a computing task by creating work coroutines rather than work threads. To flexibly limit the computing resources consumed by the process, the work coroutines may be grouped, and a corresponding upper limit of consumable computing resources may be set for each work coroutine group. The process also creates a plurality of scheduling threads, which are responsible for scheduling the processor's computing resources and for ensuring that the total amount of computing resources consumed by each work coroutine group does not exceed that group's upper limit of consumable computing resources.

Description

Computing resource scheduling method, medium and computing device
Technical Field
Embodiments of the present disclosure relate to the field of information technology, and in particular, to a computing resource scheduling method, medium, and computing device.
Background
In some business scenarios, a user may run their own process on a computing device, and the process may create multiple worker threads to perform computing tasks, where executing a computing task consumes the computing device's computing resources. When computing resources are limited, they need to be scheduled reasonably, so that situations where some worker threads consume computing resources without limit while other worker threads struggle to obtain any do not occur.
Based on this, a relatively efficient method of computing resource scheduling is needed.
Disclosure of Invention
Embodiments of the present disclosure provide a method and medium for scheduling computing resources of a processor, so as to utilize the computing resources of the processor as efficiently as possible while achieving flexible limitation of the computing resources consumed by processes.
The technical scheme provided by the embodiments of the specification is as follows:
According to a first aspect of the embodiments of this specification, a method for scheduling computing resources is provided. The method is applied to a computing device whose process creates a plurality of work coroutine groups and a plurality of scheduling threads, and includes:
any work coroutine of any work coroutine group enters a ready queue;
any scheduling thread reads a work coroutine from the ready queue and judges whether the read work coroutine meets a scheduling condition, where the scheduling condition includes: the total amount of consumed computing resources of the work coroutine group to which the work coroutine belongs does not exceed that group's upper limit of consumable computing resources;
if the judgment result is yes, the scheduling thread provides resource scheduling for the work coroutine; if the judgment result is no, the scheduling thread refuses to provide resource scheduling for the work coroutine;
if the work coroutine is allowed to occupy the scheduling thread, it continuously consumes the computing resources scheduled by that thread, and releases its occupation of the scheduling thread after its execution state is interrupted.
According to a second aspect of the embodiments herein, a computing device is provided, comprising a memory and a processor; the memory is for storing computer instructions executable on the processor, and the processor implements the method of the first aspect when executing the computer instructions.
According to a third aspect of the embodiments of this specification, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the method of the first aspect.
In the above technical solution, a process may implement a computing task by creating work coroutines (rather than work threads). To flexibly limit the computing resources consumed by the process, the work coroutines can be grouped, and a corresponding upper limit of consumable computing resources is set for each work coroutine group. The process also creates a plurality of scheduling threads, which are responsible for scheduling the processor's computing resources and for ensuring that the total amount of computing resources consumed by each work coroutine group does not exceed that group's upper limit.
In a concrete implementation, each work coroutine enters a ready queue after its execution state becomes ready, and a scheduling thread reads work coroutines from the ready queue. If the total amount of consumed computing resources of the work coroutine group to which a work coroutine belongs exceeds that group's upper limit of consumable computing resources, the scheduling thread refuses to provide itself to the work coroutine for occupation; if it does not exceed the upper limit, the scheduling thread provides itself to the work coroutine for occupation so as to schedule computing resources for it. A work coroutine that occupies a scheduling thread continuously consumes the computing resources scheduled by that thread, and releases its occupation of the scheduling thread after its execution state is interrupted.
When computing resources are limited, threads whose execution states are ready preempt computing resources from one another. After a thread preempts computing resources, it continues to hold them even if its execution state is interrupted; it does not withdraw its demand for the resources, so other threads cannot fully utilize them and computing resources are wasted. Coroutines differ from threads in that they share computing resources cooperatively: after a coroutine's execution state is interrupted, it automatically withdraws its demand for computing resources (it may also be said that the coroutine actively suspends itself after its execution state is interrupted), so it does not hinder other coroutines' use of the resources, and resource utilization is improved.
In addition, to prevent work coroutines from requesting computing resources without limit, they need to be grouped, with a corresponding upper limit of consumable computing resources configured for each work coroutine group. The process creates a plurality of scheduling threads to manage resource scheduling, and a scheduling thread schedules computing resources for a work coroutine only after determining that the total amount of computing resources consumed by the work coroutine's group has not exceeded its limit.
Drawings
Fig. 1 is a flowchart illustrating a computing resource scheduling method provided in this specification.
FIG. 2 illustratively provides a computational resource scheduling process for a processor.
Fig. 3 is a schematic structural diagram of a computing device provided by the present disclosure.
In the drawings, like or corresponding reference characters designate like or corresponding parts. Any number of elements in the drawings are by way of example and not by way of limitation, and any nomenclature is used solely for differentiation and not by way of limitation.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the present specification, and not all of the embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step should fall within the scope of protection of the present specification.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described herein. In some other embodiments, the method may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
In some business scenarios, a user may run their own process on a computing device, and the process may create multiple worker threads to perform computing tasks, where executing a computing task consumes the computing resources of the computing device's processor. When computing resources are limited, they need to be scheduled reasonably, so that situations where some worker threads consume computing resources without limit while other worker threads struggle to obtain any do not occur.
One feasible computing resource scheduling scheme uses the cgroup technology to group the multiple worker threads created by a process and to configure a corresponding upper limit of consumable computing resources for each worker thread group; the cgroup controller can then schedule computing resources for each worker thread group while respecting each group's upper limit.
When computing resources are limited, threads whose execution states are ready preempt computing resources from one another. After a thread preempts computing resources, it continues to hold them even if its execution state is interrupted, and it does not withdraw its demand, so other threads cannot fully utilize the resources and computing resources are wasted. For this reason, the cgroup controller usually performs thread-switching operations actively, that is, it switches out a worker thread whose execution state is interrupted but which still consumes computing resources, in favor of another worker thread whose execution state is ready.
However, the above scheme has certain drawbacks. First, the cgroup controller generally runs in the processor's kernel mode, while the worker threads created by the process generally run in user mode. When the controller performs a thread-switching operation, switching between kernel mode and user mode is involved, which causes additional consumption of computing resources.
Second, in a cloud computing scenario, a user runs their process in a virtual machine and deploys the virtual machine in a virtual machine container of a cloud computing server (a common virtual machine container is, for example, a K8S container). However, common virtual machine containers do not support modifying the cgroup; that is, the worker thread groups configured by the user when initializing the process, and each group's upper limit of consumable computing resources, are difficult to change afterwards. This is inconvenient when the user needs to modify the groups or their limits as actual business needs change.
Therefore, the present disclosure provides another technical solution. A process may perform computing tasks by creating work coroutines rather than worker threads. To flexibly limit the computing resources consumed by the process, the work coroutines may be grouped, and a corresponding upper limit of consumable computing resources may be set for each work coroutine group. The process also creates a plurality of scheduling threads, which are responsible for scheduling the processor's computing resources and for ensuring that the total amount of computing resources consumed by each work coroutine group does not exceed that group's upper limit.
In a concrete implementation, each work coroutine enters a ready queue after its execution state becomes ready, and a scheduling thread reads work coroutines from the ready queue. If the total amount of consumed computing resources of the work coroutine group to which a work coroutine belongs exceeds that group's upper limit of consumable computing resources, the scheduling thread refuses to provide itself to the work coroutine for occupation; if it does not exceed the upper limit, the scheduling thread provides itself to the work coroutine for occupation so as to schedule computing resources for it. A work coroutine that occupies a scheduling thread continuously consumes the computing resources scheduled by that thread, and releases its occupation of the scheduling thread after its execution state is interrupted.
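For illustration, the ready-queue flow just described can be sketched as a minimal single-threaded simulation. The `Group`, `Coroutine`, and `schedule` names are assumptions made for this sketch, not identifiers from the patent, and resource consumption is modeled as discrete units rather than processor time:

```python
from collections import deque

class Group:
    """A work coroutine group with an upper limit of consumable resources."""
    def __init__(self, cap):
        self.cap = cap        # upper limit, in abstract resource units
        self.consumed = 0     # total units consumed so far by this group

class Coroutine:
    """A work coroutine: runs in slices and suspends itself when blocked."""
    def __init__(self, name, group, slices):
        self.name, self.group = name, group
        self.remaining = slices  # slices of work left before it finishes

def schedule(ready, rejected):
    """One scheduling-thread pass: read a coroutine from the ready queue and
    either grant it one resource unit or refuse to schedule resources for it."""
    co = ready.popleft()
    if co.group.consumed >= co.group.cap:  # scheduling condition not met
        rejected.append(co.name)           # refuse resource scheduling
        return
    co.group.consumed += 1                 # coroutine consumes one unit
    co.remaining -= 1
    if co.remaining > 0:                   # execution state interrupted:
        ready.append(co)                   # self-suspend, re-enter when ready

# usage: group A capped at 2 units, group B at 5
a, b = Group(cap=2), Group(cap=5)
ready = deque([Coroutine("a1", a, 3), Coroutine("b1", b, 3)])
rejected = []
while ready:
    schedule(ready, rejected)
print(a.consumed, b.consumed, rejected)  # 2 3 ['a1']
```

Note how group A's coroutine is cut off at its group's cap while group B's coroutine runs to completion, which is the essence of the scheduling condition.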
When computing resources are limited, threads whose execution states are ready preempt computing resources from one another. After a thread preempts computing resources, it continues to hold them even if its execution state is interrupted; it does not withdraw its demand for the resources, so other threads cannot fully utilize them and computing resources are wasted. Coroutines differ from threads in that they share computing resources cooperatively: after a coroutine's execution state is interrupted, it automatically withdraws its demand for computing resources (it may also be said that the coroutine actively suspends itself after its execution state is interrupted), so it does not hinder other coroutines' use of the resources, and resource utilization is improved.
In addition, to prevent work coroutines from requesting computing resources without limit, they need to be grouped, with a corresponding upper limit of consumable computing resources configured for each work coroutine group. The process creates a plurality of scheduling threads to manage resource scheduling, and a scheduling thread schedules computing resources for a work coroutine only after determining that the total amount of computing resources consumed by the work coroutine's group has not exceeded its limit.
In addition, in a cloud computing scenario, the process may be deployed in a virtual machine, and the virtual machine may be deployed in a virtual machine container of a cloud computing server or directly on the cloud computing server. Moreover, under the cgroup mechanism, the controller usually does not count the computing resources actually consumed by a worker thread group; it only manages the group against its upper limit of consumable resources, which is not fine-grained enough. In the present technical solution, a coroutine automatically suspends itself after its execution state is interrupted and automatically stops consuming computing resources, so the amount of computing resources each coroutine consumes can be conveniently counted and recorded, enabling fine-grained control of a coroutine group's resource consumption.
The technical solution is described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating a method for scheduling computing resources of a processor, which includes the following steps:
S100: any work coroutine of any work coroutine group enters the ready queue.
The method flow shown in fig. 1 may be applied to a process running on a computing device, and the process may be a process deployed on the computing device by a user to implement business computation of the user.
The process may create a number of work coroutine groups and a number of scheduling threads; each work coroutine group includes at least two work coroutines. Each work coroutine group may correspond to an upper limit of consumable computing resources.
A user can flexibly configure the work coroutines created by the process, the division of work coroutine groups, and the upper limit of consumable computing resources corresponding to each group, through a coroutine group configuration interface that the process exposes to the user. In some embodiments, a user may divide the work coroutine groups according to the different computing tasks the process needs to implement, with each computing task corresponding one to one to a work coroutine group; each computing task is implemented by the work coroutines in its corresponding group.
That is, the process exposes a coroutine group configuration interface to the user, and in response to a coroutine group update instruction the user inputs by calling the configuration interface, the process updates the work coroutines contained in one or more created work coroutine groups and/or adjusts the upper limit of consumable computing resources corresponding to one or more created work coroutine groups.
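Such a configuration interface might look like the following sketch. The class and method names (`CoroutineGroupConfig`, `move`, `set_cap`) are hypothetical, chosen only to illustrate regrouping coroutines and adjusting caps at runtime, which a fixed cgroup configuration would not allow:

```python
class CoroutineGroupConfig:
    """Hypothetical runtime configuration interface a process could expose so
    the user can regroup work coroutines and adjust per-group resource caps."""
    def __init__(self):
        self.groups = {}   # group name -> {"members": set, "cap": units}

    def create_group(self, name, cap):
        self.groups[name] = {"members": set(), "cap": cap}

    def move(self, coroutine_id, group):
        # update which work coroutines each group contains
        for g in self.groups.values():
            g["members"].discard(coroutine_id)
        self.groups[group]["members"].add(coroutine_id)

    def set_cap(self, group, cap):
        # adjust the group's upper limit of consumable computing resources
        self.groups[group]["cap"] = cap

cfg = CoroutineGroupConfig()
cfg.create_group("ingest", cap=40)
cfg.create_group("report", cap=10)
cfg.move("co-1", "ingest")
cfg.move("co-1", "report")   # regroup in response to a business change
cfg.set_cap("report", 25)    # raise the cap without restarting the process
```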
In other embodiments, a user may map one part of the work coroutine groups to computing tasks, and additionally treat another part as work coroutine groups with higher privilege levels; work coroutines in the higher-privilege groups may be used to implement different computing tasks or the same computing task. The nature of the higher-privilege work coroutine groups is not expanded here and is described in detail later.
The computing resource of the processor may generally be the available time of each processor core, and the upper limit of consumable computing resources corresponding to each work coroutine group includes an upper limit on the proportion of a processing core's available time allocated to that group.
In some embodiments, the process may be deployed in a virtual machine, and the virtual machine may be deployed directly on a cloud computing server or in a virtual machine container of the cloud computing server. In these embodiments, the computing resource may be the available time of each virtual processing core of the virtual machine, and the upper limit of consumable computing resources corresponding to each work coroutine group includes an upper limit on the proportion of a virtual processing core's available time allocated to that group.
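As a small illustration of such a time-proportion cap, a group's limit can be converted into a per-period time budget. The function name and the 100 ms accounting period are assumptions for this sketch, not values from the patent:

```python
def slice_budget_us(core_period_us, share):
    """Convert a group's cap, expressed as a fraction of one (virtual)
    processing core's available time, into a per-period budget in
    microseconds."""
    if not 0 < share <= 1:
        raise ValueError("share must be in (0, 1]")
    return int(core_period_us * share)

# e.g. a 100 ms accounting period with a 30% time-proportion cap
budget = slice_budget_us(100_000, 0.30)
print(budget)  # 30000
```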
In some embodiments, the upper limit of consumable computing resources for each work coroutine group is positively correlated with the cost the user pays for that group. For example, in a cloud computing scenario, the upper limit corresponding to each work coroutine group is positively correlated with the cost the user pays the cloud computing service provider for that group.
Typically, a work coroutine may enter the ready queue after its own execution state becomes ready.
It should be noted that a coroutine's execution state being ready can be understood as the coroutine having no restriction in aspects such as I/O, timers, or concurrency synchronization means, meaning it can continue to advance the execution of its corresponding task. A coroutine's execution state being interrupted can be understood as the coroutine having a restriction in at least one of these aspects, meaning it cannot continue to advance its task and therefore cannot continue to consume computing resources.
That is, the execution state of a work coroutine being ready can be understood as including: the work coroutine has currently acquired the necessary execution parameters input by the user; and the work coroutine determines that a pre-planned next execution time point has currently been reached; and the work coroutine determines that the concurrency lock restricting its current reading/writing of data from storage has been released.
The execution state interruption of a work coroutine may include two possibilities. One is that the work coroutine is forced by the process to stop executing, after which its execution state cannot become ready again; the other is that the work coroutine suspends execution because some execution condition is not met, and later, when the condition is met, its execution state can be considered ready and execution can continue. Suspension of a work coroutine may occur because: the work coroutine has not currently acquired the necessary execution parameters input by the user; or the work coroutine determines that a pre-planned next execution time point has not currently been reached; or the work coroutine determines that a concurrency lock restricts its current reading/writing of data from storage.
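The suspend-until-ready behavior above can be modeled with a generator-based coroutine: it yields the reason that blocks it, and becomes ready again once that reason is resolved. The `worker` function and the "waiting-for-input" marker are illustrative assumptions, not patent identifiers:

```python
def worker(inbox):
    """A generator-based work coroutine: it yields a blocking reason to
    suspend itself, and is ready again once that reason is resolved."""
    while not inbox:               # required user input not yet available
        yield "waiting-for-input"  # execution state interrupted: self-suspend
    yield f"processed:{inbox.pop()}"

inbox = []
co = worker(inbox)
states = [next(co)]        # blocked: no execution parameters yet
inbox.append("params")     # the user supplies the needed input
states.append(next(co))    # execution state ready again: work proceeds
print(states)
```

The coroutine never busy-waits: between the two `next` calls it holds no computing resources, which mirrors the "withdraw the demand" property the text attributes to coroutines.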
S102: the scheduling thread reads the work coroutines from the ready queue and judges whether the read work coroutines meet scheduling conditions; if the determination result is yes, step S104 is executed, and if the determination result is no, step S106 is executed.
In some embodiments, a process may create at least two scheduling threads, with different scheduling threads used to schedule different sets of the processor's computing resources. In a cloud computing scenario, different scheduling threads may manage different virtual cores; for example, each scheduling thread may correspond one to one to a virtual core. The scheduling threads may read work coroutines from the ready queue based on a load balancing mechanism.
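A minimal sketch of several scheduling threads draining one shared ready queue follows. Drawing from a single thread-safe queue is one simple way to balance load across the threads; the three-core setup and all names here are assumptions for illustration:

```python
import queue
import threading

# Each scheduling thread manages one (virtual) core; work coroutines are
# drawn from one shared ready queue, which balances load across threads.
ready = queue.Queue()
for i in range(12):
    ready.put(f"co-{i}")

served = {0: [], 1: [], 2: []}   # core id -> coroutines it scheduled
lock = threading.Lock()

def scheduling_thread(core_id):
    while True:
        try:
            co = ready.get_nowait()   # threads compete for ready coroutines
        except queue.Empty:
            return
        with lock:
            served[core_id].append(co)

threads = [threading.Thread(target=scheduling_thread, args=(c,))
           for c in served]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(len(v) for v in served.values()))  # 12
```

Because `queue.Queue` is thread-safe, each ready coroutine is handed to exactly one scheduling thread, with no coroutine lost or scheduled twice.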
A work coroutine meeting the scheduling condition means that the total amount of consumed computing resources of the work coroutine group to which it belongs does not exceed that group's upper limit of consumable computing resources.
S104: the scheduling thread provides resource scheduling for the work coroutine.
The scheduling thread may provide itself to the work coroutine for occupation so as to schedule computing resources for it. In some embodiments, a scheduling thread may be occupied by only one work coroutine at a time. In other embodiments, a scheduling thread may be occupied by multiple work coroutines, and it dispatches computing resources to them based on a predetermined policy within the amount of computing resources the scheduling thread can dispatch.
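As one concrete stand-in for the "predetermined policy" in the multi-occupant case, a scheduling thread could hand out its dispatchable resource units round-robin. This is only an assumed example policy, and the `share_thread` name is illustrative:

```python
from itertools import cycle

def share_thread(coroutines, quota):
    """One scheduling thread occupied by several work coroutines: within the
    amount of resources it can dispatch (`quota` units), it hands out units
    round-robin, a simple stand-in for the predetermined policy."""
    grants = {c: 0 for c in coroutines}
    for c, _ in zip(cycle(coroutines), range(quota)):
        grants[c] += 1
    return grants

print(share_thread(["co-a", "co-b", "co-c"], quota=7))
# {'co-a': 3, 'co-b': 2, 'co-c': 2}
```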
S106: and the scheduling thread refuses to provide resource scheduling for the work coroutine.
If a work coroutine is refused computing resource scheduling by the scheduling thread, it can withdraw its demand for computing resources, which is equivalent to the scheduling thread refusing to provide itself to the work coroutine for occupation.
In some embodiments, the process (specifically, for example, a scheduling thread, or an accumulation thread the process creates that is dedicated to accumulating the total amount of computing resources consumed by each work coroutine group) may periodically or aperiodically re-accumulate each group's consumed total. That is, each group's consumed total may be periodically or aperiodically cleared and accumulated anew; in this case, if a group's total exceeded its upper limit before re-accumulation, the work coroutines in that group may satisfy the scheduling condition again after re-accumulation.
In some embodiments, if a work coroutine is refused computing resource scheduling by a scheduling thread, the work coroutine may suspend itself and enter the ready queue after the consumed total of the work coroutine group to which it belongs has been cleared and re-accumulated.
In some embodiments, if the scheduling thread refuses to provide itself to the work coroutine for occupation, it creates a timer and binds it to the work coroutine. The timer's duration is the interval between the current time point and the next time point at which the consumed total of the work coroutine's group is re-accumulated. The work coroutine enters the ready queue after determining that the timer has expired.
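Assuming the re-accumulation happens on a fixed period, the bound timer's duration is simply the time remaining until the next reset. The periodic-reset assumption and the function name are illustrative:

```python
def timer_duration(now, reset_period):
    """Duration to bind to a refused work coroutine's timer: the time from
    `now` until the next point at which its group's consumed total is
    cleared and re-accumulated (resets assumed every `reset_period` units)."""
    return reset_period - (now % reset_period)

# group totals reset every 100 ms; a coroutine refused at t = 130 ms
print(timer_duration(130, 100))  # 70: re-enter the ready queue 70 ms later
```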
S108: if the work coroutine is allowed to occupy the scheduling thread, it continuously consumes the computing resources scheduled by that thread, and releases its occupation of the scheduling thread after its execution state is interrupted.
After a coroutine's execution state is interrupted, it can suspend itself by virtue of the nature of coroutines and no longer requires computation; this can be understood as the coroutine "consciously" ending its occupation of computing resources. This means the scheduling thread does not need to perform a coroutine-switching operation, no conversion between kernel mode and user mode is involved, and no extra computing resources are consumed: coroutines can switch among themselves in user mode.
If a coroutine determines that its execution state is ready again, it can enter the ready queue by itself; a scheduling thread can then read it from the ready queue again and schedule computing resources for it, provided the computing resources actually consumed by the group to which it belongs have not exceeded the limit.
In some embodiments, the plurality of work coroutine groups created by the process includes at least one first-type work coroutine group and at least one second-type work coroutine group. If the scheduling thread observes that a work coroutine of a first-type group, by continuously consuming computing resources, causes the scheduling condition to no longer be met, it refuses to continue providing itself to that work coroutine for occupation; if the same happens for a work coroutine of a second-type group, the scheduling thread continues to provide itself to that work coroutine for occupation.
That is, the aforementioned second-type work coroutine group can be understood as a work coroutine group with a higher privilege level. If the consumed total of the group to which such a work coroutine belongs exceeds its upper limit because the coroutine continuously consumes computing resources while occupying the scheduling thread, the scheduling thread may continue to over-allocate computing resources to the work coroutine as long as its execution state is not interrupted, until the coroutine's execution state is interrupted and it actively suspends itself.
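The two-tier decision can be reduced to a small predicate. The `grant` function and its parameter names are assumptions for this sketch; it only captures the distinction that ordinary groups are cut off at their cap while privileged groups may overrun until the coroutine itself stops:

```python
def grant(group_consumed, group_cap, privileged, running):
    """Decide whether a scheduling thread keeps serving a work coroutine.
    Ordinary (first-type) groups are cut off at their cap; privileged
    (second-type) groups may overrun while the coroutine is still running,
    and stop only when the coroutine's own execution state is interrupted."""
    if group_consumed < group_cap:
        return True
    return privileged and running

assert grant(5, 10, privileged=False, running=True)       # under cap: allowed
assert not grant(10, 10, privileged=False, running=True)  # ordinary: cut off
assert grant(10, 10, privileged=True, running=True)       # privileged overrun
assert not grant(10, 10, privileged=True, running=False)  # suspends itself
```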
Referring to fig. 2, fig. 2 illustratively provides a computing resource scheduling process. As shown in fig. 2, a process running in a virtual machine of a computing device may create three scheduling threads, each responsible for managing the computing resources of one virtual machine core. Each scheduling thread may read a work coroutine from the ready queue and provide the read work coroutine with computing resource scheduling for that virtual machine core. The work coroutine continuously consumes the computing resources provided by the scheduling thread; if its execution state is interrupted, the work coroutine can suspend itself and no longer occupies the scheduling thread. After its execution state is ready again, the work coroutine enters the ready queue and waits to be read by a scheduling thread and provided with resource scheduling.
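The arrangement of fig. 2 can be sketched with OS threads standing in for the scheduling threads. This is an illustration under assumed names (`scheduling_thread`, `served`, and the use of integers as stand-in work coroutines are all hypothetical): several scheduling threads, one per virtual core, drain a shared ready queue.

```python
import queue
import threading

# Shared ready queue; integers stand in for ready work coroutines.
ready: "queue.Queue[int]" = queue.Queue()
for task_id in range(6):
    ready.put(task_id)

served: dict[int, list[int]] = {}  # virtual core -> tasks it scheduled
lock = threading.Lock()

def scheduling_thread(core: int) -> None:
    """One scheduling thread per virtual core, reading from the shared queue."""
    while True:
        try:
            task = ready.get_nowait()
        except queue.Empty:
            return  # no ready coroutines left
        with lock:
            served.setdefault(core, []).append(task)

threads = [threading.Thread(target=scheduling_thread, args=(c,)) for c in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

all_tasks = sorted(t for tasks in served.values() for t in tasks)
print(all_tasks)  # [0, 1, 2, 3, 4, 5]: every ready coroutine scheduled exactly once
```

Which core serves which task is nondeterministic, matching the figure's point that any scheduling thread may read any ready coroutine; only the invariant that each coroutine is scheduled exactly once is fixed.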
The present disclosure provides a computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements the functions of the aforementioned process.
The present disclosure also provides a computing device on which a process runs, the process creating a plurality of work coroutine groups and a plurality of scheduling threads; each work coroutine group comprises at least two work coroutines;
each work coroutine enters a ready queue after its execution state is ready;
each scheduling thread reads a work coroutine from the ready queue and judges whether the read work coroutine meets a scheduling condition; wherein the scheduling condition comprises: the total amount of computing resources consumed by the work coroutine group to which the work coroutine belongs does not exceed the upper limit of consumable computing resources corresponding to that group; if the judgment result is yes, the scheduling thread provides itself for the work coroutine to occupy, so as to schedule computing resources for the work coroutine; if the judgment result is no, the scheduling thread refuses to provide itself for the work coroutine to occupy;
if the work coroutine occupies the scheduling thread, it continuously consumes the computing resources scheduled by the scheduling thread; it releases its occupation of the scheduling thread after its execution state is interrupted, and enters the ready queue after its execution state is ready again.
Fig. 3 is a schematic structural diagram of a computing device provided by the present disclosure, where the computing device 15 may include, but is not limited to: a processor 151, a memory 152, and a bus 153 that connects the various system components, including the memory 152 and the processor 151.
Wherein the memory 152 stores computer instructions executable by the processor 151 such that the processor 151 is capable of performing the methods of any of the embodiments of the present disclosure. The memory 152 may include a random access memory unit RAM1521, a cache memory unit 1522, and/or a read only memory unit ROM 1523. The memory 152 may further include: a program tool 1525 having a set of program modules 1524, the program modules 1524 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, one or more combinations of which may comprise an implementation of a network environment.
The bus 153 may include, for example, a data bus, an address bus, a control bus, and the like. The computing device 15 may also communicate with external devices 155 through the I/O interface 154; the external devices 155 may be, for example, a keyboard, a bluetooth device, etc. The computing device 15 may also communicate with one or more networks, for example local area networks, wide area networks, public networks, etc., through the network adapter 156. The network adapter 156 may also communicate with other modules of the computing device 15 via the bus 153, as shown.
Further, while the operations of the disclosed methods are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
While the spirit and principles of the present disclosure have been described with reference to several particular embodiments, it is to be understood that the present disclosure is not limited to the particular embodiments disclosed, and that the division into aspects is for convenience of description only; features in these aspects may be combined to advantage. The disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalent replacements, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (12)

1. A computing resource scheduling method applied to a computing device, wherein a process of the computing device creates a plurality of work coroutine groups and a plurality of scheduling threads, the method comprising:
any work coroutine of any work coroutine group enters a ready queue after its execution state is ready;
any scheduling thread reads a work coroutine from the ready queue and judges whether the read work coroutine meets a scheduling condition; wherein the scheduling condition comprises: the total amount of computing resources consumed by the work coroutine group to which the work coroutine belongs does not exceed the upper limit of consumable computing resources corresponding to that group;
if the judgment result is yes, the scheduling thread provides resource scheduling for the work coroutine; if the judgment result is no, the scheduling thread refuses to provide resource scheduling for the work coroutine;
if allowed to occupy the scheduling thread, the work coroutine consumes the computing resources scheduled by the scheduling thread, and releases its occupation of the scheduling thread after its execution state is interrupted.
2. The method of claim 1, further comprising:
the process, on a regular or irregular basis, re-counts the total amount of computing resources consumed by each work coroutine group.
3. The method of claim 2, further comprising:
if resource scheduling is refused for the work coroutine, the scheduling thread creates a timer and binds the timer to the work coroutine; the timing duration of the timer is the duration from the current time point to the time point at which the total amount of computing resources consumed by the work coroutine group to which the work coroutine belongs is next re-counted;
the work coroutine enters the ready queue after determining that the timer has expired.
4. The method of claim 1, further comprising, after the execution state of the work coroutine is interrupted:
the work coroutine enters the ready queue after its execution state is ready again.
5. The method of claim 1, wherein the plurality of work coroutine groups created by the process includes at least one work coroutine group of a first type and at least one work coroutine group of a second type; the method further comprising:
if the scheduling thread monitors that a work coroutine of the first type has continuously consumed computing resources such that the scheduling condition is no longer met, the scheduling thread refuses to provide resource scheduling for that work coroutine;
and if the scheduling thread monitors that a work coroutine of the second type has continuously consumed computing resources such that the scheduling condition is no longer met, the scheduling thread continues to provide resource scheduling for that work coroutine until the work coroutine's execution state is interrupted.
6. The method of claim 1, wherein the upper limit of consumable computing resources for each work coroutine group is positively correlated with the cost paid by the user for that work coroutine group.
7. The method of claim 1, wherein the plurality of scheduling threads includes at least two scheduling threads, different scheduling threads being used for scheduling different sets of computing resources of the processor;
wherein any scheduling thread reading a work coroutine from the ready queue comprises:
any scheduling thread reads a work coroutine from the ready queue based on a load balancing mechanism with the other scheduling threads.
8. The method of claim 1, wherein the execution state of the work coroutine being ready comprises:
the work coroutine has currently acquired the necessary execution parameters input by a user; and the work coroutine determines that a pre-planned next execution time point has currently been reached; and the work coroutine determines that the concurrent lock restricting its current reading/writing of data from storage has been released;
and the execution state of the work coroutine being interrupted comprises:
the work coroutine has not currently acquired the necessary execution parameters input by the user; or the work coroutine determines that a pre-planned next execution time point has not currently been reached; or the work coroutine determines that there is a concurrent lock restricting its current reading/writing of data from storage.
9. The method of claim 1, wherein the process exposes a coroutine group configuration interface to a user;
the method further comprising:
the process, in response to a coroutine group update instruction input by the user through the configuration interface, updates the work coroutines contained in one or more created work coroutine groups and/or adjusts the upper limit of consumable computing resources corresponding to one or more created work coroutine groups.
10. The method of claim 1, wherein the process runs in a virtual machine deployed on the computing device; the computing resources scheduled by the scheduling thread comprise: the available time of a virtual processing core of the virtual machine; and the upper limit of consumable computing resources corresponding to each work coroutine group comprises: an upper limit of the time share assigned to the work coroutine group from the available time of a virtual processing core.
11. A computing device comprising a memory and a processor; the memory is for storing computer instructions executable on the processor, and the processor is configured to implement the method of any one of claims 1-10 when executing the computer instructions.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 10.
CN202210412767.XA 2022-04-19 2022-04-19 Computing resource scheduling method, medium and computing device Pending CN114706663A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210412767.XA CN114706663A (en) 2022-04-19 2022-04-19 Computing resource scheduling method, medium and computing device


Publications (1)

Publication Number Publication Date
CN114706663A true CN114706663A (en) 2022-07-05

Family

ID=82175311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210412767.XA Pending CN114706663A (en) 2022-04-19 2022-04-19 Computing resource scheduling method, medium and computing device

Country Status (1)

Country Link
CN (1) CN114706663A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115048206A (en) * 2022-08-15 2022-09-13 阿里巴巴(中国)有限公司 Resource scheduling method and server


Similar Documents

Publication Publication Date Title
JP5324934B2 (en) Information processing apparatus and information processing method
CN102567086B (en) Task scheduling method, equipment and system
JP5452496B2 (en) Hierarchical reserved resource scheduling infrastructure
CN106897132A (en) The method and device of a kind of server task scheduling
CN109710416B (en) Resource scheduling method and device
CN107851039A (en) System and method for resource management
CN101122872A (en) Method for managing application programme workload and data processing system
CN103179048A (en) Method and system for changing main machine quality of service (QoS) strategies of cloud data center
CN111104227B (en) Resource control method and device of K8s platform and related components
EP4177751A1 (en) Resource scheduling method, resource scheduling system, and device
CN108123980A (en) A kind of resource regulating method and system
CN112486642B (en) Resource scheduling method, device, electronic equipment and computer readable storage medium
CN112783659A (en) Resource allocation method and device, computer equipment and storage medium
CN109960591A (en) A method of the cloud application resource dynamic dispatching occupied towards tenant's resource
KR20070090649A (en) Apparatus and method for providing cooperative scheduling on multi-core system
CN114138434A (en) Big data task scheduling system
CN113032102A (en) Resource rescheduling method, device, equipment and medium
CN114625533A (en) Distributed task scheduling method and device, electronic equipment and storage medium
CN114706663A (en) Computing resource scheduling method, medium and computing device
CN116010064A (en) DAG job scheduling and cluster management method, system and device
CN112073532B (en) Resource allocation method and device
US20080022287A1 (en) Method And System For Transferring Budgets In A Technique For Restrained Budget Use
CN109189581B (en) Job scheduling method and device
KR20150089665A (en) Appratus for workflow job scheduling
CN114153604A (en) Container cluster control method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination