WO2016118164A1 - Scheduler-assigned processor resource groups - Google Patents

Scheduler-assigned processor resource groups

Info

Publication number
WO2016118164A1
WO2016118164A1 (PCT/US2015/012730, US2015012730W)
Authority
WO
WIPO (PCT)
Prior art keywords
processor resource
scheduler
processor
queue
run
Prior art date
Application number
PCT/US2015/012730
Other languages
English (en)
Inventor
Daniel Gmach
Vanish Talwar
Original Assignee
Hewlett Packard Enterprise Development Lp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Enterprise Development Lp filed Critical Hewlett Packard Enterprise Development Lp
Priority to PCT/US2015/012730 priority Critical patent/WO2016118164A1/fr
Publication of WO2016118164A1 publication Critical patent/WO2016118164A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5012Processor sets

Definitions

  • Computers contain computational resources as a mechanism to execute applications. Computers are able to execute multiple processes based on scheduling time with resources for the processes to complete. On a single processor computer, multiple applications can appear to be running simultaneously when most processes are waiting while a single process is using the processor at any given time. The length of time to complete a process is affected by the scheduling policy managing access to each resource.
  • Figures 1 and 2 are block diagrams depicting example scheduler systems.
  • FIG. 4 depicts example modules used to implement example scheduler systems
  • Figures 5 and 6 are flow diagrams depicting example methods of resource scheduling.
  • Compute devices commonly execute multiple applications (e.g., a group of associated, executable processes to perform a specific operation or set of operations).
  • Applications can require special schedulers, such as a video streaming application that utilizes a real-time scheduler to guarantee jitter-free video provisioning.
  • a scheduler can maintain a run-queue according to a scheduler policy.
  • a fair-share scheduler can manage a run-queue to provision time on a processor resource to allow each process (e.g., an instance of an operation to perform on the system 100) in the queue to receive the same time interval on the processor resource.
  • a real-time scheduler can maintain a run-queue where a priority process can take over the processor resource at any time and pause other processes with less priority from utilizing the processor resource until the priority process is complete.
  • Personal computer systems commonly include multiple core processors and enterprise systems can include multiple central processing units ("CPUs"). Management of the entire pool of processor resources with a single scheduler can fail to meet the requirements of each application even when multiple cores are available.
  • CPU: central processing unit
  • processor resource groups of a system supporting multiple processor resources, where each processor resource group is associated with a scheduling policy provided by a scheduler and tasks are allocated to a processor resource group based on their scheduling requirements.
  • the cores of a system can be space partitioned into processor resource groups to allow a group of cores to execute one scheduling policy while a disjoint set of cores can execute another scheduling policy.
  • a compute apparatus can include a framework for supporting heterogeneous schedulers of an operating system to enable application execution with different scheduling requirements on the same physical system.
  • FIGS 1 and 2 are block diagrams depicting example scheduler systems 100 and 200.
  • the example scheduler system 100 of Figure 1 generally includes a processor resource assignment engine 104, a process assignment engine 106, and a plurality of processor resources 110.
  • the process assignment engine 106 can assign a process to a processor resource group maintained by the processor resource assignment engine 104 where each processor resource group is a subset of the plurality of processor resources 110 and each processor resource group is managed by a scheduler.
  • the example scheduler system 100 can include a container engine (not shown) to allow processes of the system 100 to be organized into groups that are associated with the processor resource groups. The functionality of the container engine is discussed herein with reference to container module 208 of Figure 2 and container engine 308 of Figure 3.
  • the processor resource assignment engine 104 represents any circuitry or combination of circuitry and executable instructions to maintain a plurality of processor resource groups based on scheduler activity information.
  • Scheduler activity information is any state information associated with the scheduler.
  • scheduler activity information can include whether a scheduler is active (e.g., whether an application or task has requested to be allocated a processor resource 110 using the policy of the scheduler), the number of processes assigned to a scheduler, and/or other information associated with the activity of the scheduler.
  • Each processor resource 110 is assignable to a processor resource group at run time to allow dynamic allocation of processor resources 110 to groups. For example, as a first processor resource group receives more processor resource requests than a second processor resource group, more processor resources 110 can be allocated to the first processor resource group than the second processor resource group.
  • the processor resource groups can represent a space partition of the plurality of processor resources 110.
  • the plurality of processor resources 110 can be divided into disjoint subsets of the plurality of processor resources 110 available on the system 100, where each processor resource group is one of the disjoint subsets.
  • the general purpose processor resources can be assigned to a first processor resource group and the special purpose processor resources can be assigned to a second processor resource group.
  • the space partition can be updated using system calls, such as kernel system calls that utilize control group settings to identify how the plurality of processor resources 110 should be isolated or otherwise limited in access to the plurality of processor resources 110.
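  • By way of illustration only (this sketch is not part of the patent disclosure), one way such kernel control group settings can be expressed on Linux is through cgroup-v1 cpuset files; the sketch below assumes a cpuset controller mounted at /sys/fs/cgroup/cpuset, root privileges, and invented group names and core ranges.

```python
# Hypothetical sketch: space-partition cores into disjoint processor resource
# groups with cgroup-v1 cpuset files. Paths, names, and core ranges are
# assumptions for illustration; running this requires root on a Linux system
# with the cpuset controller mounted at the path below.
from pathlib import Path

CPUSET_ROOT = Path("/sys/fs/cgroup/cpuset")  # assumed cpuset mount point

def create_group(name: str, cpus: str, mems: str = "0") -> Path:
    """Create a cpuset restricted to the given cores, e.g. cpus='0-5'."""
    group = CPUSET_ROOT / name
    group.mkdir(exist_ok=True)
    (group / "cpuset.cpus").write_text(cpus)  # cores owned by this group
    (group / "cpuset.mems").write_text(mems)  # memory nodes the group may use
    return group

def place_process(group: Path, pid: int) -> None:
    """Confine a process to the group's cores by adding it to the tasks file."""
    (group / "tasks").write_text(str(pid))

# Disjoint subsets of an assumed 8-core system, one group per scheduler policy.
fair_share_group = create_group("fair_share_group", "0-5")
real_time_group = create_group("real_time_group", "6-7")
```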
  • Each processor resource 110 is managed by a scheduler designated to the processor resource group to which the processor resource 110 is assigned.
  • a general purpose processor resource of a first processor resource group can be managed by a fair-share scheduler while a special purpose processor resource of a second processor resource group can be managed by a real-time scheduler.
  • if the general purpose processor resource was reassigned to the second processor resource group, then the general purpose processor would cease to be managed by the fair-share scheduler and instead would become managed by the real-time scheduler.
  • the processor resource assignment engine 104 can reassign a processor resource from one processor resource group to another.
  • the processor resource assignment engine 104 can represent a combination of circuitry and executable instructions.
  • the reassignment by the processor resource assignment engine can be based on at least one of an active status of the scheduler, a change in a control group setting (i.e., a specification of a group associated with limitations on resource usage) associated with an application of the processor resource request, and a load balance strategy.
  • processor resources 110 can be reassigned to a group with higher than average utilization levels of processor resources of the group.
  • the processor resources 110 of the processor resource group designated to the real-time scheduler can be reallocated to the processor resource group designated to the fair-share scheduler.
  • the processor resource assignment engine 104 can analyze the space partition based on scheduler activity information and a control parameter of the process. For example, the processor resource assignment engine 104 can identify that the space partition lacks sufficient resources in one of the partitions (e.g., one of the processor resource groups) by gathering demand levels and utilization levels of the processor resources 110 in each partition to execute the processes being assigned to the processor resource group to achieve a quality-of-service ("QoS") threshold. The processor resource assignment engine 104 can identify whether a particular number of processor resources are available to execute processor resource requests according to a scheduler policy.
  • the processor resource assignment engine 104 can wait until another core becomes available or migrate processes to another processor by pausing execution of the processes in a first run-queue and moving the processes to a second run-queue to empty the first run-queue and allow the core associated with the first run-queue to be available for reassignment to the processor resource group to be created for the gang scheduler.
  • the process assignment engine 106 represents any circuitry or combination of circuitry and executable instructions to manage assignment of a process to a processor resource 110 of the system 100.
  • the process assignment engine 106 can represent any circuitry or combination of circuitry and executable instructions to assign a processor resource request to one of the plurality of processor resource groups, identify a first processor resource 110 of the plurality of processor resources assigned to one of the processor resource groups, and enqueue the process associated with the processor resource request on a run-queue of the first processor resource 110.
  • a kernel can execute system calls via the process assignment engine 106 to organize processes into processor resource groups and the kernel can manage the processes using the schedulers assigned to the processor resource groups.
  • the process assignment engine 106 can assign a processor resource request based on a set of process characteristics and a scheduler policy.
  • the processor resource request may be a request for an application performing content streaming and based on that characteristic is associated with a scheduler policy for content streaming applications, such as a real-time scheduler having a scheduling policy to give the application access to the processor resource 110 in real-time.
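  • As an illustration only (not the patent's own code), a characteristic-to-policy mapping of this kind could be realized on Linux with the scheduling calls exposed by Python's os module; the characteristic name, core sets, and priority choice below are assumptions, and setting SCHED_FIFO requires root or CAP_SYS_NICE.

```python
# Hypothetical sketch: map a process characteristic to a scheduler policy and
# to the cores of the matching processor resource group (Linux only).
import os

GROUP_CORES = {                              # assumed space partition
    "real_time_group": {6, 7},
    "fair_share_group": {0, 1, 2, 3, 4, 5},
}

def assign_processor_resource_request(pid: int, characteristic: str) -> None:
    if characteristic == "content_streaming":
        # Real-time policy so the process can preempt lower-priority work.
        prio = os.sched_get_priority_min(os.SCHED_FIFO)
        os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(prio))
        os.sched_setaffinity(pid, GROUP_CORES["real_time_group"])
    else:
        # Fair-share (CFS) policy; SCHED_OTHER requires a priority of 0.
        os.sched_setscheduler(pid, os.SCHED_OTHER, os.sched_param(0))
        os.sched_setaffinity(pid, GROUP_CORES["fair_share_group"])
```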
  • the process assignment engine 106 can identify a processor resource 110 based on the assignment of the processor resource 110 to a processor resource group and enqueue the process of the processor resource request on the identified processor resource 110. For example, when the processor resources 110 are space partitioned, a processor resource 110 is selected from the space allocated to the processor resource group and the process is then added to the queue of the processor resource 110 for execution of the process on the processor resource 110. Each processor resource 110 in a processor resource group executes the processes in the run-queue using the policy of the scheduler. For example, the run-queue can execute processes in the queue according to a strategy of execution defined by a policy associated with the space in which the processor resource 110 of the associated run-queue is partitioned.
  • the strategy of the scheduler policy is the management method of the queue of processes, such as fair allotment of time with the processor resource 110 for a fair-share scheduling policy or a priority-based allotment of time with the processor resource 110 for a real-time scheduler policy.
  • Figure 2 depicts that the example system 200 can comprise a memory resource 220 and a processor resource 210.
  • the processor resource 210 can be operatively coupled to a data store 202.
  • the memory resource 220 can contain a set of instructions that are executable by the processor resource 210. The set of instructions are operable to cause the processor resource 210 to perform operations of the system 200 when the set of instructions are executed by the processor resource 210.
  • the set of instructions stored on the memory resource 220 can be represented as a processor resource assignment module 204, a process assignment module 206, and a container module 208.
  • the processor resource assignment module 204, the process assignment module 206, and the container module 208 represent program instructions that when executed function as the processor resource assignment engine 104 of Figure 1, the process assignment engine 106 of Figure 1, and the container engine 308 of Figure 3, respectively.
  • the processor resource 210 can carry out a set of instructions to execute the modules 204, 206, 208, and/or any other appropriate operations among and/or associated with the modules of the system 200.
  • the processor resource 210 can carry out a set of instructions to assign a processor resource group to a scheduler, maintain the processor resource group with a number of processor resources based on scheduler activity information, determine a scheduler policy for a task based on a control parameter, and enqueue the task to a run-queue of one of the processor resources 210 of the processor resource group assigned to the scheduler based on the determined scheduler policy.
  • the processor resource 210 can carry out a set of instructions to analyze a behavior of the task, determine a set of process characteristics for the task based on the behavior, identify which one of a plurality of schedulers is associated with the control parameter satisfied by the set of process characteristics, analyze a space partition of a plurality of processor resources 210 based on the scheduler activity information and the control parameter of the task, create the processor resource group when a threshold level of processor resources are available and the space partition lacks a subset for the scheduler, create a run-queue for a processor resource 210 of the processor resource group, and assign the task to the created run-queue.
  • the processor resource 210 can carry out a set of instructions to analyze a demand level and utilization level of the processor resource group, determine whether the number of processor resources of the processor resource group are available to host the task based on the scheduler policy associated with the processor resource group and a QoS threshold, reassign a processor resource from a first processor resource group to a second processor resource group, and move the task from a first run-queue to a second run-queue when the processor resource is reassigned to the second processor resource group.
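  • A minimal in-memory sketch of that instruction sequence (an illustration with invented names, a made-up control parameter, and an assumed two-core threshold, not the claimed implementation) is shown below: a scheduler is determined from a control parameter, its processor resource group and per-core run-queues are created on first use, and the task is enqueued on the least-loaded run-queue.

```python
# Toy model of the described flow; data structures and thresholds are invented.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class ProcessorResourceGroup:
    scheduler: str
    cores: set
    run_queues: dict = field(default_factory=dict)

    def __post_init__(self):
        for core in self.cores:
            self.run_queues[core] = deque()   # one run-queue per core

groups = {}                                   # scheduler name -> group
free_cores = {0, 1, 2, 3, 4, 5, 6, 7}         # assumed unassigned cores

def enqueue_task(task, control_parameter):
    """Determine the scheduler, create its group if needed, enqueue the task."""
    scheduler = "real_time" if control_parameter == "streaming" else "fair_share"
    group = groups.get(scheduler)
    if group is None:
        if len(free_cores) < 2:               # assumed threshold of free cores
            raise RuntimeError("not enough processor resources for " + scheduler)
        cores = {free_cores.pop(), free_cores.pop()}
        group = groups[scheduler] = ProcessorResourceGroup(scheduler, cores)
    # Assign the task to the least-loaded run-queue of the group.
    core = min(group.run_queues, key=lambda c: len(group.run_queues[c]))
    group.run_queues[core].append(task)
```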
  • the processor resource 210 can be any appropriate circuitry capable of processing (e.g. compute) instructions, such as one or multiple processing elements capable of retrieving instructions from the memory resource 220 and executing those instructions.
  • the processor resource 210 can be a core of a processor that is able to process instructions retrieved by a memory controller of the processor.
  • the processor resource 210 can be a central processing unit ("CPU") that enables resource scheduling by fetching, decoding, and executing modules 204, 206, and 208.
  • Example processor resources 210 include at least one CPU, a
  • the processor resource 210 can include multiple processing elements that are integrated in a single device or distributed across devices.
  • the processor resource 210 can process the instructions serially, concurrently, or in partial concurrence.
  • the memory resource 220 and the data store 202 represent a medium to store data utilized and/or produced by the system 200.
  • the medium can be any non-transitory medium or combination of non-transitory mediums able to electronically store data, such as modules of the system 200 and/or data used by the system 200.
  • the medium can be a storage medium, which is distinct from a transitory transmission medium, such as a signal.
  • the medium can be machine-readable, such as computer-readable.
  • the medium can be an electronic, magnetic, optical, or other physical storage device that is capable of containing (i.e., storing) executable instructions and/or data.
  • the memory resource 220 can be said to store program instructions that when executed by the processor resource 210 cause the processor resource 210 to implement functionality of the system 200 of Figure 2.
  • the memory resource 220 can be integrated in the same device as the processor resource 210 or it can be separate but accessible to that device and the processor resource 210.
  • the memory resource 220 can be distributed across devices.
  • the memory resource 220 and the data store 202 can represent the same physical medium or separate physical mediums.
  • the data of the data store 202 can include representations of data and/or information mentioned herein.
  • the data store 202 of Figure 2 can contain information utilized by processor resources 210 executing the modules 204, 206, and 208 of Figure 2, the engines 104 and 106 of Figure 1, and the engine 308 of Figure 3.
  • the data store 202 can store a container description, a characteristic of a process, a control group setting, scheduler activity information, space partition information, etc.
  • the data store 302 of Figure 3 can be the same as data store 202 of Figure 2.
  • the executable instructions of the system 200 can be part of an installation package that, when installed, can be executed by the processor resource 210 to perform operations of the system 200, such as the methods described with regard to Figures 4-6.
  • the memory resource 220 can be a portable medium such as a compact disc, a digital video disc, a flash drive, or memory maintained by a computer device from which the installation package can be downloaded and installed.
  • the executable instructions can be part of an application or applications already installed.
  • the memory resource 220 can be a non-volatile memory resource such as read only memory ("ROM"), a volatile memory resource such as random access memory ("RAM"), a storage device, or a combination thereof.
  • Example forms of a memory resource 220 include static RAM ("SRAM"), dynamic RAM ("DRAM"), electrically erasable programmable ROM ("EEPROM"), flash memory, or the like.
  • the memory resource 220 can include integrated memory such as a hard drive (“HD”), a solid state drive (“SSD”), or an optical drive.
  • Figure 3 depicts example environments in which various example scheduler systems can be implemented.
  • the example environment 390 is shown to include an example system capable of resource scheduling where the system includes a processing unit 330 with any number of cores 310.
  • Example environments 390 include a multi-core compute device executing an operating system, such as LINUX kernel, to manage system resources including the processing unit 330.
  • the system (described herein with respect to Figures 1 and 2) can represent generally any circuitry or combination of circuitry and executable instructions to schedule processor resource requests in a multi-scheduler environment.
  • the system can include a processor resource assignment engine 304 (as shown in 3A) and a process assignment engine 306 (as shown in 3B) that are the same as the processor resource assignment engine 104 and the process assignment engine 106 of Figure 1 , respectively, and the associated descriptions are not repeated for brevity.
  • Figure 3B includes a container engine 308.
  • the container engine 308 represents any circuitry or combination of circuitry and executable instructions to maintain a plurality of containers 336.
  • a container 336 represents a group of processes assignable to a processor resource group.
  • a container 336 can be represented by a control group of parameters that isolate and/or shield processes in the container. In that example, scripts from a kernel can be used to manage the processes and groups of processes, such as applications 338.
  • Each container 336 can be associated with a description.
  • a plurality of containers 336 can each be described with a different characteristic so that applications 338 associated with that characteristic are placed in the associated container 336.
  • the plurality of containers 336 are assignable to the plurality of processor resource groups 332 based on the scheduler policy associated with the groups. In this manner, characteristics of the applications 338 can be used to organize assignment of processes to processor resource groups 332 and, in turn, processor resources 310 assigned to schedulers 334 are to accept processes with the characteristics assigned to the processor resource group 332 at runtime. In other words, processes can be assigned to schedulers 334 that match the processor resource requirements of the process, such as assigning a process with real-time processing requirements to a processor resource group 332 managed by a real-time scheduler 334.
  • the container engine 308 can reassign a first container of the plurality of containers 336 from a first processor resource group 332 to a second processor resource group 332 based on a change in a container description.
  • the container description may be updated with a new set of process characteristics of processes to be designated to the container 336 (e.g., placed within the container), and the container assignment can adapt to a different processor resource group 332 based on the change in container description.
  • the container engine 308 can include at least one of an application analysis engine 322 and an application interface engine 324.
  • the application analysis engine 322 represents circuitry or combination of circuitry and executable instructions to infer which scheduler best matches behavior of an application 338 associated with the processor resource request.
  • the application analysis engine 322 can compare the behavior of the application 338 (as described by a set of characteristics of the application 338) to the scheduler policies available by the processor resource groups 332.
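  • One toy way to perform such a comparison (the characteristic names and the policy table are invented for illustration, not taken from the patent) is to score each available scheduler policy against the observed behavior and select the best match.

```python
# Hypothetical matching between observed application behavior and the scheduler
# policies offered by the processor resource groups.
SCHEDULER_POLICIES = {
    "real_time": {"latency_sensitive", "streaming", "periodic_deadline"},
    "fair_share": {"interactive", "batch", "best_effort"},
    "gang": {"parallel", "tightly_coupled"},
}

def infer_scheduler(behavior: set) -> str:
    """Return the scheduler whose policy characteristics overlap most with the
    set of characteristics describing the application's behavior."""
    return max(SCHEDULER_POLICIES,
               key=lambda name: len(SCHEDULER_POLICIES[name] & behavior))

# Example: infer_scheduler({"streaming", "latency_sensitive"}) -> "real_time"
```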
  • the application interface engine 324 represents circuitry or combination of circuitry and executable instructions to enable user-supplied parameters to determine a set of control parameters associated with the plurality of containers 336.
  • a user can set a control group setting as a description of a container 336 and the control group parameters of a process can be used to determine which container 336 is to receive the process (e.g., matching control group parameters to the container description).
  • Figures 3A and 3B demonstrate that the plurality of processor resources 310 can be cores of a processing unit 330, such as a CPU.
  • the cores 310 are to be divided among processor resource groups 332 by the processor resource assignment engine 304 based on scheduler activity information associated with the plurality of schedulers 334 of the system.
  • a core 310 can be assigned to a processor resource group 332 that is statically assigned to a scheduler 334. For example, whenever a new scheduler 334 is introduced to a system, a processor resource group 332 can be created to manage processor resources 310 according to the policy of the new scheduler 334.
  • applications 338 can be organized into containers 336, such as process containers.
  • the system can group processes into hierarchies or process subsets where each hierarchy or process subset is to be managed by a subsystem (e.g., managed by a scheduler 334 designated to a processor resource group 332 and restricted from access to resources outside the processor resource group 332).
  • the process assignment engine 306 can manage the containers 336 by determining which container 336 is to be assigned to which processor resource group 332.
  • the dotted line over the processing unit 330 in Figure 3B designates the boundary of the space partition of the cores 310, where, for example, the processor resource group A is restricted to access cores on the left of the dotted line and the processor resource group B is restricted to access cores on the right side of the dotted line.
  • the engines 304 and 308 can be integrated into a compute device, such as a personal computer, a server, a mobile device, or a network element.
  • the engines 304 and 308 can be integrated via circuitry or as installed instructions into a memory resource of the compute device.
  • Any appropriate combination of the system 300 and compute devices can be a virtual instance of a resource of a virtual shared pool of resources.
  • the engines and/or modules of the system herein can reside and/or execute "on the cloud" (e.g., reside and/or execute on a virtual shared pool of resources).
  • a hypervisor can be adapted to schedule resources using processor resource groups 332.
  • the engines 104 and 106 of Figure 1; the modules 204, 206, and 208 of Figure 2; and the engines 304, 306, 308, 322, and 324 are described as circuitry or a combination of circuitry and executable instructions. Such components can be implemented in a number of fashions.
  • the executable instructions can be processor-executable instructions, such as program instructions, stored on the memory resource 220, which is a tangible, non-transitory computer-readable storage medium, and the circuitry can be electronic circuitry, such as processor resource 210, for executing those instructions.
  • the instructions residing on the memory resource 220 can comprise any set of instructions to be executed directly (such as machine code) or indirectly (such as a script) by the processor resource 210.
  • the engines 104, 106, and 308 and/or the modules 204, 206, and 208 can be integrated in a single compute device or distributed across multiple compute devices.
  • the engine and/or modules can complete or assist completion of operations performed in describing another engine and/or module.
  • the processor resource assignment engine 304 of Figure 3A can request, complete, or perform the methods or operations described with the processor resource assignment engine 104 of Figure 1 as well as the process assignment engine 106 of Figure 1 and the container engine 308 of Figure 3.
  • the various engines and modules are shown as separate engines in Figures 1 and 2. In other implementations, the functionality of multiple engines and/or modules may be implemented as a single engine and/or module or divided in a variety of engines and/or modules. In some examples, the engines of the system can perform example methods described in connection with Figures 4-6.
  • FIG. 4 depicts example modules used to implement example scheduler systems.
  • the example modules of Figure 4 generally include a container module 408, a process assignment module 406, and a processor resource assignment module 404.
  • the example modules of Figure 4 can be implemented on a compute device to schedule processes on a system with a plurality of processor resources.
  • a processor resource request 458 is made to the system.
  • a processor resource executing the container module 408 receives the processor resource request 458 and identifies in which container to place the processor resource request 458 based on the application behavior 460 of the task making the request 458 and any parameters 482, such as control parameters to facilitate a decision based on control group settings.
  • the container module 408 represents program instructions that are similar to the container module 208 of Figure 2.
  • the container module 408 can include program instructions, such as the application analysis module 440 and the application interface module 442, to facilitate the container selection decision.
  • the application analysis module 440 represents program instructions that when executed by a processor resource cause the processor resource to determine whether a scheduler policy would be sufficient for the task of the processor resource request 458 based on the application behavior 460.
  • the application interface module 442 represents program instructions that when executed by a processor resource cause the processor resource to accept user-supplied parameters, such as parameters 482, and determine a set of control parameters associated with the plurality of containers based on the parameters.
  • The process assignment module 406 represents program instructions similar to the program instructions of the process assignment module 206 of Figure 2.
  • the process assignment module 406 can include program instructions, such as the scheduler analysis module 444 and the scheduler change module 446, to facilitate assignment of processes to processor resource groups based on which container the processes are associated with.
  • the scheduler analysis module 444 represents program instructions that when executed by a processor resource cause the processor resource to determine which scheduler to assign to the container based on a container description 463 and a scheduler list 464 containing a list of schedulers offered by the system.
  • a processor resource executing the scheduler analysis module 444 can identify a scheduler policy that conforms to the parameters and process characteristics of the processor resource request 458.
  • the scheduler change module 446 represents program instructions that when executed by a processor resource cause the processor resource to identify whether sufficient resources exist to assign the task to the scheduler.
  • a processor resource executing the scheduler change module 446 can identify that there is a lack of resources available to execute the selected scheduler and, in response, select a different scheduler that next-best matches the parameters and/or process characteristics of the processor resource request 458 and assign the task to that secondary scheduler.
  • resources may be available, but not yet allocated to the processor resource group before enqueuing the task.
  • the processor resource assignment module 404 represents program instructions that are similar to the processor resource assignment module 204 of Figure 2.
  • the processor resource assignment module 404 can include program instructions, such as the core monitor module 448, the core analysis module 450, and a core change module 452, to facilitate maintenance of the plurality of processor resource groups.
  • a processor resource executing the processor resource assignment module 404 can utilize the scheduler activity information 466, a core list 470 (which represents a list of processor resources of the system), and core activity information 472 (which represents the operational statistics of the plurality of processor resources of the system).
  • the task associated with the processor resource request 458 is ushered to a run-queue of a processor resource in the processor resource group of the scheduler selected by the processor resource executing the process assignment module 406 via the processor resource run-queue assignment 474 operation.
  • the core monitor module 448 represents program instructions that when executed by a processor resource cause the processor resource to monitor the assignment of processor resources to schedulers and the set of process tasks hosted by processor resources associated with the schedulers. For example, the demand level and the utilization level of the processor resources associated with a scheduler can be observed by a processor resource executing the core monitor module 448.
  • The core analysis module 450 represents program instructions that when executed by a processor resource cause the processor resource to analyze the demand level of the processor resources associated with the applications using the scheduler. For example, the demand level of the processor resources can be compared to a QoS threshold. The demand levels of the processor resources can be aggregated into a scheduler demand level.
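  • As an illustration under assumed metrics (per-core demand expressed as a fraction of a core, and an invented 80% QoS headroom), aggregating demand into a scheduler demand level and sizing the group could look like the sketch below.

```python
# Hypothetical aggregation of per-core demand and comparison against a QoS
# headroom; the metric definitions and the 0.8 threshold are assumptions.
import math

def scheduler_demand_level(per_core_demand: dict) -> float:
    """Aggregate demand (fraction of a core requested) across the group."""
    return sum(per_core_demand.values())

def cores_needed(per_core_demand: dict, qos_headroom: float = 0.8) -> int:
    """Cores required so average utilization stays below the QoS headroom."""
    demand = scheduler_demand_level(per_core_demand)
    return max(1, math.ceil(demand / qos_headroom))

# Example: demand of 3.5 cores with 80% headroom -> 5 cores for the group.
```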
  • the core change module 452 represents program instructions that when executed by a processor resource cause the processor resource to maintain the space partition of the plurality of processor resources based on the scheduler demand level.
  • a processor resource executing the core change module 452 can facilitate a change in the space partition by migrating tasks from the run-queues of any processor resources designated to change to other processor resources of the same processor resource group.
  • the processor resource executing the core change module 452 can verify the run-queues of the selected processor resources are empty and change the processor resources with empty run-queues to the processor resource group of a different scheduler.
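  • A hypothetical sketch of this migrate-then-reassign step follows, using a simple in-memory group representation ({"cores": set, "run_queues": {core: deque}}) invented for illustration; it is not kernel code.

```python
# Empty the core's run-queue by migrating its tasks within the source group,
# then move the core (with a fresh run-queue) to the target group.
from collections import deque

def reassign_core(core, source, target):
    old_queue = source["run_queues"][core]
    remaining = [c for c in source["cores"] if c != core]
    if old_queue and not remaining:
        raise RuntimeError("no other core in the source group to migrate tasks to")
    while old_queue:
        task = old_queue.popleft()
        # Keep the tasks under their original scheduler by staying in the group.
        least_loaded = min(remaining, key=lambda c: len(source["run_queues"][c]))
        source["run_queues"][least_loaded].append(task)
    # The run-queue is now verifiably empty; reassign the core and its settings.
    del source["run_queues"][core]
    source["cores"].discard(core)
    target["cores"].add(core)
    target["run_queues"][core] = deque()
```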
  • a load balance technique can be used by the processor resource that executes the core change module 452.
  • Figures 5 and 6 are flow diagrams depicting example methods of resource scheduling.
  • example methods of resource scheduling can generally comprise identifying a scheduler for the task, assigning a processor resource group to the scheduler, and enqueuing the task on a run-queue of a processor resource in the processor resource group.
  • a scheduler for the task is identified.
  • the scheduler is identified based on the control parameter associated with a task characteristic.
  • the task characteristic should accurately describe the behavior associated with the task and/or the application from which the task was derived so that the appropriate scheduler is identified for the tasks of the application.
  • a task can be designated to a processor resource group that is different from the application and/or another task of the application.
  • a processor resource group is assigned to the scheduler.
  • the processor resource group can be assigned based on the scheduler activity information. For example, if the scheduler became flagged to operate when a task is assigned to the scheduler, then the state of the scheduler would be changed to active and should have a processor resource group associated with the scheduler.
  • a processor resource group can be created for a scheduler when a scheduler is not associated with a processor resource group and the scheduler is assigned a task.
  • the task is enqueued on a run-queue of a processor resource in the assigned processor resource group. A task is enqueued by placing the task into a queue.
  • the run-queue is managed by the scheduler and the task receives access to the processor resource based on the strategy of the scheduler policy. For example, the task can be moved to the front of the queue when the task has a highest priority level set and the scheduler policy takes priority into consideration, whereas a fair-share policy may send the same task to the tail of the queue when the fair-share scheduler policy does not take priority into consideration.
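  • The two strategies can be contrasted with a toy sketch (an illustration only, not kernel run-queue code): a fair-share queue appends each task to the tail and serves tasks in arrival order, while a real-time queue always serves the highest-priority task first.

```python
# Toy run-queue strategies; priorities and tie-breaking are simplified.
import heapq
from collections import deque

class FairShareRunQueue:
    """Tasks are appended to the tail and served in arrival order."""
    def __init__(self):
        self._queue = deque()
    def enqueue(self, task, priority=None):   # priority is ignored
        self._queue.append(task)
    def next_task(self):
        return self._queue.popleft()

class RealTimeRunQueue:
    """The highest-priority task is always served first."""
    def __init__(self):
        self._heap = []
        self._order = 0                       # tie-breaker preserves FIFO order
    def enqueue(self, task, priority=0):
        heapq.heappush(self._heap, (-priority, self._order, task))
        self._order += 1
    def next_task(self):
        return heapq.heappop(self._heap)[2]
```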
  • Figure 6 includes blocks similar to blocks of Figure 5 and provides additional blocks and details.
  • Figure 6 depicts additional blocks and details generally regarding selecting a container for a task and maintaining a processor resource group.
  • Blocks 604 and 608 are similar to blocks 504 and 508 of Figure 5 and, for brevity, their respective descriptions are not repeated in their entirety.
  • Block 606 represents an embodiment of block 506 as represented by blocks 612, 614, and 616, where the specific descriptions of blocks 612, 614, and 616 are encompassed by the general description of block 506.
  • a container is selected for the task.
  • the container is selected based on an application characteristic associated with the task. For example, a word processing application can operate with equal priority to other applications on the computer and be placed in a container with a description associated with normal priority.
  • a content streaming application can require a certain amount of resources based on the speed of buffering and can be added to a container described with parameters for content streaming.
  • the container description can include a control parameter associated with the application characteristic, such as a "real-time processing" parameter associated with a "streaming" characteristic.
  • the scheduler is identified based on the container description associated with the container. In this manner, the container description should accurately describe the applications associated with the container so that the appropriate scheduler is identified for the tasks of the applications.
  • assignment of the processor resource group to a scheduler based on scheduler activity information can include initiating the scheduler and creating the processor resource group.
  • a scheduler flag is set. The setting of the flag can identify to the operating system that the scheduler is available to schedule tasks on a processor resource. The scheduler can assign a task to a run-queue when the scheduler flag is set.
  • a processor resource group may need to be created for the scheduler.
  • a run-queue and setting information is created.
  • the run-queue and setting information are associated with a processor resource of the processor resource group to allow for the processor resource to accept management policy operations from the scheduler.
  • the run-queue and setting information may be maintained based on the status of the scheduler flag, such as when the scheduler flag is set.
  • a processor resource group can adjust based on scheduler activity information.
  • a number of processor resources of a first processor resource group is changed based on the scheduler activity information.
  • the number of processor resources assigned to a processor resource group can vary dynamically based on the scheduler activity information, a QoS parameter, and a number of tasks assigned to a processor resource (e.g., the number of tasks in a run-queue of a processor resource).
  • a processor resource is reassigned based on scheduler activity information.
  • Processor resource allocation and container assignment may be adjusted dynamically during runtime. For example, scheduler activity information can be updated based on user input or a system event and the space partition and/or the number of containers assigned to a processor resource group should adapt to the update.
  • a plurality of processor resources are monitored and the scheduler activity information is gathered from the plurality of processor resources at block 622.
  • the scheduler activity information can be collected based on a demand level and utilization level of the processor resources associated with the processor resource group assigned to the scheduler. For example, the collected information can include processor resource demand levels and utilization levels that achieve certain demand and/or utilization minimums.
  • a space partition of the plurality of processor resource groups is changed based on the scheduler activity information gathered at block 622.
  • any queued tasks of a first run-queue of a first processor resource are migrated to a second run-queue of a second processor resource in the same processor resource group as the first processor resource. This happens because the tasks are to be executed against the originally associated scheduler, but the first processor resource is being assigned to another processor resource group.
  • the first run-queue information of the first run-queue is replaced with different run-queue information associated with a different scheduler based on the processor resource group to which the first processor resource has moved.
  • the run-queue information should be replaced when the run-queue is empty so as not to interfere with operations of the processor resource.
  • the update to the space partition can be accomplished when the processor resource designated to change processor resource groups is free from current processes, and the run-queue can receive a process once the setting information is updated with the new scheduler information associated with the processor resource group it joined. In this manner, space partitioning of the plurality of processor resources can be achieved dynamically during runtime, schedulers can be flexibly added or removed from a system, and multiple types of schedulers can manage processes concurrently on the same system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

In an example of the present invention, a scheduler system comprises a plurality of processor resources, a processor resource assignment engine to maintain a plurality of processor resource groups based on scheduler activity information, and a process assignment engine to assign a processor resource request to one of the plurality of processor resource groups, identify a processor resource of the plurality of processor resources assigned to one of the plurality of processor resource groups, and enqueue a process associated with the processor resource request on a run-queue of the processor resource based on a strategy of the scheduler policy.
PCT/US2015/012730 2015-01-23 2015-01-23 Groupes de ressources de processeur à programmateur attribué WO2016118164A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2015/012730 WO2016118164A1 (fr) 2015-01-23 2015-01-23 Groupes de ressources de processeur à programmateur attribué

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2015/012730 WO2016118164A1 (fr) 2015-01-23 2015-01-23 Groupes de ressources de processeur à programmateur attribué

Publications (1)

Publication Number Publication Date
WO2016118164A1 true WO2016118164A1 (fr) 2016-07-28

Family

ID=56417534

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/012730 WO2016118164A1 (fr) 2015-01-23 2015-01-23 Groupes de ressources de processeur à programmateur attribué

Country Status (1)

Country Link
WO (1) WO2016118164A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040010612A1 (en) * 2002-06-11 2004-01-15 Pandya Ashish A. High performance IP processor using RDMA
US20080155203A1 (en) * 2003-09-25 2008-06-26 Maximino Aguilar Grouping processors and assigning shared memory space to a group in a heterogeneous computer environment
US20100333113A1 (en) * 2009-06-29 2010-12-30 Sun Microsystems, Inc. Method and system for heuristics-based task scheduling
US20120173728A1 (en) * 2011-01-03 2012-07-05 Gregory Matthew Haskins Policy and identity based workload provisioning

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109791504A (zh) * 2016-09-21 2019-05-21 埃森哲环球解决方案有限公司 针对应用容器的动态资源配置
CN109791504B (zh) * 2016-09-21 2023-04-18 埃森哲环球解决方案有限公司 针对应用容器的动态资源配置
US11138146B2 (en) 2016-10-05 2021-10-05 Bamboo Systems Group Limited Hyperscale architecture
US11979339B1 (en) * 2020-08-19 2024-05-07 Cable Television Laboratories, Inc. Modular schedulers and associated methods
US11861397B2 (en) 2021-02-15 2024-01-02 Kyndryl, Inc. Container scheduler with multiple queues for special workloads

Similar Documents

Publication Publication Date Title
CN109471727B (zh) 一种任务处理方法、装置及系统
CN110249311B (zh) 云计算系统中针对虚拟机的资源管理
US20230039191A1 (en) Throttling queue for a request scheduling and processing system
CN109936604B (zh) 一种资源调度方法、装置和系统
US10572306B2 (en) Utilization-aware resource scheduling in a distributed computing cluster
CN106936883B (zh) 用于云系统的方法和装置
KR102514478B1 (ko) Cpu 리소스들을 할당하기 위한 장치, 디바이스 및 방법
US11113782B2 (en) Dynamic kernel slicing for VGPU sharing in serverless computing systems
US9183016B2 (en) Adaptive task scheduling of Hadoop in a virtualized environment
US8689226B2 (en) Assigning resources to processing stages of a processing subsystem
CN116450358A (zh) 云计算系统中的用于虚拟机的资源管理
US9164791B2 (en) Hierarchical thresholds-based virtual machine configuration
US11265264B2 (en) Systems and methods for controlling process priority for efficient resource allocation
US20200174844A1 (en) System and method for resource partitioning in distributed computing
US20140007093A1 (en) Hierarchical thresholds-based virtual machine configuration
CN109564528B (zh) 分布式计算中计算资源分配的系统和方法
US10630600B2 (en) Adaptive network input-output control in virtual environments
CN110221920B (zh) 部署方法、装置、存储介质及系统
US20150186256A1 (en) Providing virtual storage pools for target applications
WO2016118164A1 (fr) Groupes de ressources de processeur à programmateur attribué
US10733024B2 (en) Task packing scheduling process for long running applications
Rossi et al. Elastic deployment of software containers in geo-distributed computing environments
JP2013125548A (ja) 仮想マシン割り当てシステム及びその使用方法
US11521042B2 (en) System and method to dynamically and automatically sharing resources of coprocessor AI accelerators
CN111930516B (zh) 一种负载均衡方法及相关装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15879188

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15879188

Country of ref document: EP

Kind code of ref document: A1