CN117751349A - Workload aware process scheduling - Google Patents

Workload aware process scheduling

Info

Publication number
CN117751349A
CN117751349A (application CN202180100569.3A)
Authority
CN
China
Prior art keywords
execution plan
process execution
processes
dynamic bias
score
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180100569.3A
Other languages
Chinese (zh)
Inventor
Dima Kuznetsov
Elena Berezovsky
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Cloud Computing Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Cloud Computing Technologies Co Ltd
Publication of CN117751349A


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention provides a computing device in which one or more processors schedule execution of one or more processes according to a dynamic bias score. Semantic parameters indicating an amount of processing a process is expected to perform within a future time interval are monitored. The semantic parameters are defined by a process execution plan associated with the process. Each process is associated with its own process execution plan, which is different from the process execution plans of the other processes. A dynamic bias score is calculated according to the process execution plan. The dynamic bias score represents the actual real-time processing requirements of the process. The one or more processors schedule the process for execution according to the dynamic bias score. Processes with higher dynamic bias scores are scheduled with higher priority and/or allocated more processing resources than processes with lower dynamic bias scores.

Description

Workload aware process scheduling
Technical Field
The present application relates to process execution and, more particularly, but not exclusively, to scheduling of processes for execution.
Background
An operating system may execute multiple processes by scheduling them on the processor. By allocating a certain amount of processing time and/or resources to each executing process, the available processing resources are shared among the multiple executing processes.
Disclosure of Invention
It is an object of the present invention to provide a computing device, a system, a computer program product and a method for scheduling execution of at least one process.
The above and other objects are achieved by the features of the independent claims. Other implementations are apparent in the dependent claims, the description and the drawings.
According to a first aspect, a computing device for scheduling execution of at least one process is configured to: monitor a plurality of semantic parameters defined by a process execution plan associated with a process, wherein the process execution plan is different from the process execution plans of other processes, and wherein the plurality of semantic parameters indicate an amount of processing the process is expected to perform within a future time interval; calculate a dynamic bias score according to the process execution plan; and schedule, by at least one processor, execution of the process according to the dynamic bias score.
According to a second aspect, a method of scheduling execution of at least one process comprises: monitoring a plurality of semantic parameters defined by a process execution plan associated with a process, wherein the process execution plan is different from the process execution plans of other processes, and wherein the plurality of semantic parameters indicate an amount of processing the process is expected to perform within a future time interval; calculating a dynamic bias score according to the process execution plan; and scheduling, by at least one processor, execution of the process according to the dynamic bias score.
According to a third aspect, a non-transitory medium stores program instructions for scheduling execution of at least one process, which, when executed by a processor, cause the processor to: monitor a plurality of semantic parameters defined by a process execution plan associated with a process, wherein the process execution plan is different from the process execution plans of other processes, and wherein the plurality of semantic parameters indicate an amount of processing the process is expected to perform within a future time interval; calculate a dynamic bias score according to the process execution plan; and schedule execution of the process according to the dynamic bias score.
Scheduling of the execution of a particular process by one or more processors is improved by using a dynamic bias score calculated for that process from the semantic parameters defined by its particular process execution plan.
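The claimed flow of the three aspects above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; all names (`monitor`, `dynamic_bias_score`, `probes`, `weights`) are assumptions introduced for illustration:

```python
# Illustrative sketch: monitor the semantic parameters a process's
# execution plan defines, derive a dynamic bias score from them, and
# order processes for execution by descending score. The plan layout
# (probes, weights) is an assumption, not taken from the patent.

def monitor(plan, process):
    """Collect the semantic parameters the plan defines for this process."""
    return {name: probe(process) for name, probe in plan["probes"].items()}

def dynamic_bias_score(plan, params):
    """Combine the monitored parameters using the plan's own weighting."""
    return sum(plan["weights"][name] * value for name, value in params.items())

def schedule(processes, plans):
    """Order processes for execution by descending dynamic bias score."""
    scored = []
    for proc in processes:
        plan = plans[proc["id"]]  # each process has its own plan
        score = dynamic_bias_score(plan, monitor(plan, proc))
        scored.append((score, proc["id"]))
    return [pid for _, pid in sorted(scored, reverse=True)]
```

Because each process carries its own plan, two processes with identical raw measurements can still receive different scores, which is the point of per-process (or per-type) plans.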
In another implementation of the first, second and third aspects, the monitoring is performed for each of a plurality of processes according to a respective process execution plan associated with each of the plurality of processes, a respective dynamic bias score is calculated for each of the plurality of processes, and the plurality of processes are scheduled for execution according to the respective dynamic bias scores.
The dynamic bias score may enable efficient scheduling of multiple processes.
In another implementation of the first, second and third aspects, the monitoring, the calculating of the dynamic bias score, and the scheduling are iterated over a plurality of time intervals, so that the process is dynamically scheduled in real time according to a real-time dynamic bias score based on real-time semantic parameters.
The scheduling of the particular process is dynamically adjusted as the dynamic bias score changes, improving the efficient allocation of processing resources according to the real-time requirements of the particular process.
In another implementation of the first, second and third aspects, a first process having a relatively higher dynamic bias score is scheduled to receive relatively more processing resources during a future time interval than a second process having a relatively lower dynamic bias score.
The dynamic bias score effectively indicates to the scheduler real-time processing resource requirements of each process, thereby enabling efficient scheduling and allocation of the processing resources.
In another implementation manner of the first, second and third aspects, the process execution plan is executed by a scheduler, which schedules execution of the process according to the dynamic bias score calculated by the process execution plan.
The process execution plan defines how the dynamic bias score is calculated from the semantic parameters.
In another implementation of the first, second and third aspects, the monitoring and the calculating of the dynamic bias score are implemented by the process execution plan, wherein the process execution plan is distinct from the scheduler that schedules execution of the process.
In another implementation manner of the first, second and third aspects, the method further includes establishing the process execution plan for the process and notifying the scheduler of the process execution plan.
Having the scheduler execute the process execution plan may ensure that a process is never too busy to have its priority updated according to the load.
In another implementation of the first, second and third aspects, the plurality of semantic parameters are monitored using at least one interface defined by the process execution plan.
The interface may provide information that would otherwise be unavailable, inaccessible, and/or not considered in the scheduling of processes.
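One possible shape for such a plan-defined monitoring interface is sketched below. The class and method names (`ProcessExecutionPlan`, `semantic_parameters`, `dynamic_bias_score`) are assumptions for illustration, not an API defined by the patent:

```python
from abc import ABC, abstractmethod

class ProcessExecutionPlan(ABC):
    """Illustrative interface a plan might expose to the scheduler."""

    @abstractmethod
    def semantic_parameters(self, process) -> dict:
        """Monitor and return the plan-defined semantic parameters."""

    @abstractmethod
    def dynamic_bias_score(self, params: dict) -> float:
        """Map the monitored semantic parameters to a dynamic bias score."""

class QueueDepthPlan(ProcessExecutionPlan):
    """Example plan for a queue-driven process: the amount of processing
    expected in the next interval is proportional to the queue depth."""

    def semantic_parameters(self, process):
        return {"queue_depth": len(process["queue"])}

    def dynamic_bias_score(self, params):
        return float(params["queue_depth"])
```

A scheduler holding only the abstract interface can then treat very different process types uniformly while each plan supplies its own semantics.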
In another implementation of the first, second and third aspects, the process comprises one of a plurality of processes of a common type associated with a process execution plan defined for the plurality of processes of the common type, wherein the process execution plan defined for the common type is different from the process execution plans defined for processes of other types.
In another implementation of the first, second and third aspects, the method further comprises analyzing the process to identify a type of the process, and selecting the process execution plan from a plurality of existing process execution plans based on the type of the process.
An off-the-shelf plan may be applied to a particular process without necessarily requiring software changes to that process.
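Selecting an off-the-shelf plan by identified process type could be as simple as a registry lookup with a fallback; this is a hedged sketch, and the registry layout and default-type name are assumptions:

```python
def select_plan(process_type, plan_registry, default_type="generic"):
    """Pick an off-the-shelf plan for a process by its identified type,
    falling back to a default plan when no type-specific plan exists.
    The "generic" fallback key is an assumption for illustration."""
    return plan_registry.get(process_type, plan_registry[default_type])
```

The process itself needs no modification: the analysis step identifies its type, and the lookup attaches a ready-made plan.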
In another implementation of the first, second and third aspects, the plurality of semantic parameters is selected from a list of semantic parameters including at least one of: metrics of the process obtained by inspection, input/output patterns of the process obtained by inspection, explicit scheduling hints received from user space, information typically exposed by the operating system (OS), and information the OS is permitted to use.
In another implementation manner of the first, second and third aspect, the method further includes: an affinity map indicating which of a plurality of processing resources to allocate to the process is determined based on an analysis of the plurality of semantic parameters.
The affinity mapping further improves the efficiency of the use of processing resources, e.g. by allocating the most suitable processing resources to the specific process.
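A toy affinity rule derived from semantic parameters is sketched below. The parameter names (`io_ops`, `compute_ops`), the resource fields, and the threshold rule are all assumptions for illustration; the patent does not specify how the analysis maps parameters to resources:

```python
def affinity_map(params, resources):
    """Steer I/O-dominated processes to resources near the I/O node and
    compute-dominated ones elsewhere. A deliberately simple rule: the
    process is I/O-heavy when its I/O rate exceeds its compute rate."""
    io_heavy = params.get("io_ops", 0) > params.get("compute_ops", 0)
    return [r["name"] for r in resources if r["near_io"] == io_heavy]
```

The returned list can accompany the dynamic bias score as the plan's second output, telling the scheduler not only how much CPU time to grant but where to grant it.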
In another implementation of the first, second and third aspects, the process is associated with a priority value, and the scheduling of the process is based on an actual process priority calculated as a combination of the priority value and the dynamic bias score.
The dynamic bias score may be combined with the priority value to calculate the actual process priority.
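One plausible combination of the static priority value and the dynamic bias score is additive; the patent does not fix the exact formula, so the additive form and the weight parameter below are assumptions:

```python
def actual_priority(priority_value, dynamic_bias, bias_weight=1.0):
    """Combine a process's static priority value with its dynamic bias
    score to obtain the actual process priority used for scheduling.
    The additive form and bias_weight are illustrative assumptions."""
    return priority_value + bias_weight * dynamic_bias
```

Under this rule, a nominally low-priority process with a large backlog can temporarily overtake an idle high-priority one, which is the behavior the dynamic bias score is meant to enable.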
In another implementation of the first, second and third aspects, the process is executed in user space, and the process execution plan is executed in kernel space.
Additional semantic parameters available from kernel space may enable a more accurate dynamic bias score, resulting in better scheduling of the particular process.
Unless defined otherwise, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention pertains. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, these materials, methods, and examples are illustrative only and not necessarily limiting.
Drawings
Some embodiments of the invention are described herein, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings, it is emphasized that the details shown are merely illustrative and for purposes of illustrative discussion of embodiments of the invention. In this regard, it will be apparent to those skilled in the art how embodiments of the invention may be practiced from the illustration of the drawings.
In the drawings:
FIG. 1 is a flow diagram of a method performed by one or more processors to schedule a process according to a dynamic bias score, provided by some embodiments;
FIG. 2 is a block diagram of components of a system provided by some embodiments for one or more processors to schedule process execution according to a dynamic bias score;
FIG. 3 is a data flow diagram provided by some embodiments describing exemplary data flows between a process, an operating system, a process execution plan, and a scheduler for scheduling a process according to a calculated dynamic bias score;
FIG. 4 is another dataflow diagram that depicts the flow of data between a process, a process execution plan, and a scheduler, provided by some embodiments, for scheduling a process according to a calculated dynamic bias score;
FIG. 5 is a data flow diagram provided by some embodiments depicting exemplary data flows between a producer, a consumer, an OS, a process execution plan, and a scheduler for scheduling the consumer according to a calculated dynamic bias score;
FIG. 6 is another data flow diagram provided by some embodiments describing data flow between a consumer, a producer, a process execution plan including a consumer counter, and a scheduler for scheduling consumers based on calculated dynamic bias scores.
Detailed Description
The present application relates in some embodiments to process execution and more particularly, but not exclusively, to scheduling of processes for execution.
An aspect of some embodiments relates to systems, methods, computing devices and/or apparatuses and/or computer program products (storing code instructions executable by one or more processors) for one or more processors to schedule process execution according to a dynamic bias score. Semantic parameters are monitored that indicate the amount of processing a process is expected to perform within a future time interval. Exemplary semantic parameters include: metrics of the process obtained by inspection, input/output patterns of the process obtained by inspection, explicit scheduling hints received from user space, information typically exposed by the operating system (OS), and information the OS is permitted to use. The semantic parameters are defined by a process execution plan associated with the process. Each process (and/or type of process) is associated with its own process execution plan that is different from the process execution plans of other processes (and/or other types of processes). A dynamic bias score is calculated based on the process execution plan. The dynamic bias score represents the actual real-time processing requirements of the process. The dynamic bias score may be calculated based on the monitored semantic parameters. One or more processors schedule a process for execution according to the dynamic bias score. A process with a higher dynamic bias score may be scheduled with higher priority and/or allocated more processing resources than a process with a lower dynamic bias score.
Calculating a dynamic bias score for each process and/or each type of process based on a process execution plan defined for each process and/or each type of process, using semantic parameters defined by the process execution plan, improves scheduling of processes executed by one or more processors. The dynamic bias score may represent a tendency to allocate processing resource time to a process. The process may be scheduled according to real-time requirements indicated by the dynamic bias score.
Scheduling the process according to the dynamic bias score may provide faster response and/or more efficient utilization of the available processing resources. For example, when a process has a large amount of work waiting to be processed, a larger processor (e.g., central processing unit (CPU)) time slice is allocated, which provides a faster response to the producer generating the work. When a process has no or minimal work to process, a smaller CPU time slice is allocated, so that fewer CPU cycles are wasted and more cycles can be allocated to other processes.
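The time-slice behavior described above can be sketched as proportional splitting of a scheduling interval by dynamic bias score. Proportional splitting is one simple policy consistent with the text, not the patent's exact rule:

```python
def allocate_time_slices(bias_scores, interval_ms):
    """Split a scheduling interval among processes in proportion to
    their dynamic bias scores: a process with much pending work gets a
    larger CPU time slice, an idle one gets a smaller slice."""
    total = sum(bias_scores.values())
    if total == 0:  # no pending work anywhere: share evenly
        share = interval_ms / len(bias_scores)
        return {pid: share for pid in bias_scores}
    return {pid: interval_ms * s / total for pid, s in bias_scores.items()}
```

With scores of 3 and 1 over a 100 ms interval, the busy process receives 75 ms and the idle one 25 ms, matching the larger-slice/smaller-slice behavior the paragraph describes.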
At least some implementations described herein improve upon existing approaches that schedule process execution based on a priori priorities and other heuristics the operating system collects about the process (whatever the process may be). This standard approach is based on a standard priority value assigned to the process. Furthermore, such standard approaches use a generic OS scheduler that ignores processing semantics (e.g., how computationally intensive the process is at any given moment). For example, the Linux scheduler considers the process priority and its "interactivity" (how quickly the process yields the CPU), rather than monitoring semantic parameters defined by a process execution plan defined for the process. In the standard approach, the OS scheduler has a single view of all processes, regardless of process type and process-specific behavior patterns. In contrast, at least some implementations described herein do not treat all processes the same, but rather use a process execution plan defined for a process (e.g., for a single unique process) and/or for a type of process (e.g., web server, graphics generation). A corresponding dynamic bias score is calculated for each process based on the process execution plan defined for that process.
Some processes run in a very predictable manner, but a standard OS scheduler does not know this. Furthermore, standard OS schedulers are unaware of application-related properties of the process (e.g., that the process performs a large amount of processing per second). Thus, the standard approach does not take into account the actual real-time processing requirements of the process. In contrast, at least some implementations described herein use dynamic bias scores to schedule execution of the process. The dynamic bias score indicates the actual real-time processing requirements of the process, such as whether the process is running in a very predictable manner and/or how much processing the process performs per second. Scheduling using dynamic bias scores may allocate resources in real time according to the real-time requirements of the process.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of components and/or methods set forth in the following description and/or illustrated in the drawings and/or examples. The invention is capable of other embodiments or of being practiced or of being carried out in various ways.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions that cause a processor to perform aspects of the invention.
A computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a corresponding computing/processing device or to an external computer or external storage device over a network such as the internet, a local area network, a wide area network, and/or a wireless network.
The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (local area network, LAN) or a wide area network (wide area network, WAN); or the connection may be made to an external computer (e.g., through the internet using an internet service provider). In some embodiments, electronic circuitry, including programmable logic circuitry, a field-programmable gate array (field-programmable gate array, FPGA), or a programmable logic array (programmable logic array, PLA), or the like, may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to customize the electronic circuitry to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may be performed out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Referring now to FIG. 1, a flow diagram of a method performed by one or more processors to schedule a process according to a dynamic bias score is provided for some embodiments. Referring also to FIG. 2, a block diagram of components of a system 200 for scheduling process execution by one or more processors according to a dynamic bias score is provided for some embodiments. Referring also to FIG. 3, a dataflow diagram describing an exemplary dataflow between a process 302, an operating system 304, a process execution plan 306, and a scheduler 308 is provided for some embodiments to schedule the process 302 according to a calculated dynamic bias score. Referring also to FIG. 4, another dataflow diagram that describes the flow of data between a process 402, a process execution plan 404, and a scheduler 406 is provided for some embodiments for scheduling the process 402 according to a calculated dynamic bias score. Referring also to FIG. 5, a dataflow diagram that describes exemplary dataflow between a producer 502, a consumer 504, an OS 506, a process execution plan (also referred to as a plan) 508, and a scheduler 510 is provided for some embodiments to schedule the consumer 504 according to a calculated dynamic bias score. The producer 502 generates data that is then processed by the consumer 504.
Referring also to FIG. 6, another dataflow diagram that describes the dataflow between a consumer 602, a producer 604, a process execution plan including a consumer counter 606, and a scheduler 608 is provided for some embodiments for scheduling the consumer 602 according to the calculated dynamic bias score. The system 200 may implement the acts of the methods described with reference to fig. 1 and/or fig. 3-6, with code instructions (e.g., code 206A) stored in the memory 206 being executed by the one or more processors 202 of the computing device 204.
For example, computing device 204 may be implemented as one or more of the following: a computing cloud, a cloud network, a computer network, one or more virtual machines (e.g., a hypervisor, a virtual server), a network node (e.g., a switch, a virtual network, a router, a virtual router), a single computing device (e.g., a client terminal), a set of computing devices arranged in parallel, a network server, a web server, a storage server, a local server, a remote server, a client terminal, a mobile device, a stationary device, a kiosk, a smartphone, a laptop computer, a wearable computing device, an eyeglass computing device, a watch computing device, and a desktop computer.
Computing device 204 includes one or more processors 202, e.g., implemented as one or more central processing units (central processing unit, CPU), one or more graphics processors (graphics processing unit, GPU), one or more field programmable gate arrays (field programmable gate array, FPGA), one or more digital signal processors (digital signal processor, DSP), one or more application-specific integrated circuits (application-specific integrated circuit, ASIC), one or more custom circuits, processors connected with other units, and/or special-purpose hardware accelerators. One or more processors 202 may be implemented as a single processor, a multi-core processor, and/or a cluster of processors for parallel processing (which may include homogeneous and/or heterogeneous processor architectures). It is noted that one or more processors 202 may be designed to implement in hardware one or more features stored as code instructions 206A.
The memory 206 stores code instructions executable by the one or more processors 202 and may be implemented as, for example, random access memory (random access memory, RAM), dynamic random access memory (dynamic random access memory, DRAM), and/or storage class memory (storage class memory, SCM), such as non-volatile memory, magnetic media, semiconductor memory devices, hard drives, removable memory, and optical media (e.g., DVD or CD-ROM). For example, the memory 206 stores one or more of the following: code 206A that, when executed by the one or more processors 202, implements one or more features and/or acts of the method described with reference to FIG. 1; one or more processes 206B for which a dynamic bias score is calculated and which are scheduled for execution by the one or more processors 202; a scheduler 206C by which the one or more processors 202 schedule the one or more processes 206B for execution according to the dynamic bias score; and one or more process execution plans 206D, as described herein.
The computing device 204 may include a data storage device 216 for storing data, e.g., a semantic parameter store 216A that stores observed and/or calculated semantic parameters for calculating dynamic bias scores, and/or an interface 216B (e.g., application programming interface (application programming interface, API), software development kit (software development kit, SDK), function call, etc.) for retrieving semantic parameters. The data store 216 can be implemented as memory, a local hard drive, virtual memory, a removable storage unit, an optical disk, a storage device, and/or a remote server and/or computing cloud (e.g., accessed via a network connection).
It should be noted that data may be stored on memory 206 and/or data storage device 216. FIG. 2 depicts an example, which is not necessarily limiting. For example, the data storage 216 may store predefined process execution plans. The process execution plan may be selected and loaded into memory 206 according to the type of process.
Computing device 204 may include one or more of a network interface 218 for connecting to network 214, such as a network interface card, a wireless interface to a wireless network, a physical interface for connecting to a cable for a network connection, a virtual interface implemented in software, network communication software providing higher-level network connections, and/or other implementations.
For example, network 214 may be implemented as the Internet, a local area network, a virtual private network, a wireless network, a cellular network, a local bus, a point-to-point link (e.g., wired), and/or combinations thereof.
Computing device 204 may be connected to one or more servers 210 and/or one or more client terminals 212 using network 214 (or another communication channel, such as by a direct link (e.g., cable, wireless) and/or an indirect link (e.g., via an intermediate computing unit such as a server and/or via a storage device).
The computing device 204 may include one or more physical user interfaces 208 and/or be in communication with the one or more physical user interfaces 208, the one or more physical user interfaces 208 including mechanisms that facilitate user input of data and/or viewing of data (e.g., defining a process execution plan), optionally within a GUI. For example, the exemplary user interface 208 includes one or more of a touch screen, a display, a keyboard, a mouse, voice activated software using a speaker and microphone.
Referring back to FIG. 1, at 102, a process execution plan is set and/or selected for a process. Each process execution plan of each process (and/or each type of process) is different from the process execution plans of other processes (and/or other types of processes).
A process is an instance of a running program. A process may have attributes such as a memory context, a standard priority value (e.g., indicating how much CPU time the process gets), and/or a process ID. The dynamic bias score (e.g., calculated as described with reference to 106) may adjust the standard priority value, and/or may calculate an actual priority value calculated as a combination of the standard priority value and the dynamic bias score, as described herein.
Alternatively, the process is one of a plurality of candidate processes of a common type, such as a web server, a message processing process, a graphics generation process, and the like. The plurality of candidate processes of the common type are associated with a common process execution plan defined for the common type. The common process execution plan for a common type may be different from other process execution plans defined for other common type processes, each common type being for multiple candidate processes.
The process execution plan may control how process scheduling is affected by system events. The process execution plan may define an interface for monitoring semantic parameters. The process execution plan may also be responsible for checking the environmental elements used by the process execution plan implementation. The process execution plan outputs a dynamic bias score and an optional affinity map for the process. For example, a process execution plan may be implemented as code (e.g., written by a developer) and/or as a selection and/or definition of values from a predefined list of parameters and/or variables. The process execution plan may be automatically generated by another code (e.g., based on analysis of the process), and/or selected from a predefined process execution plan.
The process execution plan defines how the dynamic bias score is calculated from the semantic parameters. The process execution plan monitors the runtime environment to obtain the semantic parameters, and reflects the process's resource requirements by calculating the dynamic bias score from the obtained semantic parameters. The monitoring feature described with reference to 104 of FIG. 1 and the calculation of the dynamic bias score described with reference to 106 of FIG. 1 are performed by the process execution plan. The process execution plan is distinct from the scheduler that schedules execution of processes, described with reference to 110 of FIG. 1.
When a process runs, the process may set a process execution plan according to its semantic parameters. Alternatively, the process execution plan of the process may be set by another process on behalf of the process. For example, a service manager (e.g., systemd) may be extended to apply a process execution plan to a process before the process starts. For example, for a packet processing process, the semantic parameters may include any incoming packets to be processed (e.g., scanned, hashed, or decrypted). Thus, many pending packets indicate that the packet processing process requires more CPU time. The process execution plan may be a function written by a developer (e.g., of the packet processing process) and loaded into kernel space by the packet processing process at process start-up. The function examines the metrics of incoming connections and calculates a dynamic bias score based on the number of packets queued on the connection flows.
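As an illustrative sketch of such a function, the following Python code computes a dynamic bias score from the number of packets queued on connection flows. This is an assumption-laden sketch: the patent describes a kernel-space function (e.g., loaded at process start-up), while the flow structure, names, and the `max_backlog` saturation threshold here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ConnectionFlow:
    queued_packets: int  # packets waiting on this connection flow

def packet_plan_bias(flows, max_backlog=1000):
    # Sum the packet backlog across all connection flows and map it to a
    # bias score in [0, 1]; max_backlog is an assumed saturation point.
    pending = sum(f.queued_packets for f in flows)
    return min(pending / max_backlog, 1.0)

flows = [ConnectionFlow(120), ConnectionFlow(380)]
score = packet_plan_bias(flows)  # 500 / 1000 -> 0.5
```

A real implementation would run in kernel space (e.g., as a loaded function examining connection metrics directly) rather than over user-space objects.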
The scheduler that schedules the process may be notified of the set process execution plan. For example, the scheduler is provided with the process execution plan itself (e.g., its code), a pointer to the process execution plan, and/or an indication of the process execution plan (e.g., when one process execution plan is selected from a plurality of process execution plans). The process execution plan may be executed by the scheduler, which schedules execution of the process according to the dynamic bias score calculated by the process execution plan. Having the scheduler execute the process execution plan may ensure that the process's priority is updated according to the load even when the process itself is too busy to do so.
Optionally, the process is analyzed to identify the type of the process. For example, the inputs and/or outputs of the process are analyzed. For example, a process that receives packet input may be identified as a packet processing process. In another example, the behavior of the process is analyzed. For example, a process that is invoked when a large amount of graphics is computed may be a graphics-related process. The process execution plan may be selected from a plurality of off-the-shelf (i.e., predefined) process execution plans according to the type of the process. A number of different off-the-shelf process execution plans may be provided. Multiple off-the-shelf plans may be made available to a process without necessarily requiring software changes to the process. For example, an external tool may configure a process execution plan for a process externally by providing configuration values. A number of different off-the-shelf plans may be designed to cover a wide range of workloads for different types of processes, such as web servers, databases, and HTTP proxies.
At 104, semantic parameters are monitored. The semantic parameters are defined by a process execution plan associated with the process. The semantic parameters indicate the amount of processing that the process is expected to perform in a future time interval.
Semantic parameters are monitored using one or more interfaces defined by the process execution plan. The interface may provide information that is otherwise unavailable to, and/or not considered by, standard process scheduling.
The semantic parameters may be based on monitoring the environment of the process execution plan, which may be everything that is disclosed by the runtime selection of the process execution plan. For example, when the process execution plan is a kernel module, then the functions and data available to each module may be accessed by the process execution plan. In another example, when the process execution plan is an eBPF function running at a hook point in the kernel, then the environment of the process execution plan includes all user-defined mappings, kernel-exposed functions, and other eBPF subroutines.
The process may be executed in user space and the process execution plan may be executed in kernel space. Executing a process execution plan in kernel space provides access to semantic parameters that are otherwise unavailable in user space. Additional semantic parameters from kernel space may better determine the dynamic bias score, resulting in a more optimal scheduling of processes.
Examples of semantic parameters include: checking metrics of the process, checking input/output patterns of the process, receiving explicit scheduling hints from user space, typical information exposed by the operating system (OS) of choice, and information the OS permits to be used.
At 106, a dynamic bias score is calculated according to the process execution plan. The dynamic bias score may be calculated based on the semantic parameters.
The dynamic bias score may be in a range, for example, 0-1, or 0-10, or 0-100, or other values. The dynamic bias score may be an integer and/or a real number. The range may be discrete and/or continuous. Alternatively, the dynamic bias score may be a class, e.g., low, semi-low, medium, semi-high, etc.
The dynamic bias score may be calculated using different methods. In one example, the dynamic bias score is calculated using a trained machine learning (ML) model, such as a statistical classifier (e.g., neural network, support vector machine, regressor) that maps the semantic parameters to a dynamic bias score output. Such an ML model may be trained on a recorded training dataset of semantic parameters labeled with ground-truth dynamic bias scores. In another example, the dynamic bias score is calculated using a set of rules that maps certain combinations of semantic parameters to the dynamic bias score. In yet another example, the dynamic bias score is calculated using a mathematical formula and/or function that computes the dynamic bias score from the semantic parameters provided as input.
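The rule-set approach can be sketched as follows. The thresholds, parameter names (`pending_items`, `user_hint`), and the 0-10 range are hypothetical; the document only states that rules map combinations of semantic parameters to a score.

```python
def rule_based_bias(semantic_params):
    # Hypothetical rule set: map the pending-work backlog to a base score
    # in 0-10, then add any explicit scheduling hint from user space.
    pending = semantic_params.get("pending_items", 0)
    hint = semantic_params.get("user_hint", 0)
    if pending == 0:
        base = 0
    elif pending < 10:
        base = 3
    elif pending < 100:
        base = 6
    else:
        base = 9
    return min(base + hint, 10)  # clamp to the top of the score range

rule_based_bias({"pending_items": 42, "user_hint": 1})  # -> 7
```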
At 108, an affinity map may be determined. The affinity map indicates which of the available processing resources are to be allocated to the process. The affinity map may be determined from an analysis of the semantic parameters. For example, the affinity map is computed using a trained machine learning (ML) model (e.g., neural network, support vector machine, regressor) that maps the semantic parameters to the resources to be allocated. Such an ML model may be trained on a recorded training dataset of semantic parameters labeled with ground-truth resource allocations. In another example, the affinity map is determined using a set of rules that maps certain combinations of semantic parameters to allocated resources. In yet another example, the affinity map is determined using a mathematical formula and/or function that computes the resource allocation from the semantic parameters provided as input.
The affinity map further improves the efficiency of processing resource usage, for example by allocating the most suitable processing resources to the process. Affinity may be calculated dynamically, providing dynamic process affinity, for example in a multi-core environment. Affinity may suggest to the scheduler the best location for the process to take advantage of data locality (e.g., CPU cache, NUMA node). For example, a process execution plan may find that upcoming work resides on a NUMA node rather than in the CPU cache, so the affinity map indicates that the NUMA node is to be assigned to the process. In another example, when it is determined that a process uses an offload engine that is present on only a subset of cores (in a non-homogeneous system), the affinity map indicates that the subset of cores is to be assigned to the process.
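The two affinity examples above (NUMA-node data locality, and an offload engine present on only a subset of cores) can be sketched with hypothetical rules. The core dictionaries, field names, and rule order are assumptions for illustration:

```python
def affinity_map(semantic_params, all_cores):
    # Hypothetical rules: prefer cores with the needed offload engine,
    # otherwise cores on the NUMA node where the upcoming data resides.
    if semantic_params.get("needs_offload_engine"):
        return [c for c in all_cores if c["has_offload"]]
    node = semantic_params.get("data_numa_node")
    if node is not None:
        return [c for c in all_cores if c["numa_node"] == node]
    return all_cores  # no constraint: any core may be used

cores = [
    {"id": 0, "numa_node": 0, "has_offload": True},
    {"id": 1, "numa_node": 0, "has_offload": False},
    {"id": 2, "numa_node": 1, "has_offload": False},
]
affinity_map({"data_numa_node": 1}, cores)  # -> only the core with id 2
```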
At 110, one or more processors schedule execution of the process according to the dynamic bias score. The scheduling of the process may further be based on the affinity map.
The dynamic bias score effectively indicates the real-time processing resource requirements of each process to the scheduler, thereby enabling efficient scheduling and allocation of processing resources. The dynamic bias score may represent, from the process's perspective, how favorable the scheduling of the process should be in order for it to perform optimally. A higher dynamic bias score may indicate that a first process is to obtain more favorable scheduling than a second process with a lower dynamic bias score. For example, a process with a high immediate processing workload is assigned a high dynamic bias score. A process with a lower, or no, immediate processing workload is assigned a lower dynamic bias score.
The processes may be prioritized according to the value of the dynamic bias score. Optionally, the range of dynamic bias scores for different processes is the same, so that different bias scores may be compared, thereby ordering the processes according to their dynamic bias scores. For example, a first process having a relatively higher dynamic bias score (e.g., 7/10) is scheduled to receive relatively more processing resources during a future time interval than a second process having a relatively lower dynamic bias score (e.g., 4/10).
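The comparison and ordering of processes by dynamic bias score might be sketched as follows (the process dictionaries and field names are hypothetical):

```python
def prioritize(processes):
    # Sort by dynamic bias score, highest first; because the scores share
    # one range, direct comparison between processes is meaningful.
    return sorted(processes, key=lambda p: p["bias"], reverse=True)

procs = [{"pid": 1, "bias": 4}, {"pid": 2, "bias": 7}, {"pid": 3, "bias": 1}]
order = [p["pid"] for p in prioritize(procs)]  # -> [2, 1, 3]
```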
The scheduling of the process may be performed by a scheduler. The scheduler may manage system resources. The scheduler may obtain dynamic bias scores from the process execution plan and may obtain optional affinity maps from the process execution plan.
The process may be associated with a priority value, which may be defined using standard methods. The actual process priority may be calculated as a combination of the priority value and the dynamic bias score, for example: an increment of the priority value, a sum of the priority value and the dynamic bias score, a product of the priority value and the dynamic bias score, the dynamic bias score applied as a weight to the priority value, the priority value applied as a weight to the dynamic bias score, and/or a mathematical function/equation taking the priority value and the dynamic bias score as inputs. The scheduling of the process may be based on the actual process priority.
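Some of the listed combinations can be sketched as follows. The mode names, default weight, and numeric scales are illustrative assumptions; the document lists the combination forms without prescribing a formula.

```python
def actual_priority(priority_value, bias_score, mode="sum", weight=1.0):
    # Combine a standard priority value with a dynamic bias score; the
    # modes mirror combinations listed in the text (sum, product, weighted).
    if mode == "sum":
        return priority_value + bias_score
    if mode == "product":
        return priority_value * bias_score
    if mode == "weighted":
        return priority_value + weight * bias_score
    raise ValueError(f"unknown mode: {mode}")

actual_priority(20, 7)                                # -> 27
actual_priority(20, 7, mode="product")                # -> 140
actual_priority(20, 7, mode="weighted", weight=0.5)   # -> 23.5
```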
At 112, one or more features described with reference to 102-110 may be iterated. Different processes may be iterated to schedule each of the different processes. Alternatively or additionally, iterations may be performed on the same process over multiple time intervals for rescheduling the same process during different time intervals, e.g., based on a dynamic bias score indicating real-time dynamic calculation of real-time resource requirements of the process.
Optionally, during the iteration, the monitoring feature described with reference to 104 of FIG. 1 is performed for each process of the plurality of processes. A respective dynamic bias score is calculated for each process according to the respective process execution plan associated with that process, and the processes are scheduled for execution according to the respective dynamic bias scores.
The dynamic bias score may enable efficient scheduling of multiple processes. The dynamic bias score for each process may be calculated based on a different process execution plan defined for each process. Thus, each dynamic bias score captures the real-time processing requirements of each process, as defined by its respective process execution plan. Overall scheduling is performed according to the multiple dynamic bias scores calculated using the different plans, to obtain an overall efficient allocation of processing resources among the different processes.
Processing resources are allocated according to the needs of each of the plurality of processes as indicated by the respective dynamic bias score for each respective process. For example, a process with a relatively high dynamic bias score will schedule with a higher priority than other processes with relatively low dynamic bias scores. This may allocate more processing resources to processes requiring processing resources, as indicated by a higher dynamic bias score, and/or allocate less processing resources to processes that may run with less resources, as indicated by a lower dynamic bias score.
The monitoring feature described with reference to 104 of FIG. 1, the dynamic bias score calculation feature described with reference to 106 of FIG. 1, the affinity map feature described with reference to 108 of FIG. 1, and the scheduling feature described with reference to 110 of FIG. 1 may optionally be iterated over a plurality of time intervals for real-time dynamic scheduling of processes according to a real-time dynamic bias score based on real-time semantic parameters.
The scheduling of the process is dynamically adjusted according to changes in the dynamic bias score, improving the efficient allocation of processing resources according to the real-time requirements of the process. The dynamic bias score may be dynamically recalculated to reflect real-time changes in the amount of processing performed by the process, as indicated by changes in the semantic parameters. The real-time variation of the dynamic bias score enables the scheduling of the process to be dynamically adapted in real time. For example, when the dynamic bias score increases, indicating that the workload of the process has increased, the process may be scheduled to receive additional processing resources. When the dynamic bias score decreases, indicating that the workload of the process has decreased, the process may be scheduled to receive fewer processing resources, thereby freeing processing resources for use by other processes.
Returning now to FIG. 3, at 310, an initialization phase is initiated. At 310A, process 302 establishes process execution plan 306 via OS 304, for example by a generate operation 310B. A "complete" message 310C may be provided back to process 302, indicating that process execution plan 306 has been initiated.
At 312, a dynamic bias score is calculated. At 312A, the scheduler 308 invokes the process execution plan 306. At 312B, the process execution plan 306 accesses relevant information (e.g., semantic parameters) from the OS 304. At 312C, OS 304 provides information (e.g., semantic parameters) to process execution plan 306. At 312D, the process execution plan 306 provides a dynamic bias score (calculated from the semantic parameters) to the scheduler 308. In 312E, the scheduler calculates the actual process priority based on the dynamic bias score and the standard priority of the process.
At 314, process scheduling is performed. At 314A, scheduler 308 calculates the CPU time slices to be allocated to process 302 based on the calculated actual process priorities. At 314B, scheduler 308 allocates a time slice to process 302. At 314C, process 302 is executed by the CPU according to the specified schedule until the time slice is complete or the process exits.
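The time-slice calculation at 314A can be sketched as a proportional-share division. This is a hypothetical policy for illustration; the document states that time slices are calculated from the actual process priorities but does not specify the formula.

```python
def allocate_time_slices(period_ms, procs):
    # Divide a scheduling period among processes in proportion to each
    # process's actual priority (a hypothetical proportional-share policy).
    total = sum(p["actual_priority"] for p in procs)
    return {p["pid"]: period_ms * p["actual_priority"] / total for p in procs}

procs = [{"pid": 1, "actual_priority": 30}, {"pid": 2, "actual_priority": 10}]
allocate_time_slices(100, procs)  # -> {1: 75.0, 2: 25.0}
```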
Referring back now to FIG. 4, process 402 executes in user space 408. The process execution plan 404 and scheduler 406 execute in kernel space 410. At 450A, process 402 designates process execution plan 404, e.g., maps the process execution plan to the process and/or selects the process execution plan from a plurality of process execution plans based on the type of the process. At 450B, the process execution plan observes the execution environment 412 (e.g., process and OS data structures) to obtain one or more semantic parameters. At 450C, the process execution plan 404 sets the policy-driven priority 414, i.e., by providing a dynamic bias score calculated from the semantic parameters. Actual prioritizer code 416 calculates the actual priority of process 402 based on the dynamic bias score 414 (i.e., the policy-driven priority) and the local priority 418. At 450D, the scheduler allocates process 402 to a processing resource according to the calculated actual priority.
Returning now to FIG. 5, at 512, an initialization phase is initiated. At 512A, consumer process 504 establishes process execution plan 508 via OS 506, for example by a generate operation 512B. The process execution plan 508 controls an internal counter of the pending work items of consumer 504. A "complete" message 512C may be provided back to consumer 504, indicating that process execution plan 508 has been initiated. Consumer 504 also receives a counter handle that identifies the internal counter of plan 508. At 512D, consumer 504 shares the counter handle with producer 502, for example when producer 502 connects to consumer 504.
In 514, a dynamic bias score is calculated. At 514A, the scheduler 510 invokes the process execution plan 508. At 514B, the process execution plan 508 calculates a dynamic bias score based on the current counter value. At 514C, the process execution plan 508 provides a dynamic bias score (calculated from the counter value) to the scheduler 510. In 514D, the scheduler 510 calculates the actual process priority based on the dynamic bias score and the standard priority of the process.
At 516, process scheduling is performed. At 516A, scheduler 510 calculates the CPU time slices to be allocated to consumer 504 based on the calculated actual process priority. At 516B, scheduler 510 allocates the time slices to consumer 504. At 516C, consumer 504 is executed by the CPU according to the specified schedule until the time slice is complete or the process exits.
At 518, producer-consumer (i.e., 502-504) interactions occur prior to and/or in parallel with the calculation of the dynamic bias score at 514, dynamically updating the counter.
At 520, a loop is implemented that represents the adding of work from producer 502 to consumer 504. At 520A, producer 502 sends data (e.g., submits work) to consumer 504. In response to sending the data, producer 502 accesses an API of OS 506 using the handle at 520B, and the counter of plan 508 is incremented at 520C. The API may be, for example, enqueue_work(handle), which increments the internal counter. At 520D, OS 506 may send a message to producer 502 indicating that the increment of the internal counter has been completed.
At 522, a loop is implemented that represents the consumer 504 processing the work. At 522A, consumer 504 receives work from producer 502. At 522B, the consumer processes the work. At 522C, in response to completing the processing, consumer 504 accesses another API of OS 506 using the handle, and at 522D the counter of plan 508 is decremented. The API may be, for example, dequeue_work(handle), which decrements the internal counter.
Referring back now to 514D, the scheduler 510 assigns actual priorities based on the respective internal counter of the plan 508 of each consumer 504. A consumer 504 with a larger internal counter value (indicating more work waiting to be processed) is scheduled more favorably than a consumer 504 with a lower internal counter value (indicating less work waiting to be processed).
Referring back to FIG. 6, another dataflow diagram is provided that describes the dataflow between a consumer 602, a producer 604, a process execution plan 606 that includes a consumer counter, and a scheduler 608 for scheduling the consumer 602 according to the calculated dynamic bias score. Producer 604 generates data that is then processed by consumer 602. At 650A, consumer 602 specifies the process execution plan 606, which includes an incrementable and decrementable TODO counter, as described herein. At 650B, consumer 602 provides producer 604 with a reference to the TODO counter of process execution plan 606. At 650C, in response to sending work to consumer 602, producer 604 increments the TODO counter of process execution plan 606. At 650D, in response to completing the processing of the work received from producer 604, consumer 602 decrements the TODO counter. At 650E, the process execution plan 606 provides the dynamic bias score calculated from the current value of the TODO counter to the policy-driven priority code 650F. The actual consumer priority code 650G calculates the actual priority from the dynamic bias score 650H. At 650I, the scheduler 608 schedules the consumer 602 according to the actual priority.
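The TODO-counter plan of FIGS. 5 and 6 can be sketched in user-space Python as follows. The class, the method names mirroring enqueue_work/dequeue_work, and the `max_pending` normalization are assumptions for illustration; in the document, the counter lives inside the kernel-side plan and is reached through OS APIs via a handle.

```python
class TodoCounterPlan:
    # Hypothetical process execution plan holding a TODO counter of pending
    # work items, updated by the producer and consumer via a shared handle.
    def __init__(self, max_pending=100):
        self.counter = 0
        self.max_pending = max_pending

    def enqueue_work(self):
        # Invoked on behalf of the producer when work is submitted.
        self.counter += 1

    def dequeue_work(self):
        # Invoked on behalf of the consumer when a work item completes.
        self.counter -= 1

    def dynamic_bias(self):
        # Sampled by the scheduler; saturates at 1.0 once the backlog
        # reaches the assumed max_pending level.
        return min(self.counter / self.max_pending, 1.0)

plan = TodoCounterPlan()
for _ in range(30):
    plan.enqueue_work()   # producer submits 30 items
plan.dequeue_work()       # consumer completes one item
plan.dynamic_bias()       # -> 0.29
```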
Another example is now described, in which the process scheduled according to the dynamic bias score is an HTTP proxy. The HTTP proxy process is responsible for receiving HTTP requests from clients and relaying the requests to the server. The HTTP proxy process is also responsible for receiving HTTP responses from the server and relaying the responses to the clients. The proxy process establishes an HTTP proxy execution plan. The plan can access all socket streams of the proxy process. The plan examines the data going into and out of the proxy. The plan runs a state machine that counts the number of HTTP messages per flow, for example:
initial state: wait for the start of the message (first byte).
Increment the number of messages.
Check the header when it appears, find Content-Length.
Scan until the end of the headers.
Skip the next Content-Length bytes.
Transition to the initial state.
Thus, the plan can access the total number of messages received and sent by the proxy. The plan calculates a dynamic bias score based on the number of pending messages (i.e., calculated as the difference between the number received and the number relayed). The scheduler schedules the proxy process according to the actual process priority, which is calculated from the dynamic bias score. A proxy process with a higher dynamic bias score (representing more pending messages) is scheduled more favorably than other proxy processes with lower dynamic bias scores (representing fewer pending messages).
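The per-flow message-counting state machine above can be sketched as follows. This simplified version assumes a fully buffered byte stream, a well-formed Content-Length header, and ignores chunked transfer encoding, all of which a real streaming implementation would have to handle incrementally.

```python
def count_http_messages(stream: bytes) -> int:
    # Walk the byte stream: for each message, find the end of the headers,
    # read Content-Length, skip the body, and return to the initial state.
    count, pos = 0, 0
    while pos < len(stream):
        head_end = stream.find(b"\r\n\r\n", pos)
        if head_end == -1:
            break  # incomplete message: wait for more bytes
        count += 1  # first byte of a message was seen
        headers = stream[pos:head_end].decode("latin-1")
        length = 0
        for line in headers.split("\r\n"):
            if line.lower().startswith("content-length:"):
                length = int(line.split(":", 1)[1])
        pos = head_end + 4 + length  # skip the body, back to initial state
    return count

msg = b"POST / HTTP/1.1\r\nContent-Length: 5\r\n\r\nhello"
count_http_messages(msg + msg)  # -> 2
```

The pending-message count driving the bias score would then be the difference between the counts on the receiving and relaying flows.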
Other systems, methods, features and advantages of the invention will be or will become apparent to one with skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the invention, and be protected by the accompanying claims.
The description of the various embodiments of the present invention is intended for purposes of illustration only and is not intended to be exhaustive or limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technological improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
It is expected that during the life of a patent maturing from this application many relevant processes and drivers will be developed, and the scope of the terms process and driver is intended to include all such new technologies a priori.
The term "about" as used herein means ± 10%.
The terms "including", "having", and variations thereof mean "including but not limited to". These terms encompass the terms "consisting of" as well as "consisting essentially of".
The phrase "consisting essentially of" means that a composition or method may include additional ingredients and/or steps, provided that the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.
As used herein, the singular forms "a", "an" and "the" include plural referents unless the context clearly dictates otherwise. For example, the term "one complex" or "at least one complex" may include a plurality of complexes, including mixtures thereof.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration". Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments, and/or as excluding the incorporation of features from other embodiments.
The word "optionally" is used herein to mean "is provided in some embodiments and not provided in other embodiments". Any particular embodiment of the invention may include a plurality of "optional" features unless such features conflict.
In this application, various embodiments of the invention may be presented in a range format. It should be understood that the description of the range format is merely for convenience and brevity and should not be construed as a fixed limitation on the scope of the present invention. Accordingly, the description of a range should be considered to have specifically disclosed all possible sub-ranges as well as individual values within the range. For example, a description of a range such as from 1 to 6 should be considered to have specifically disclosed sub-ranges from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6, etc., and individual numbers within that range such as 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
When a range of numbers is referred to herein, it is intended to encompass any of the recited numbers (fractional or integer) within the range indicated. The phrases "a range between a first indicated number and a second indicated number" and "a range from a first indicated number to a second indicated number" are used interchangeably herein to mean that all fractions and integers between the first indicated number and the second indicated number are included.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination, or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
It is the applicant's intention that all publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. Furthermore, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. With respect to the use of section titles, the section titles should not be construed as necessarily limiting. Further, any priority documents of the present application are incorporated herein by reference in their entirety.

Claims (16)

1. A computing device (204) for scheduling execution of at least one process (206B), characterized by:
monitoring a plurality of semantic parameters (216A) defined by a process execution plan (206D) associated with a process, wherein the process execution plan (206D) is different from process execution plans of other processes,
wherein the plurality of semantic parameters indicates an amount of processing that the process is expected to execute within a future time interval;
calculating a dynamic deviation score according to the process execution plan;
at least one processor (202) schedules execution of the process according to the dynamic bias score.
2. The computing device of claim 1, wherein the monitoring is performed for each of a plurality of processes, wherein a respective dynamic bias score is calculated for each of the plurality of processes according to a respective process execution plan associated with each of the plurality of processes, and wherein the plurality of processes are scheduled to execute according to the respective dynamic bias scores.
3. The computing device of any of the preceding claims, wherein the monitoring, the calculating the dynamic bias score, and the scheduling are iterated over a plurality of time intervals for real-time dynamic scheduling of the process according to a real-time dynamic bias score based on real-time semantic parameters.
4. The computing device of any of the above claims, wherein a first process having a relatively higher dynamic bias score is scheduled to receive relatively more processing resources during a future time interval than a second process having a relatively lower dynamic bias score.
5. The computing device of any of the preceding claims, wherein the process execution plan is executed by a scheduler (206C) that schedules execution of the process according to the dynamic bias score calculated by the process execution plan.
6. The computing device of claim 1, wherein the monitoring and the calculating the dynamic bias score are implemented by the process execution plan, wherein the process execution plan is different from the scheduler that schedules execution of the process.
7. The computing device of claim 5 or 6, further comprising establishing the process execution plan for the process and informing the scheduler of the process execution plan.
8. The computing device of any of the preceding claims, wherein the plurality of semantic parameters are monitored using at least one interface defined by the process execution plan.
9. The computing device of any of the preceding claims, wherein the process comprises one of a plurality of processes of a common type associated with a process execution plan defined for the plurality of processes of the common type, wherein the process execution plan defined for the plurality of processes of the common type is different from process execution plans defined for other common types of processes.
10. The computing device of any of the preceding claims, further comprising analyzing the process to identify a type of the process, and selecting the process execution plan from a plurality of off-the-shelf process execution plans based on the type of the process.
11. The computing device of any of the preceding claims, wherein the plurality of semantic parameters are selected from a list of semantic parameters comprising at least one of: inspecting metrics of the process, inspecting input/output patterns of the process, receiving explicit scheduling hints from user space, typical information that the operating system (OS) chooses to disclose, and information that the OS is permitted to use.
12. The computing device of any of the preceding claims, further comprising: an affinity map indicating which of a plurality of processing resources to allocate to the process is determined based on an analysis of the plurality of semantic parameters.
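The affinity-map determination of claim 12 can be sketched as below. The cache-sensitivity rule and the function name are assumptions introduced for illustration; the claims do not specify how the semantic parameters map to resources.

```python
def compute_affinity_map(semantic_params: dict, total_cores: int = 8) -> set:
    """Derive which processing resources (CPU cores here) to allocate to a process
    from an analysis of its semantic parameters (claim 12).

    Illustrative rule: a cache-sensitive process is confined to a small core set
    to preserve cache locality; any other process may run on all cores.
    """
    if semantic_params.get("cache_sensitive", False):
        return {0, 1}  # pin to two cores
    return set(range(total_cores))
```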
13. The computing device of any of the preceding claims, wherein the process is associated with a priority value, and the scheduling of the process is based on an actual process priority calculated as a combination of the priority value and the dynamic bias score.
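Claims 4 and 13 together can be sketched as follows. The additive combination is an assumption: claim 13 requires only "a combination" of the priority value and the dynamic bias score, without fixing the formula.

```python
def actual_priority(priority_value: float, dynamic_bias_score: float) -> float:
    """Combine the static priority value with the dynamic bias score (claim 13).
    A simple additive combination is assumed for illustration."""
    return priority_value + dynamic_bias_score

def schedule_order(processes: list) -> list:
    """Order processes so that a higher actual priority receives relatively more
    processing resources during the future time interval (claim 4)."""
    return sorted(processes,
                  key=lambda p: actual_priority(p["prio"], p["bias"]),
                  reverse=True)
```

Note how a process with a lower static priority but a higher dynamic bias score can overtake one with a higher static priority.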
14. The computing device of any of the preceding claims, wherein the process is executed in the user space and the process execution plan is executed in a kernel space.
15. A method of scheduling execution of at least one process, comprising:
monitoring a plurality of semantic parameters defined by a process execution plan associated with a process, wherein the process execution plan is different from the process execution plans of other processes (104),
wherein the plurality of semantic parameters indicates an amount of processing that the process is expected to execute within a future time interval;
calculating a dynamic bias score according to the process execution plan (106);
scheduling, by at least one processor, execution of the process according to the dynamic bias score (110).
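The method of claim 15, iterated over time intervals as in claim 3, can be sketched as one monitor–score–schedule step. The weighted-sum scoring and proportional CPU-share allocation are assumptions for illustration; the claims only require that the score be calculated according to the process execution plan.

```python
def dynamic_bias_score(semantic_params: dict, plan: dict) -> float:
    """Score reflecting the amount of processing the process is expected to
    perform within the future time interval, weighted per its execution plan."""
    return sum(plan["weights"].get(name, 0.0) * value
               for name, value in semantic_params.items())

def schedule_interval(processes: list, plan: dict) -> dict:
    """One iteration of monitor -> score -> schedule: allocate CPU shares
    proportional to each process's dynamic bias score."""
    scores = {p["pid"]: dynamic_bias_score(p["params"], plan) for p in processes}
    total = sum(scores.values()) or 1.0
    return {pid: score / total for pid, score in scores.items()}
```

Repeating `schedule_interval` each interval with freshly monitored parameters gives the real-time dynamic scheduling of claim 3.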
16. A non-transitory medium (206), characterized in that it stores program instructions (206A) for scheduling execution of at least one process, wherein the program instructions (206A), when executed by a processor, cause the processor to:
monitoring a plurality of semantic parameters defined by a process execution plan associated with a process, wherein the process execution plan is different from process execution plans of other processes,
wherein the plurality of semantic parameters indicates an amount of processing that the process is expected to execute within a future time interval;
calculating a dynamic bias score according to the process execution plan;
and scheduling execution of the process on at least one processor according to the dynamic bias score.
CN202180100569.3A — Workload aware process scheduling — filing date 2021-09-22, priority date 2021-09-22, status: Pending, published as CN117751349A

Applications Claiming Priority (1)

PCT/EP2021/076020 (WO2023046274A1) — priority date 2021-09-22, filing date 2021-09-22 — Workload aware process scheduling

Publications (1)

CN117751349A — published 2024-03-22

Family ID: 77998985

Family Applications (1)

CN202180100569.3A (Pending) — Workload aware process scheduling

Country Status (2)

CN: CN117751349A
WO: WO2023046274A1

Also Published As

WO2023046274A1 — published 2023-03-30


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination