AU2020283588A1 - Reducing cache interference based on forecasted processor use - Google Patents
- Publication number
- AU2020283588A1
- Authority
- AU
- Australia
- Prior art keywords
- processor
- processors
- workload
- assignment
- executing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5033—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering data affinity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0811—Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/084—Multiuser, multiprocessor or multiprocessing cache systems with a shared cache
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0842—Multiuser, multiprocessor or multiprocessing cache systems for multiprocessing or multitasking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
- G06F9/3836—Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
- G06F9/3851—Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution from multiple instruction streams, e.g. multistreaming
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5044—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering hardware capabilities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5019—Workload prediction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1016—Performance improvement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/25—Using a specific main memory architecture
- G06F2212/254—Distributed memory
- G06F2212/2542—Non-uniform memory access [NUMA] architecture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/01—Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Medical Informatics (AREA)
- Computational Mathematics (AREA)
- Pure & Applied Mathematics (AREA)
- Mathematical Optimization (AREA)
- Mathematical Analysis (AREA)
- Algebra (AREA)
- Probability & Statistics with Applications (AREA)
- Debugging And Monitoring (AREA)
- Memory System Of A Hierarchy Structure (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
In various embodiments, a predictive assignment application computes a forecasted amount of processor use for each workload included in a set of workloads using a trained machine-learning model. Based on the forecasted amounts of processor use, the predictive assignment application computes a performance cost estimate associated with an estimated level of cache interference arising from executing the set of workloads on a set of processors. Subsequently, the predictive assignment application determines processor assignment(s) based on the performance cost estimate. At least one processor included in the set of processors is subsequently configured to execute at least a portion of a first workload that is included in the set of workloads based on the processor assignment(s). Advantageously, because the predictive assignment application generates the processor assignment(s) based on the forecasted amounts of processor use, the isolation application can reduce interference in a non-uniform memory access (NUMA) microprocessor instance.
Description
REDUCING CACHE INTERFERENCE BASED ON FORECASTED PROCESSOR
USE
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority benefit of the United States Provisional Patent Application titled, “PREDICTIVE CPU ISOLATION OF CONTAINERS,” filed on May 31, 2019 and having Serial No. 62/855,649, and claims priority benefit of the United States Patent Application titled, “REDUCING CACHE INTERFERENCE
BASED ON FORECASTED PROCESSOR USE,” filed on July 12, 2019, and having Serial No. 16/510,756. The subject matter of these related applications is hereby incorporated herein by reference.
BACKGROUND
Field of the Various Embodiments
[0002] Embodiments of the present invention relate generally to computer science and microprocessor technology and, more specifically, to techniques for reducing cache interference based on forecasted processor use.
Description of the Related Art
[0003] In a typical multiprocessor instance, a process scheduler assigns one or more execution threads to logical processors in order to execute various workloads. Different process schedulers can base these assignments on different criteria and heuristics. For example, the default process scheduler for the Linux kernel is the well-known Completely Fair Scheduler (“CFS”). The CFS implements a variation of a scheduling algorithm known as “weighted fair queuing,” where, at a simplified level, the CFS attempts to maintain a weighted fairness when scheduling processing times across different workloads. In so doing, when assigning execution threads to a logical processor, the CFS typically prioritizes the execution thread associated with the workload most “starved” for processing time (according to various workload-specific weights) and assigns that execution thread to the logical processor first.
[0004] One drawback of conventional process schedulers, though, is that the performance impact of sharing cache memories (also referred to herein as “caches”) in a hierarchical fashion among different groups of logical processors in a non-uniform
memory access (“NUMA”) multiprocessor instance is not properly considered when assigning execution threads to different logical processors. In particular, in a phenomenon known as “noisy neighbor,” when a group of logical processors shares the same cache, the manner in which each logical processor accesses the cache can negatively impact the performance of the other logical processors included in the same group of logical processors. For example, if a thread (also referred to herein as an “execution thread”) executing on one logical processor evicts useful data that another thread executing on a second logical processor has stored in a shared cache, then the throughput and/or latency of the second logical processor is typically degraded. Among other things, the evicted data needs to be re-cached for the thread executing on the second logical processor to perform efficient data accesses on that data. As a result of these types of cache interference scenarios, the time required to execute workloads on a NUMA microprocessor instance can be substantially increased. Further, because the time required to execute different workloads can vary based on the amount of cache interference as well as the type of cache interference, the execution predictability of workloads can be decreased, which can lead to preemptive over-provisioning of processors in cloud computing
implementations. Over-provisioning can result in some processors or microprocessor instances not being used, which can waste processor resources and prevent adequate processor resources from being allocated to other tasks.
[0005] To reduce the negative impacts resulting from the “noisy neighbor” phenomenon, some process schedulers implement heuristics that (re)assign cache memory and/or execution threads in an attempt to avoid cache interference
scenarios. For instance, the CFS can perform memory-page migrations and can reassign threads to different logical processors based on heuristics associated with a lowest-level cache (“LLC”). However, these types of heuristic-based mitigation strategies oftentimes do not reduce the performance and execution predictability issues associated with cache interference. For example, empirical results have shown that cache interference can increase the amount of time required to execute a workload on a NUMA microprocessor instance by a factor of three, even when heuristic-based migration strategies are being implemented.
[0006] As the foregoing illustrates, what is needed in the art are more effective techniques for executing workloads on logical processors.
SUMMARY
[0007] One embodiment of the present invention sets forth a computer-implemented method for executing workloads on processors. The method includes computing a forecasted amount of processor use for each workload included in a set of workloads using a trained machine-learning model; based on the forecasted amounts of processor use, computing a performance cost estimate associated with an estimated level of cache interference arising from executing the set of workloads on a set of processors; and determining at least one processor assignment based on the performance cost estimate, where at least one processor included in the set of processors is subsequently configured to execute at least a portion of a first workload included in the set of workloads based on the at least one processor assignment.
[0008] At least one technical advantage of the disclosed techniques relative to the prior art is that, with the disclosed techniques, cache interference in a non-uniform memory access (NUMA) microprocessor instance can be automatically and more reliably reduced. In particular, assigning workloads to processors in a NUMA microprocessor instance based on forecasted processor use can reduce cache interference in the NUMA microprocessor instance in a more systematic, data-driven fashion. Because reducing cache interference improves the latency and/or throughput of a processor, the time required to execute workloads in NUMA microprocessor instances can be substantially decreased. Further, the variances in both latency and throughput are decreased, thereby increasing execution
predictability and decreasing preemptive over-provisioning. These technical advantages represent one or more technological advancements over prior art approaches.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] So that the manner in which the above recited features of the various embodiments can be understood in detail, a more particular description of the inventive concepts, briefly summarized above, may be had by reference to various embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of the inventive concepts and are therefore not to be considered limiting of scope in any way, and that there are other equally effective embodiments.
[0010] Figure 1 is a conceptual illustration of a system configured to implement one or more aspects of the present invention;
[0011] Figure 2 is a more detailed illustration of one of the non-uniform memory access (“NUMA”) microprocessor instances of Figure 1, according to various embodiments of the present invention;
[0012] Figure 3 is a more detailed illustration of the predictive assignment application of Figure 1, according to various embodiments of the present invention;
[0013] Figure 4 is a more detailed illustration of the integer programming engine of Figure 3, according to various embodiments of the present invention;
[0014] Figure 5 is a more detailed illustration of the cost function of Figure 4, according to various embodiments of the present invention; and
[0015] Figure 6 is a flow diagram of method steps for executing workloads on processors that share at least one cache, according to various embodiments of the present invention.
DETAILED DESCRIPTION
[0016] In the following description, numerous specific details are set forth to provide a more thorough understanding of the various embodiments. However, it will be apparent to one skilled in the art that the inventive concepts may be practiced without one or more of these specific details.
[0017] Oftentimes a service provider executes many diverse applications in a high-throughput fashion using microprocessor instances included in a cloud computing environment (i.e., encapsulated shared resources, software, data, etc.). To ensure the proper execution environment for each application, the applications are organized into groups that are associated with different stand-alone executable instances of code referred to as “containers.” Each container provides the required dependencies and can execute any of the included applications. Notably, some containers may include batch applications while other containers may include applications that are intended to interface with users in real-time. When the service provider submits each container for execution, the service provider also specifies the number of logical processors on which to execute the container.
[0018] Typically, on each microprocessor instance, a process scheduler assigns one or more execution threads to logical processors in the microprocessor instance in order to execute various workloads, including containers. Different process schedulers can base these assignments on different criteria and heuristics. For example, the default process scheduler for the Linux kernel is the well-known
Completely Fair Scheduler (CFS). At a simplified level, the CFS attempts to maintain a weighted fairness when scheduling processing times across different workloads (e.g., tasks, containers, etc.). For example, if four equally-weighted containers that each requested eight processors are executing on a microprocessor instance, then the CFS attempts to provide each container with 25% of the processing time of the microprocessor instance.
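For illustration only, the weighted-fairness arithmetic described above can be sketched as follows; the container names and weights are hypothetical examples and are not CFS internals.

```python
# Minimal sketch of the weighted-fairness arithmetic described above. The
# container names and weights are hypothetical; this is not CFS internals.

def fair_shares(weights: dict) -> dict:
    """Return each workload's fraction of the instance's processing time."""
    total = sum(weights.values())
    return {name: weight / total for name, weight in weights.items()}

# Four equally weighted containers each receive 25% of the processing time.
print(fair_shares({"container_a": 1.0, "container_b": 1.0,
                   "container_c": 1.0, "container_d": 1.0}))
```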
[0019] Since a typical service provider can run millions of containers in a cloud computing environment each month, effectively utilizing the resources allocated to the service provider within the cloud computing environment is critical. One drawback of typical process schedulers (including the CFS), though, is that the scheduling criteria and heuristics are not properly optimized for non-uniform memory access (NUMA) multiprocessor instances that are often included in cloud computing environments.
As a general matter, in a NUMA microprocessor instance, the memory access time varies based on the memory location relative to the logical processor accessing the memory location. In particular, data in use by the logical processors is typically stored in a hierarchy of shared caches, where different levels of the cache are associated with different ranges of access times. For example, some NUMA multiprocessor instances include thirty-two cores that can each execute two hyper-threads via two logical processors. The two logical processors included in each core share a level 1 (“L1”) cache and a level 2 (“L2”) cache, and the thirty-two cores share a lowest-level cache (LLC). Consequently, each logical processor shares an L1 cache and an L2 cache with another logical processor and shares an LLC with sixty-three other logical processors.
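For the example instance described above (thirty-two cores, two logical processors per core, one shared LLC), the cache-sharing relationships can be sketched as follows; the numbering of logical processors and cores is an assumption made only for illustration.

```python
# Minimal sketch of the cache-sharing topology described above: the two logical
# processors of a core share that core's L1 and L2 caches, and all logical
# processors on the instance share a single LLC. Processor numbering is assumed.

LOGICAL_PROCESSORS_PER_CORE = 2
NUM_CORES = 32
NUM_LOGICAL_PROCESSORS = LOGICAL_PROCESSORS_PER_CORE * NUM_CORES  # 64

def cache_groups(logical_processor: int) -> dict:
    """Return the L1/L2 sharing group (the core) and the LLC group."""
    core = logical_processor // LOGICAL_PROCESSORS_PER_CORE
    return {"l1_l2_group": core, "llc_group": 0}

# Logical processors 0 and 1 share an L1/L2 cache; all 64 share the single LLC.
print(NUM_LOGICAL_PROCESSORS, cache_groups(0), cache_groups(1), cache_groups(63))
```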
[0020] In a phenomenon known as “noisy neighbor,” when a group of logical processors shares the same cache, the manner in which each logical processor accesses the cache can negatively impact the performance of the other logical processors included in the same group of logical processors. For example, if a thread executing on one logical processor evicts useful data that another thread executing
on a second logical processor has stored in a shared cache, then the throughput and/or latency of the second logical processor is typically degraded. Among other things, the evicted data needs to be re-cached for the thread executing on the second logical processor to perform efficient data accesses on that data. As a result of these types of cache interference scenarios, the time required to execute workloads on a NUMA microprocessor instance can be substantially increased. For example, empirical results have shown that cache interference can increase the amount of time required to execute a workload on a NUMA microprocessor instance by a factor of three.
[0021] Further, because the time required to execute different workloads can vary based on the amount of cache interference as well as the type of cache interference, the execution predictability of workloads can be decreased. If an application is particularly time-sensitive, such as an application that interfaces with users in real time, then the service provider may allocate more processors than necessary to ensure an acceptable response time based on the worst-case performance. Such “over-provisioning” can result in some processors or microprocessor instances not being used, which can waste processor resources and prevent adequate processor resources from being allocated to other tasks.
[0022] With the disclosed techniques, however, an isolation application executing on a NUMA microprocessor instance can override the default scheduling behavior of the process scheduler to reduce cache interference. In one embodiment, the isolation application transmits an assignment request to a predictive assignment application. For each container associated with the NUMA microprocessor instance, the predictive assignment application computes a forecasted processor usage based on a processor usage model, container metadata (e.g., attributes of the container), and a time series of measured processor usages. Processor usage is also referred to herein as “processor use.” The processor usage model is a machine learning model that is trained using historical container and processor usage data for any number of containers associated with any number of NUMA microprocessor instances. The forecasted processor usage of a container predicts a future processor usage statistic (e.g., average) for the container over a particular time window.
[0023] An integer programming engine then executes one or more integer programming algorithms based on the forecasted processor usages to generate a set
of processor assignments designed to reduce cache interference. More precisely, in a technique similar to optimizing the assignment of airplanes to routes, the integer programming algorithms perform optimization operations on a set of processor assignments to minimize a cost function that estimates the performance costs associated with cache interference based on the forecasted processor usages. The isolation application then configures the process scheduler to assign each of the containers to the requested number of logical processors in the NUMA
microprocessor instance based on the set of processor assignments.
[0024] At least one technical advantage of the disclosed techniques relative to the prior art is that the predictive assignment application can automatically and more reliably reduce cache interference associated with co-located threads (i.e., threads sharing at least one cache) in a NUMA microprocessor instance. Because reducing cache interference improves the latency and/or throughput of a processor, the time required to execute containers in NUMA microprocessor instances can be
substantially decreased. Further, the variances in both latency and throughput are decreased, thereby increasing execution predictability and decreasing preemptive over-provisioning. As a result, the overall amount of resources requested by containers can be reduced. These technical advantages represent one or more technological advancements over prior art approaches.
System Overview
[0025] Figure 1 is a conceptual illustration of a system 100 configured to implement one or more aspects of the present invention. As shown, the system 100 includes, without limitation, any number of non-uniform memory access (NUMA) microprocessor instances. For explanatory purposes, multiple instances of like objects are denoted with reference numbers identifying the object and parenthetical numbers identifying the instance where needed. In some embodiments, any number of the NUMA microprocessor instances 110 may be distributed across multiple geographic locations or included in one or more cloud computing environments (i.e., encapsulated shared resources, software, data, etc.) in any combination.
[0026] Each of the NUMA microprocessor instances 110 may be any type of physical system, virtual system, server, chip, etc., that includes multiple processors linked via a NUMA memory design architecture. As shown, each of the NUMA microprocessor instances 110 includes, without limitation, clustered processors 112
and processor memory 116. The clustered processors 112 include, without limitation, any number of logical processors (not shown in Figure 1) arranged in any technically feasible fashion. Each logical processor is capable of executing a different thread of execution. The term “thread” as used herein refers to any type of thread of execution, including a hyper-thread. The processor memory 116(x) includes any number of blocks of physical memory (e.g., caches) that store content, such as software applications and data, for use by the clustered processors 112(x) of the NUMA microprocessor instance 110(x).
[0027] In each of the NUMA microprocessor instances 110, the memory access time varies based on the memory location relative to the logical processor accessing the memory location. In particular, data in use by the logical processors is typically stored in a hierarchy of shared caches, where different levels of the cache are associated with different ranges of access times and are shared between different groups of logical processors. Figure 2 illustrates the organization of logical
processors and shared caches within one of the NUMA multiprocessor instances 110, according to various embodiments.
[0028] In general, each of the NUMA microprocessor instances 110 may include any number and type of logical processors and any amount and type of memory structured in any technically feasible fashion that is consistent with a NUMA
architecture. In various embodiments, the number of logical processors and/or amount of memory included in the NUMA microprocessor instance 110(x) may vary from the number of logical processors and/or amount of memory included in the NUMA microprocessor instance 110(y). In the same or other embodiments, the hierarchy of shared caches within the NUMA microprocessor instance 110(x) may vary from the hierarchy of shared caches within the NUMA microprocessor instance 110(y).
[0029] At any given point in time, each NUMA microprocessor instance 110(x) is configured to execute a different set of workloads. As referred to herein, a “workload” is a set of executable instructions that represents a discrete portion of work.
Examples of workloads include, without limitation, a thread, a task, a process, containers 190, etc. To execute a workload, a process scheduler 180(x) executing on the NUMA microprocessor instance 110(x) assigns threads to one or more of the logical processors included in the clustered processors 112(x). As a general matter,
different process schedulers can base these assignments on different criteria and heuristics. The process scheduler 180(x) is a software application that, at any given point in time, may be stored in any block of physical memory included in the processor memory 116(x) and may execute on any of the logical processors included in the clustered processors 112(x). The process scheduler 180(x) may perform scheduling and assignment operations at any level of granularity (e.g., thread, task, process, the container 190) in any technically feasible fashion.
[0030] One drawback of conventional process schedulers is that the performance impact of sharing caches in a hierarchical fashion among different groups of the logical processors in the associated NUMA multiprocessor instance 110 is not properly considered when assigning threads to the relevant logical processors. In a phenomenon known as “noisy neighbor,” when a group of the logical processors shares the same cache, the manner in which each of the logical processors accesses the cache can negatively impact the performance of the other logical processors in the same group of logical processors.
[0031] For example, if a thread executing on one logical processor evicts useful data that another thread executing on a second logical processor has stored in a shared cache, then the throughput and/or latency of the second logical processor is typically degraded. Among other things, the evicted data needs to be re-cached for the thread executing on the second logical processor to perform efficient data accesses on that data. As a result of these types of cache interference scenarios, the time required to execute workloads on each of the NUMA multiprocessor instances 110 can be substantially increased. Further, because the time required to execute different workloads can vary based on the amount of cache interference as well as the type of cache interference, the execution predictability of workloads can be decreased, which can lead to preemptive over-provisioning of processors in cloud computing implementations.
Optimizing Processor Assignments
[0032] To reliably reduce cache interference, each of the NUMA microprocessor instances 110 includes, without limitation, a different instance of an isolation application 170. The isolation application 170(x) interacts with a predictive
assignment application 160 to determine and subsequently implement optimized processor assignments that reduce cache interference within the NUMA microprocessor
instance 110(x). As referred to herein, a “processor assignment” is an assignment of a workload at any granularity for execution by one or more of the logical processors. With respect to processor assignment, the term “assignment” is symmetrical.
Consequently, a processor assignment between the container 190(y) and a particular logical processor can be referred to as either “assigning the container 190(y) to the logical processor” or “assigning the logical processor to the container 190(y).”
[0033] Note that the techniques described herein are illustrative rather than restrictive, and may be altered without departing from the broader spirit and scope of the invention. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described
embodiments and techniques. Further, in various embodiments, any number of the techniques disclosed herein may be implemented while other techniques may be omitted in any technically feasible fashion.
[0034] In particular and for explanatory purposes, the functionality of each of the isolation application 170, the predictive assignment application 160 and a training application 140 is described herein in the context of the containers 190. Each of the containers 190 is associated with a stand-alone executable instance of code that provides an independent package of software including, without limitation, any number of applications and the associated dependencies. Accordingly, each of the containers 190 may execute any number (including one) of tasks via any number (including one) of threads.
[0035] However, as persons skilled in the art will recognize, the techniques described herein are applicable to optimizing processor assignments at any level of granularity. In alternate embodiments, the isolation application 170 may assign any type of workload to any type of processors and the techniques described herein may be modified accordingly. For instance, in some embodiments, the isolation
application 170 assigns different threads to different central processing units (“CPUs”) included in the associated microprocessor instance 110(x), where each CPU implements two logical processors via hyper-threading.
[0036] As shown, the isolation application 170(x) is included in a scheduling infrastructure 120(x) that also includes, without limitation, a processor accounting controller 122(x), a processing time file 124(x), and the process scheduler 180(x).
Each of the isolation application 170(x), the processor accounting controller 122(x), and the process scheduler 180(x) is a software application that, at any given point in time, may be stored in any portion of the processor memory 116(x) and may execute on any of the clustered processors 112(x).
[0037] The isolation application 170(x) configures the processor accounting controller 122(x) to track the accumulated processing time for each of the containers 190 executing on the NUMA microprocessor instance 110(x). The processing time file 124(x) is a kernel data structure that, when accessed, triggers the processor accounting controller 122(x) to provide up-to-date accumulated processing times for the containers 190 executing on the NUMA microprocessor instance 110(x). The accumulated processing time of the container 190(y) is the aggregate processing time (in nanoseconds) consumed by the logical processor(s) executing thread(s) associated with the container 190(y).
[0038] The processor accounting controller 122 and the processing time file 124 may be implemented in any technically feasible fashion. For instance, in some embodiments, the isolation application 170 defines a different control group
(“cgroup”) for each of the containers 190, and the processor accounting controller 122(x) is a CPU accounting controller that is a component of the Linux kernel. A cgroup is a Linux feature that limits, accounts for, and isolates the resource usage (e.g., CPU, memory, disk I/O, network, etc.) of a collection of one or more processes. The CPU accounting controller provides the aggregated processing time for each cgroup. In alternate embodiments, the processor accounting controller 122(x) and the processing time file 124(x) may be replaced with any elements that track activities of processors and/or any type of workload at any granularity in any technically feasible fashion.
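As a concrete illustration of the embodiment described above, the accumulated processing time of a container can be read from the cgroup v1 CPU accounting controller; the mount point and per-container cgroup name shown here are assumptions, since the description leaves the mechanism open.

```python
# Hedged sketch: read a container's accumulated processing time (in nanoseconds)
# from the cgroup v1 CPU accounting controller. The mount point and cgroup name
# are assumptions; any mechanism that tracks per-workload processing time works.

from pathlib import Path

CPUACCT_ROOT = Path("/sys/fs/cgroup/cpuacct")

def accumulated_processing_time_ns(cgroup_name: str) -> int:
    """Aggregate CPU time, in nanoseconds, consumed by all threads in the cgroup."""
    usage_file = CPUACCT_ROOT / cgroup_name / "cpuacct.usage"
    return int(usage_file.read_text().strip())
```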
[0039] In operation, the isolation application 170(x) implements a usage update process, a training database update process, and a processor assignment
process asynchronously to one another. The isolation application 170(x) implements the usage update process at intervals of a usage update delta (e.g., 1 minute). The isolation application 170(x) may acquire the usage update delta in any technically feasible fashion. For instance, in some embodiments, the isolation application 170(x) acquires the usage update delta via a container management subsystem (not shown in Figure 1). For explanatory purposes only, if the usage update process u occurs at
a time of t, then the usage update process (u+1) occurs at a time of (t+usage update delta).
[0040] To implement the usage update process u at time t, the isolation application 170(x) accesses the processing time file 124 to obtain the current processing time for each of the containers 190 executing on the NUMA microprocessor instance 110(x). For each of the containers 190, the isolation application 170(x) then subtracts the processing time obtained during the usage update process (u-1) from the current processing time to determine a delta processing time. Subsequently, the isolation application 170(x) normalizes each of the delta processing times with respect to the total number of logical processors included in the clustered processors 112(x) to determine processor usages associated with the time t in units of “logical processors.” Processor usage is also referred to herein as “processor use.”
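The usage-update arithmetic can be sketched as follows. The normalization is described above only at a high level; the sketch below uses one plausible convention, dividing the delta processing time by the length of the update interval, which yields a value expressed in units of logical processors.

```python
# Hedged sketch of one usage-update step: convert the change in accumulated
# processing time over one update interval into a processor usage. Dividing by
# the interval length is one plausible reading of the normalization described
# above; the exact normalization used by the isolation application may differ.

NANOSECONDS_PER_SECOND = 1_000_000_000

def processor_usage(previous_time_ns: int, current_time_ns: int,
                    usage_update_delta_seconds: float) -> float:
    """Average number of logical processors consumed during the interval."""
    delta_ns = current_time_ns - previous_time_ns
    return delta_ns / (usage_update_delta_seconds * NANOSECONDS_PER_SECOND)

# 90 seconds of CPU time consumed over a 60-second interval -> 1.5 logical processors.
print(processor_usage(0, 90 * NANOSECONDS_PER_SECOND, 60.0))
```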
[0041] For each of the containers 190(y), the isolation application 170(x) appends the processor usage to a processor usage time series that is stored in the yth row of a processor usage matrix (not shown in Figure 1). Accordingly, each row in the processor usage matrix is associated with a different container 190 and each column is associated with a different time. In alternate embodiments, the isolation application 170(x) may acquire and store processor usages in any technically feasible fashion.
[0042] The isolation application 170(x) implements the training database update process at intervals of a training database update delta (e.g., 10 minutes). The isolation application 170(x) may acquire the training database update delta in any technically feasible fashion. For instance, in some embodiments, the isolation application 170(x) acquires the training database update delta via a container management subsystem. To implement the training database update process, the isolation application 170(x) updates a training database 130 based on the processor usage matrix and a container metadata vector (not shown in Figure 1). The container metadata vector includes, without limitation, container metadata for each of the different containers 190 associated with the NUMA microprocessor instance 110(x). As referred to herein, if the container 190 is executing or initiating execution on the NUMA microprocessor instance 110(x), then the container 190 is “associated with” the NUMA microprocessor instance 110(x).
[0043] The container metadata for the container 190(y) may include any amount and type of information that is associated with the container 190(y) and the execution of the container 190(y). For example, the container metadata for the container 190(y) could include a user identifier indicating a user that launched the container 190(y), a Boolean indicating whether Java is installed in the container 190(y), the resources requested when the container 190(y) was launched (e.g., number of logical processors, amount of network resources, amount of the processor memory 116, amount of disk, etc.), an application name, and so forth. The isolation application 170(x) may acquire the container metadata in any technically feasible fashion. For instance, in various embodiments, the isolation application 170(x) acquires the container metadata that is the yth entry in the container metadata vector during a container initiation event that initiates the execution of the container 190(y) on the NUMA microprocessor instance 110(x).
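For illustration, the kind of per-container metadata described above could be represented as a simple record; the field names below are hypothetical and mirror only the examples listed in the preceding paragraph.

```python
# Hedged sketch of a per-container metadata record. Field names are hypothetical
# and reflect the examples given above; real container metadata may differ.

from dataclasses import dataclass

@dataclass
class ContainerMetadata:
    user_id: str                # user that launched the container
    java_installed: bool        # whether Java is installed in the container
    requested_processors: int   # logical processors requested at launch
    requested_memory_gb: float  # processor memory requested at launch
    requested_disk_gb: float    # disk requested at launch
    application_name: str       # name of the application in the container
```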
[0044] As shown, the training database 130 includes, without limitation, a global processor usage matrix 132 and a global container metadata vector 134. The global processor usage matrix 132 is an aggregation of the different processor usage matrices associated with the different NUMA multiprocessor instances 110. In particular, the global processor usage matrix 132 includes, without limitation, a processor usage time series for each of any number of the containers 190 associated with or previously associated with the NUMA microprocessor instances 110. Each row in the global processor usage matrix 132 is associated with a different container 190 and each column in the global processor usage matrix 132 is associated with a different time. Similarly, the global container metadata vector 134 is an aggregation of the different container metadata vectors associated with the different NUMA microprocessor instances 110. Each element in the global container metadata vector 134 contains container metadata for a different container 190.
[0045] The isolation application 170(x) may update the training database 130 in any technically feasible fashion. For instance, in some embodiments, the isolation application 170(x) transmits the processor usage matrix and the container metadata vector to a training database management application (not shown). For each new container 190, the training database management application adds the container metadata as a new entry in the global container metadata vector 134. The training database management application also adds the processor usage time series for the
new container 190 (included in the processor usage matrix) to the global processor usage matrix 132. For each existing container 190, the training database
management application appends new data from the processor usage time series included in the processor usage matrix to the corresponding processor usage time series included in the global processor usage matrix 132. In alternate embodiments, the isolation application 170(x) directly updates the training database 130 to reflect data acquired since executing the previous training database update process.
[0046] As depicted via dashed lines, a training application 140 generates a processor usage model 150 based on the training database 130. For explanatory purposes only, the training application 140 is depicted within the container 190(L) associated with the NUMA microprocessor instance 110(1). At any given point in time, the training application 140 may be stored in any portion of the processor memory 116(1) and may execute on any of the clustered processors 112(1). In alternate embodiments, any portion (including all) of the training application 140 may execute outside any container 190 and/or on any device capable of executing instructions.
[0047] For each combination of one of the containers 190 represented in the training database 130 and one of any number of prediction times, the training application 140 configures a feature extractor 136 to generate a different feature set (not shown in Figure 1). For example, if the training database 130 represented 100,342 containers 190 and the total number of prediction times was 80, then the feature extractor 136 would execute 8,027,360 times and generate 8,027,360 feature sets.
[0048] Each feature set includes, without limitation, any number of time series features and any number of contextual features. Each of the contextual features corresponds to the associated container 190 or the launch of the associated container 190 and is derived from the associated container metadata. By contrast, each of the time series features corresponds to the processor usages of the associated container 190 or time-related attribute(s) of the associated container 190. Examples of time series features could include, without limitation, the average processor usage in the last minute, the median processor usage from (the prediction time - 20 minutes) to the prediction time, the median processor usage from (the prediction time - 10 minutes) to the prediction time, the current hour of the day, the current day of the week, etc.
[0049] The prediction time is the end time of a historical time window that has a duration equal to a historical window duration, and a start time of (the prediction time - the historical window duration). For example, suppose that the historical window duration was 60 minutes, a first feature set was associated with the container 190(1) and a prediction time of 60 minutes, and a second feature set was associated with the container 190(1) and a prediction time of 120 minutes. The first feature set would include features derived from the container metadata associated with the container 190(1) and the portion of the processor usage time series associated with the container 190(1) from 0 to 60 minutes. By contrast, the second feature set would include features derived from the container metadata associated with the container 190(1) and the portion of the processor usage time series associated with the container 190(1) from 60 to 120 minutes.
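A simplified sketch of how one feature set might be assembled from a per-minute processor usage time series and a prediction time is shown below; the particular features and window lengths are assumptions drawn from the examples above, not a specification of the feature extractor 136.

```python
# Hedged sketch of assembling one feature set from a per-minute usage time series
# and a prediction time. The feature list and window lengths are assumptions based
# on the examples above; the feature extractor 136 is not specified here.

from statistics import median

def extract_features(usage_by_minute: list, prediction_minute: int,
                     requested_processors: int, hour_of_day: int,
                     day_of_week: int) -> dict:
    history = usage_by_minute[:prediction_minute]   # historical window only
    return {
        "avg_usage_last_minute": history[-1],
        "median_usage_last_10_min": median(history[-10:]),
        "median_usage_last_20_min": median(history[-20:]),
        "hour_of_day": float(hour_of_day),
        "day_of_week": float(day_of_week),
        # Contextual feature derived from container metadata.
        "requested_processors": float(requested_processors),
    }
```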
[0050] For each feature set, the training application 140 also computes the ground truth label used to train the processor usage model 150 to compute a forecasted processor usage of the associated container 190 relative to the associated prediction time. The training application 140 may define the forecasted processor usage and compute the ground truth labels in any technically feasible fashion that is consistent with the processor usage model 150 and the training database 130.
[0051] For instance, in some embodiments, the processor usage model 150 is a Gradient Boosted Regression Tree (“GBRT”) conditional regression model and the training application 140 defines the forecasted processor usage as the P95 processor usage in the ten minutes following the prediction time. As persons skilled in the art will recognize, the forecasted processor usage is therefore the value for which the probability that the actual processor usage in the next ten minutes reaches (or exceeds) the value is 5%. For example, if the forecasted processor usage is 4 logical processors, then the probability that the actual processor usage in the next ten minutes reaches (or exceeds) 4 logical processors is 5%.
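For illustration, a conditional quantile regression of this kind can be trained with an off-the-shelf gradient-boosted regression tree using a quantile loss at the 95th percentile; the sketch below uses scikit-learn with synthetic data and is not the patent's implementation.

```python
# Hedged sketch of training a GBRT-style model that forecasts the P95 of near-term
# processor usage, using scikit-learn's quantile loss. The synthetic feature matrix
# and labels stand in for the outputs of the feature extractor and labeling steps.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(seed=0)
feature_matrix = rng.random((1000, 6))       # one row per (container, prediction time)
labels = 4.0 * rng.random(1000)              # observed usage in the following window

processor_usage_model = GradientBoostingRegressor(loss="quantile", alpha=0.95)
processor_usage_model.fit(feature_matrix, labels)

# Forecasted processor usage for one feature set, in units of logical processors.
forecast = processor_usage_model.predict(feature_matrix[:1])
print(float(forecast[0]))
```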
[0052] To compute the ground truth labels, the training application 140 may implement any technically feasible algorithm that is consistent with the processor usage delta that defines the granularity of each processor usage time series. For instance, in some embodiments, the training application 140 sets each ground truth label equal to a random value of minute-average processor usage in the next 10 minutes. If the processor usage delta is 1 minute, then the training application 140
sets the ground-truth label for a feature set associated with the container 190(y) and a prediction time t to one of the ten processor usages from the time (t+1) to the time (t+10) in the processor usage time series associated with the container 190(y).
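The labeling rule described above, selecting a random minute-average usage from the ten minutes that follow the prediction time, can be sketched as follows.

```python
# Minimal sketch of the ground-truth labeling rule described above: pick one of
# the ten per-minute usage values immediately following the prediction time.

import random

def ground_truth_label(usage_by_minute: list, prediction_minute: int) -> float:
    future = usage_by_minute[prediction_minute + 1: prediction_minute + 11]
    return random.choice(future)
```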
[0053] In alternate embodiments, the training application 140 may define the forecasted processor usage as a statistical level of confidence (i.e., Pxx) based on any quantile (e.g., 50 instead of 95) and any length of time (e.g., 15 minutes instead of 10 minutes) following the prediction time. In other alternate embodiments, the training application 140 may execute any number and type of machine learning operations to train any type of processor usage model 150 that predicts any type of statistic or characteristic associated with processor usages for any type of workload at any future time or over any future time window. Notably, the forecasted processor usage may be defined based on the intended application of the forecasted processor usages and/or characteristics of the processor usages. For instance, defining the forecasted processor usage as an average instead of an upper-quantile would result in a processor usage model 150 that was less conservative with respect to data heterogeneity.
[0054] The training application 140 may perform any number and type of machine learning operations to generate the processor usage model 150 based on the feature sets and the associated ground truth labels. For instance, in some embodiments, the training application 140 may execute any number and type of statistical algorithms for learning and regression to generate a trained conditional regression model or a trained conditional quantile regression model (e.g., a GBRT conditional regression model) that is the processor usage model 150. After generating the processor usage model 150, the training application 140 transmits the processor usage model 150 to the predictive assignment application 160.
[0055] At intervals of a training delta (e.g., 3 hours), the training application 140 regenerates the processor usage model 150 based on the training database 130. The training application 140 may acquire the training delta in any technically feasible fashion. For instance, in some embodiments, the training application 140 acquires the training delta via a container management subsystem. Because each of the isolation applications 170 periodically updates the training database 130, periodically re-training the processor usage model 150 ensures that the processor usage model 150 reflects current conditions in the system 100. The training application 140 then
transmits the updated processor usage model 150 to the predictive assignment application 160.
[0056] As described in greater detail in conjunction with Figures 3 and 4, the isolation application 170(x) executes the processor assignment process in response to each reassignment event associated with the NUMA microprocessor instance 110(x). A reassignment event is a container initiation event, a container termination event, or a re-balancing event. A container initiation event is associated with a container request that solicits a specified number of logical processors to execute a new container 190(y) within the NUMA microprocessor instance 110(x). By contrast, a container termination event is associated with the termination of the container 190(z) that was previously executing within the NUMA microprocessor instance 110(x). A re-balancing event is an on-demand trigger for a new processor
assignment process that reflects any changes in the processor usages of the containers 190 since the last processor assignment process.
[0057] The isolation application 170(x) may acquire the reassignment events in any technically feasible fashion. Further, in some embodiments, the isolation application 170(x) or a container management subsystem may generate a re-balancing event to trigger a processor assignment process when the time elapsed since the last execution of the processor assignment process exceeds a maximum reassignment delta.
[0058] To initiate a processor assignment process, the isolation application 170(x) updates a request dataset (not shown in Figure 1), generates an assignment request specifying the request dataset, and transmits the assignment request to the predictive assignment application 160. The request dataset includes information describing the containers 190 associated with the NUMA microprocessor instance 110(x), such as the processor usage matrix and the container metadata vector. In response to the assignment request, the predictive assignment application 160 generates a current assignment matrix (not shown in Figure 1) that specifies optimized processor assignments for the containers associated with the NUMA microprocessor instance 110(x).
[0059] For explanatory purposes only, the predictive assignment application 160 and the processor usage model 150 are depicted within the container 190(M)
associated with the NUMA microprocessor instance 110(2). At any given point in time, each of the predictive assignment application 160 and the processor usage model 150 may be stored in any portion of the processor memory 116(2) and may execute on any of the clustered processors 112(2). Advantageously, because the isolation application 170(2) generates optimized processor assignments for the NUMA microprocessor instance 110(2), the predictive assignment application 160 optimizes the logical processor assignments associated with the container 190(M). The predictive assignment application 160 can therefore self-isolate from other software applications executing in other containers 190. In alternate embodiments, any portion (including all) of the predictive assignment application 160 and/or the processor usage model 150 may execute outside any container 190 and/or on any device capable of executing instructions.
[0060] As shown, the predictive assignment application 160 includes, without limitation, the feature extractor 136 and an integer programming engine 162. For each of the containers 190 associated with the NUMA microprocessor instance 110(x), the predictive assignment application 160 configures the feature extractor 136 to generate a different feature set based on the associated container metadata, the associated processor usage time series, and a prediction time equal to the current time. Subsequently, for each container 190(y), the predictive assignment application 160 provides the associated feature set as an input to the processor usage model 150 and stores the resulting output of the processor usage model 150 as the forecasted processor usage of the container 190(y).
[0061] The predictive assignment application 160 then configures the integer programming engine 162 to generate the current assignment matrix based on the forecasted processor usages and configuration data associated with the NUMA microprocessor instance 110 and the associated containers 190. The integer programming engine 162 executes one or more integer programming algorithms to optimize a binary assignment matrix based on the forecasted processor usages and a cost function that estimates a performance cost associated with cache interference.
In particular, the cost function includes terms associated with goals of balancing the predicted pressures across each of the different levels of caches included in the NUMA microprocessor instance 110 based on the forecasted processor usages. For each of the containers 190 associated with the NUMA microprocessor instance 110,
the binary assignment matrix specifies at least one processor assignment. The predictive assignment application 160 transmits the binary assignment matrix as the current assignment matrix to the isolation application 170 that issued the assignment request.
[0062] As shown, the isolation application 170 generates a processor assignment specification 172 based on the current assignment matrix and then transmits the processor assignment specification 172 to the process scheduler 180. The processor assignment specification 172 configures the process scheduler 180 to assign the containers 190 to the logical processors as per the current assignment matrix. The processor assignment specification 172 may configure the process scheduler 180 in any technically feasible fashion.
[0063] It will be appreciated that the system 100 shown herein is illustrative and that variations and modifications are possible. For example, the functionality provided by the predictive assignment application 160, the integer programming engine 162, the feature extractor 136, the isolation application 170 and the process scheduler 180 as described herein may be integrated into or distributed across any number of software applications (including one) and any number of components of the computer system 100. For instance, in some embodiments, the predictive assignment application 160 may reside and/or execute externally to the NUMA microprocessor instance 110. In various embodiments, the process scheduler 180 may be omitted from the NUMA microprocessor instance 110, and the isolation application 170 may include the functionality of the process scheduler 180. Further, the connection topology between the various units in Figure 1 may be modified as desired.
[0064] Figure 2 is a more detailed illustration of one of the non-uniform memory access (NUMA) microprocessor instances 110 of Figure 1, according to various embodiments of the present invention. As shown, the NUMA microprocessor instance 110 includes, without limitation, any number of sockets 220, where each of the sockets 220 is a different physical block of processors and memory.
[0065] Each of the sockets 220 includes, without limitation, a lowest-level cache (LLC) 240 and any number of cores 230, level one (L1) caches 232, and level two (L2) caches 234. Each of the cores 230 includes, without limitation, two logical processors 212. Each of the cores 230 may be any instruction execution system,
apparatus, or device capable of executing instructions and having hyper-threading capabilities. For example, each of the cores 230 could comprise a different central processing unit (CPU) having hyper-threading capabilities. Each of the logical processors 212 included in the core 230 is a virtual processing core that is capable of executing a different hyper-thread. In alternate embodiments, each of the cores 230 may include any number (including one) of the logical processors 212. In some embodiments, the cores 230 do not have hyper-threading capabilities and therefore each of the cores 230 is also a single logical processor 212.
[0066] The total number of the sockets 220 included in the NUMA microprocessor instance 110 is N, where N is any positive integer. Within each of the sockets 220, the total number of each of the cores 230, the L1 caches 232, and the L2 caches 234 is P/2, where P is any positive integer, and the total number of the logical processors 212 is P. Consequently, within the NUMA microprocessor instance 110, the total number of the LLCs 240 is N; the total number of each of the cores 230, the L1 caches 232, and the L2 caches 234 is (N*P/2); and the total number of the logical processors 212 is (N*P). Referring to Figure 1, the clustered processors 112 of the microprocessor instance 110 include (N*P) logical processors 212 organized into (N*P/2) cores 230. The processor memory 116 of the microprocessor instance 110 includes N LLCs 240, (N*P/2) L1 caches 232, and (N*P/2) L2 caches 234.
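For example, assuming purely illustrative values of N = 2 sockets 220 and P = 16 logical processors 212 per socket 220, the NUMA microprocessor instance 110 would include 2 LLCs 240, 16 cores 230, 16 L1 caches 232, 16 L2 caches 234, and 32 logical processors 212.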
[0067] Each of the caches is a different block of memory, where the cache level (e.g., L1, L2, lowest-level) indicates the proximity to the associated cores 230. In operation, each of the caches is shared between multiple logical processors 212 in a hierarchical fashion. As shown, the LLC 240 included in the socket 220(g) is shared between the P logical processors 212 included in the socket 220(g). The L1 cache 232(h) and the L2 cache 234(h) are associated with the core 230(h) and are shared between the two logical processors 212 included in the core 230(h). Consequently, each of the logical processors 212 shares one of the L1 caches 232 and one of the L2 caches 234 with another logical processor 212 and shares one of the LLCs 240 with (P-1) other logical processors 212.
[0068] For explanatory purposes only, the logical processors 212(1)-(4) and 212(P) are annotated with dotted boxes depicting the assigned containers 190 as per the processor assignment specification 172. As shown, the container 190(1) is assigned to the logical processors 212(1)-212(4), and the container 190(K) is assigned to the logical processor 212(P). Accordingly, the LLC 240 is shared by at least four threads associated with the container 190(1) and one thread associated with the container 190(K). Further, the L1 cache 232(1) and the L2 cache 234(1) are shared between two threads associated with the container 190(1). Similarly, the L1 cache 232(2) and the L2 cache 234(2) are shared between two threads associated with the container 190(1). By contrast, the L1 cache 232(P/2) and the L2 cache 234(P/2) are used by a single thread associated with the container 190(K).
Assigning Processors Based on Forecasted Use
[0069] Figure 3 is a more detailed illustration of the predictive assignment application 160 of Figure 1 , according to various embodiments of the present invention. For explanatory purposes, Figure 3 describes the predictive assignment application 160 in the context of a processor assignment process that generates the processor assignment specification 172 for the NUMA microprocessor instance 110 of Figure 2. Figure 3 depicts the processor assignment process as a series of numbered bubbles.
[0070] As depicted with a bubble numbered 1, a container management subsystem 320 transmits a reassignment event 324 and container metadata 322 to the isolation application 170. The container management subsystem 320 may perform any number and type of activities related to generating and executing the containers 190 on any number and type of microprocessor systems, including any number of the NUMA microprocessor instances 110. The reassignment event 324 may be a container initiation event, a container termination event, or a re-balancing event.
[0071] A container initiation event is associated with a container request (not shown) to execute a new container 190 within the NUMA microprocessor instance 110. The container request is associated with the container metadata 322 that specifies attributes of the new container 190 and/or the initiation of the new container 190. In particular, the container metadata 322 specifies the number of logical processors 212 that the container request solicits for execution of the new container 190. A container termination event is associated with the termination of the container 190 that was previously executing within the NUMA microprocessor instance 110. A re-balancing event is an on-demand trigger for a new processor assignment process that reflects any changes in the processor usages of the containers 190 since the last processor assignment process.
[0072] As shown, the isolation application 170 includes, without limitation, a container count 332, a request vector 330, a container metadata vector 338, a processor usage matrix 336, a current assignment matrix 334, and a scheduler configuration engine 390. The container count 332 is equal to the total number of the containers 190 associated with the NUMA microprocessor instance 110. With reference to Figure 2, the container count 332 is K (assuming that none of the containers 190(1)-190(K) have terminated). As depicted in Figure 3, the container count 332 is symbolized as the variable "k".
[0073] For each of the containers 190 associated with the NUMA microprocessor instance 110 executing the isolation application 170, the request vector 330 specifies the requested processor count of the associated container request. The request vector 330 is symbolized as "r". The request vector 330 is assumed to have at least one feasible solution, expressed as the following equation (1), where d is the total number of logical processors 212 in the NUMA microprocessor instance 110:
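One form of equation (1) consistent with this description, offered as a reconstruction, is that the total requested processor count does not exceed the number of available logical processors 212:

$$\sum_{j=1}^{k} r_j \le d \qquad \text{(1)}$$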
[0074] Prior to receiving the first container initiation event, the isolation application 170 sets the container count 332 equal to zero and sets the request vector 330 equal to NULL. Upon receiving a container initiation event, the isolation application 170 increments the container count 332 and adds the requested processor count of the associated container request to the request vector 330. For explanatory purposes, the requested processor count associated with the container 190(x) is the xth element "r_x" of the request vector 330. In a complementary fashion, upon receiving a container termination event, the isolation application 170 decrements the container count 332 and removes the associated requested processor count from the request vector 330.
[0075] The container metadata vector 338 contains a different entry for each of the containers 190 associated with the NUMA microprocessor instance 110. If the NUMA microprocessor instance 110 is associated with K containers 190, then the container metadata vector 338 includes K entries. Each of the entries in the container metadata vector 338 specifies any amount of the container metadata 322 and any amount of data derived from the container metadata 322 for the associated container 190. Upon receiving a container initiation event, the isolation application 170 generates a new
entry in the container metadata vector 338 based on the associated container metadata 322.
[0076] The processor usage matrix 336 includes processor usages for the containers 190 that are currently executing or have previously executed on the NUMA microprocessor instance 110. As previously described in conjunction with Figure 1, the isolation application 170 periodically generates the processor usages based on processing times acquired via the processing time file 124. Each row in the processor usage matrix 336 corresponds to a processor usage time series for a different container 190 and each column corresponds to a different time. In some
embodiments, the processor usage matrix 336 may have a finite number of columns and the isolation application 170 may discard the oldest column when generating a new column.
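For illustration only, the following Python sketch shows one way such a rolling processor usage matrix could be maintained; the class and parameter names (e.g., UsageHistory, max_columns) are hypothetical and are not part of this disclosure.

```python
import numpy as np

class UsageHistory:
    """Hypothetical sketch of the processor usage matrix 336: one row per
    container, one column per sampling time, capped at max_columns."""

    def __init__(self, num_containers: int, max_columns: int = 60):
        self.max_columns = max_columns
        self.matrix = np.empty((num_containers, 0))

    def append_sample(self, usages) -> None:
        """Append one column of measured processor usages (one value per
        container), discarding the oldest column once the cap is reached."""
        column = np.asarray(usages, dtype=float).reshape(-1, 1)
        self.matrix = np.hstack([self.matrix, column])
        if self.matrix.shape[1] > self.max_columns:
            self.matrix = self.matrix[:, 1:]  # drop the oldest sample

# Example: two containers sampled three times with a two-column cap
history = UsageHistory(num_containers=2, max_columns=2)
for sample in ([1.5, 0.2], [1.7, 0.3], [2.0, 0.1]):
    history.append_sample(sample)
print(history.matrix)  # only the two most recent columns are retained
```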
[0077] The current assignment matrix 334, symbolized as "M̄," specifies the assignments of the containers 190 to the logical processors 212 in the associated NUMA microprocessor instance 110. Prior to receiving the first container initiation event, the isolation application 170 sets the current assignment matrix 334 equal to NULL, corresponding to the container count 332 of 0. Subsequently, upon receiving a new container initiation event, the isolation application 170 adds a new column of zeros corresponding to the new container 190 to the current assignment matrix 334.
In a complementary fashion, upon receiving a new container termination event, the isolation application 170 removes the column corresponding to the container 190 associated with the container termination event from the current assignment matrix 334.
[0078] As shown, a request dataset 372 includes, without limitation, the container count 332, the request vector 330, the container metadata vector 338, the processor usage matrix 336, and the current assignment matrix 334. As depicted with a bubble numbered 2, after updating the request dataset 370 based on the reassignment event 324, the isolation application 170 transmits an assignment request 370 specifying the associated request dataset 370 to the predictive assignment application 160.
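For illustration only, the request dataset 372 can be pictured as a simple record bundling the five items listed above; the field names in the following Python sketch are assumptions rather than the data structures actually used.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class RequestDataset:
    """Hypothetical container for the data sent with an assignment request."""
    container_count: int                 # k: number of containers on the instance
    request_vector: np.ndarray           # r: requested logical processors per container
    container_metadata: list             # one metadata entry (dict) per container
    processor_usage_matrix: np.ndarray   # rows = containers, columns = sample times
    current_assignment_matrix: Optional[np.ndarray]  # d x k binary matrix, or None

def build_assignment_request(dataset: RequestDataset) -> dict:
    """Wrap the request dataset in an assignment request payload (illustrative)."""
    return {"type": "assignment_request", "dataset": dataset}
```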
[0079] As depicted with a bubble numbered 3 and as described previously herein in conjunction with Figure 1 , the predictive assignment application 160 configures the feature extractor 136 to generate a feature set 350(y) for each of the containers
190(y) included in the request vector 330. For the container 190(y), the predictive assignment application 160 configures the feature extractor 136 to generate the feature set 350(y) based on the yth entry in the container metadata vector 338, the processor usage time series associated with the container 190(y), and a prediction time equal to the current time.
[0080] The feature set 350(y) includes, without limitation, any number of contextual features 354 and any number of time series features 352. Each of the contextual features 354 is associated with the container 190(y) and is derived from the associated container metadata included in the container metadata vector 338. Examples of contextual features 354 include, without limitation, a user identifier indicating the user that launched the container 190(y), a Boolean indicating whether Java is installed in the container 190(y), the resources requested when the container 190(y) was launched (e.g., number of logical processors, amount of network resources, amount of the memory 116, amount of disk, etc.), an application name, etc.
[0081] By contrast, each of the time series features 352 is associated with the past processor usage of the container 190(y) or a time-related attribute of the container 190(y). Examples of time-series features include, without limitation, the average processor usage in the last minute, the median processor usage from (the prediction time - 20 minutes) to the prediction time, the median processor usage from (the prediction time - 10 minutes) to the prediction time, the current hour of the day, the current day of the week, etc.
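For illustration only, the following Python sketch shows one plausible way to derive contextual features and time-series features of the kinds listed above; the specific feature names and window lengths are assumptions.

```python
from datetime import datetime, timedelta
import numpy as np

def extract_features(metadata: dict, usage_series, prediction_time: datetime) -> dict:
    """Build contextual and time-series features for one container (illustrative).

    usage_series is a list of (timestamp, measured processor usage) tuples.
    """

    def window_median(minutes: int) -> float:
        start = prediction_time - timedelta(minutes=minutes)
        values = [u for (t, u) in usage_series if start <= t <= prediction_time]
        return float(np.median(values)) if values else 0.0

    return {
        # Contextual features derived from container metadata
        "user_id": metadata.get("user_id"),
        "has_java": bool(metadata.get("java_installed", False)),
        "requested_cpus": metadata.get("requested_cpus", 0),
        "app_name": metadata.get("app_name", ""),
        # Time-series features derived from past processor usage
        "median_usage_last_10m": window_median(10),
        "median_usage_last_20m": window_median(20),
        "hour_of_day": prediction_time.hour,
        "day_of_week": prediction_time.weekday(),
    }
```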
[0082] As depicted with a bubble numbered 4, the predictive assignment application 160 configures the processor usage model 150 to generate a forecasted processor usage vector 360 based on the feature sets 350 (depicted with a bubble numbered 5). More precisely, the predictive assignment application 160 transmits each of the feature sets 350 to the processor usage model 150 as a different input initiating a different execution of the processor usage model 150. Accordingly, for each of the containers 190(y) specified in the request vector 330, the processor usage model 150 generates a forecasted processor usage 362(y) based on the feature set 350(y).
[0083] The forecasted processor usage vector 360 includes, without limitation, the forecasted processor usages 362(1)-362(K). As described previously in conjunction with Figure 1, the forecasted processor usage 362(y) is the predicted P95 processor usage of the container 190(y) in the ten minutes following the prediction time (i.e., the current time). In alternate embodiments, the processor usage model 150 may compute any type of forecasted processor usage 362 in any technically feasible fashion. For instance, in some embodiments, the forecasted processor usage 362(y) may be a prediction of the average processor usage of the container 190(y) in the next hour.
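For illustration only, one way to obtain a P95-style forecast of this kind is to train a regression model with a quantile loss, as in the following sketch; the choice of scikit-learn and of a gradient boosting model is an assumption and is not prescribed herein.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative training data: rows of numeric feature vectors and the observed
# processor usage over the following ten-minute window (shapes are arbitrary)
rng = np.random.default_rng(0)
X = rng.random((500, 8))
y = rng.random(500) * 4.0

# alpha=0.95 trains the model to predict the 95th percentile of usage
p95_model = GradientBoostingRegressor(loss="quantile", alpha=0.95)
p95_model.fit(X, y)

forecasted_usage = p95_model.predict(X[:1])  # forecast for one container
```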
[0084] As depicted with a bubble numbered 6, the predictive assignment application 160 configures the integer programming engine 162 to generate the binary assignment matrix 382 based on the forecasted processor usage vector 360, the request vector 330, the container count 332, the current assignment matrix 334, and an instance configuration 342. The instance configuration 342 includes, without limitation, a socket count, a logical processor count, and a core count. The socket count, the logical processor count, and the core count specify, respectively, the number of the sockets 220, the number of the logical processors 212, and the number of the cores 230 included in the NUMA microprocessor instance 110.
[0085] The predictive assignment application 160 may acquire the instance configuration 342 in addition to any other relevant information associated with the NUMA microprocessor instance 110 in any technically feasible fashion. For instance, in some embodiments, the predictive assignment application 160 receives the instance configuration 342 from the isolation application 170 or via an application programming interface (“API”).
[0086] As described in greater detail in conjunction with Figures 4-5, the integer programming engine 162 executes one or more integer programming algorithms to optimize the binary assignment matrix 382 based on a cost function that estimates a performance cost associated with cache interference. Importantly, the cost function includes, without limitation, a term associated with a goal of balancing the predicted pressures across the LLCs 240 and a term associated with a goal of balancing the predicted pressures across the L1 caches 232 and the L2 caches 234 based on the forecasted processor usage vector 360.
[0087] As depicted with a bubble numbered 7, the predictive assignment application 160 transmits the binary assignment matrix 382 to the isolation application 170 as a new current assignment matrix 334. Subsequently and as depicted with a bubble numbered 8, the scheduler configuration engine 390 generates the processor assignment specification 172 based on the current assignment matrix 334. The scheduler configuration engine 390 then transmits the processor assignment specification 172 to the process scheduler 180.
[0088] In general, the processor assignment specification 172 configures the process scheduler 180 to assign the containers 190 to the logical processors 212 as per the assignments specified in the current assignment matrix 334. The scheduler configuration engine 390 may generate any type of processor assignment
specification 172, at any level of granularity, and in any technically feasible fashion that is consistent with the process scheduler 180 executing on the NUMA
microprocessor instance 110.
[0089] For instance, in some embodiments, the processor assignment
specification 172 includes any number and type of processor affinity commands that bind a process (e.g., the container 190) or a thread to a specific processor (e.g., the logical processor 212) or a range of processors. In Linux, for example, each container 190 may be defined as an individual cgroup and the processor assignment specification 172 could include any number of "cpuset" commands, where each cpuset command specifies the logical processors 212 that a particular cgroup is to execute on.
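For illustration only, the following Python sketch shows one way a cpuset-style affinity restriction could be written for a cgroup on Linux; the cgroup path and names are hypothetical, and a production system would typically rely on the container runtime rather than writing the file directly.

```python
from pathlib import Path

def pin_container_to_cpus(cgroup_name: str, cpu_list: str,
                          cgroup_root: str = "/sys/fs/cgroup/cpuset") -> None:
    """Restrict a cgroup (e.g., one container) to the given logical processors.

    cpu_list uses the kernel's list format, e.g. "0-3" or "0,2,4".
    The cgroup name below is a hypothetical placeholder.
    """
    cpuset_file = Path(cgroup_root) / cgroup_name / "cpuset.cpus"
    cpuset_file.write_text(cpu_list)

# Example (hypothetical container cgroup): bind to logical processors 0-3
# pin_container_to_cpus("container_1", "0-3")
```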
[0090] The isolation application 170 generates a new processor assignment specification 172 whenever the isolation application 170 acquires a new reassignment event 324. However, as described in greater detail in conjunction with Figure 5, one of the terms in the cost function penalizes migrating containers 190 from one logical processor 212 to another logical processor 212, thereby discouraging the isolation application 170 from modifying processor assignments for currently executing containers 190.
[0091] Note that the techniques described herein are illustrative rather than restrictive, and may be altered without departing from the broader spirit and scope of the invention. Many modifications and variations will be apparent to those of ordinary
skill in the art without departing from the scope and spirit of the described
embodiments and techniques. As a general matter, the techniques outlined herein are applicable to assigning any type of workload to any type of processor based on a function associated with accessing shared caches in a NUMA architecture and forecasted processor usages.
[0092] Figure 4 is a more detailed illustration of the integer programming engine 162 of Figure 3, according to various embodiments of the present invention. For explanatory purposes only, the predictive assignment application 160 executes the integer programming engine 162 in response to receiving the assignment request 370 and the associated request dataset 372 from the isolation application 170 executing on the NUMA microprocessor instance 110 of Figure 2.
[0093] As shown, the integer programming engine 162 generates the binary assignment matrix 382 based on the forecasted processor usage vector 360, the request vector 330, the container count 332, the current assignment matrix 334, and the instance configuration 342. The binary assignment matrix 382 is symbolized as "M", the forecasted processor usage vector 360 is symbolized as "p", the request vector 330 is symbolized as "r", the container count 332 is symbolized as "k", and the current assignment matrix 334 is symbolized as "M̄".
[0094] The instance configuration 342 includes, without limitation, a socket count 412, a logical processor count 414, and a core count 416. The socket count 412 specifies the number of the sockets 220 included in the NUMA microprocessor instance 110. The logical processor count 414 specifies the number of the logical processors 212 included in the NUMA microprocessor instance 110. The core count 416 specifies the number of the cores 230 included in the NUMA microprocessor instance 110. With reference to Figure 2, the socket count 412 is N, the logical processor count 414 is the product of N and P, and the core count 416 is half of the logical processor count 414. As depicted in Figure 4, the socket count 412 is symbolized as a constant“n”, the logical processor count 414 is symbolized as a constant“d”, and the core count 416 is symbolized as a constant“c”.
[0095] The binary assignment matrix 382 is symbolized as "M" and specifies the assignments of the containers 190 to the logical processors 212. The ith row of the binary assignment matrix 382 represents the logical processor 212(i) and the jth column of the binary assignment matrix 382 represents the container 190(j). If the container 190(j) is assigned to the logical processor 212(i), then the element M_i,j included in the binary assignment matrix 382 is 1. Otherwise, the element M_i,j included in the binary assignment matrix 382 is 0.
[0096] The dimensions of the binary assignment matrix 382 are d (the logical processor count 414) rows by k (the container count 332) columns, and each element included in the binary assignment matrix 382 is either zero or one. Consequently,
$M \in \{0,1\}^{d \times k}$. Note that, as previously described herein, the term "assignment" is symmetrical. If M_i,j is equal to one, then the container 190(j) is assigned to the logical processor 212(i), and the logical processor 212(i) is assigned to the container 190(j). Importantly, each of the logical processors 212 is assigned to at most one of the containers 190, while each of the containers 190 can be assigned to multiple logical processors 212, and each logical processor 212 can execute a different thread.
[0097] The integer programming engine 162 implements an overall goal 440 via a cost function 442, an optimization criterion 444, constraints 446(1) and 446(2), and the current assignment matrix 334. The overall goal 440 is to optimize the binary assignment matrix 382 with respect to the cost function 442 while meeting the requested processor counts 422 and assigning at most one of the containers 190 to each of the logical processors 212.
[0098] The cost function 442 is symbolized as "C(M)" and estimates a cost associated with cache interference for the binary assignment matrix 382. The cost function 442 is described in greater detail in conjunction with Figure 5. The optimization criterion 444 specifies that the integer programming engine 162 is to search for the binary assignment matrix 382 that minimizes the cost function 442 under a set of constraints that includes, without limitation, the constraints 446. The optimization criterion 444 can be expressed as the following equation (2):

$$\min_{M} C(M) \qquad \text{(2)}$$
[0099] The constraint 446(1) specifies that, for each of the containers 190, a valid binary assignment matrix 382 provides the requested processor count specified in the request vector 330. The constraint 446(1) can be expressed as the following equation (3a):

$$M^{\top}\mathbf{1}_{d} = r \qquad \text{(3a)}$$

As referred to herein, $\mathbf{1}_{l} \in \mathbb{R}^{l}$ is the constant vector of one values of dimension $l$.
[0100] The constraint 446(2) specifies that at most one container 190 can be assigned to each of the logical processors 212. The constraint 446(2) can be expressed as the following equation (3b):

$$M\,\mathbf{1}_{k} \le \mathbf{1}_{d} \qquad \text{(3b)}$$
[0101] In operation, to achieve the overall goal 440, the integer programming engine 162 implements any number and type of integer programming operations based on the cost function 442, the optimization criterion 444, the constraints 446(1) and 446(2), and the current assignment matrix 334. For instance, in some
embodiments, the integer programming engine 162 implements versions of the cost function 442, the optimization criterion 444, and the constraints 446 that are
amenable to solution using integer linear programming. The integer programming engine 162 executes a solver that implements any number and combination of integer linear programming techniques to efficiently optimize the binary assignment matrix 382. Examples of typical integer linear programming techniques include, without limitation, branch and bound, heuristics, cutting planes, linear programming (“LP”) relaxation, etc.
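For illustration only, the following Python sketch formulates the assignment constraints (3a) and (3b) for an off-the-shelf integer linear programming solver with a simplified placeholder objective; the use of the PuLP library and the simplified cost coefficients are assumptions, and the actual cost function 442 is substantially richer.

```python
import pulp

def assign_containers(d: int, k: int, r: list, cost: dict) -> list:
    """Solve for a d x k binary assignment matrix M (illustrative sketch only).

    d: number of logical processors, k: number of containers,
    r[j]: requested processor count of container j,
    cost[(i, j)]: placeholder cost of assigning container j to processor i.
    """
    problem = pulp.LpProblem("processor_assignment", pulp.LpMinimize)
    indices = [(i, j) for i in range(d) for j in range(k)]
    M = pulp.LpVariable.dicts("M", indices, cat="Binary")

    # Simplified linear objective standing in for the cost function C(M)
    problem += pulp.lpSum(cost.get((i, j), 0.0) * M[i, j] for (i, j) in indices)

    # Constraint (3a): each container receives exactly its requested processor count
    for j in range(k):
        problem += pulp.lpSum(M[i, j] for i in range(d)) == r[j]

    # Constraint (3b): each logical processor hosts at most one container
    for i in range(d):
        problem += pulp.lpSum(M[i, j] for j in range(k)) <= 1

    problem.solve(pulp.PULP_CBC_CMD(msg=False))
    return [[int(M[i, j].value()) for j in range(k)] for i in range(d)]

# Example: 4 logical processors, 2 containers requesting 2 and 1 processors
# matrix = assign_containers(4, 2, [2, 1], cost={(0, 0): 0.1})
```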
Estimating Cache Interference Costs
[0102] Figure 5 is a more detailed illustration of the cost function 442 of Figure 4, according to various embodiments of the present invention. As described previously herein in conjunction with Figure 4, the cost function 442 estimates the
performance impact of cache interference, and the integer programming engine 162 performs optimization operations that minimize the cost function 442. More precisely, the cost function 442 estimates a value for a cost that is correlated with the
performance impact of cache interference. As shown, the cost function 442 is a weighted sum of five different constituent costs: a NUMA cost 522, an LLC cost 532, an L1/2 cost 542, a hyper-thread cost 552, and a reshuffling cost 562. The cost function 442 can be expressed as the following equation (4):
$$C(M) = \alpha_{NU}\,C_{NU}(M) + \alpha_{LLC}\,C_{LLC}(M) + \alpha_{L1/2}\,C_{L1/2}(M) + \alpha_{O}\,C_{O}(M) + \alpha_{R}\,C_{R}(M) \qquad \text{(4)}$$
[0103] The weights $\alpha_{NU}$, $\alpha_{LLC}$, $\alpha_{L1/2}$, $\alpha_{O}$, and $\alpha_{R}$ are hyper-parameters that encode the relative contribution of each of the constituent costs to the overall performance impact associated with cache interference. Consequently, the weights reflect the relative importance of different goals associated with the constituent costs. In an order of highest importance to lowest importance, the constituent goals are a NUMA goal 520, an LLC goal 530, an L1/2 goal 540, a hyper-thread goal 550, and a reshuffling goal 560. Notably, the weights are associated with the latency costs of the various levels of the cache hierarchy, cross-socket NUMA accesses, and migration costs associated with reassigning threads. In some embodiments, the weights may be varied to reflect the characteristics of different NUMA microprocessor instances 110.
[0104] To render equation (4) amenable to solution via integer programming techniques, each of the constituent costs may be linearized as described below, resulting in a linearized form of the cost function 442 that can be expressed as the following equation (5):
$$C(M, X, Y, Z, V, U) = \alpha_{NU}\,C_{NU}(X) + \alpha_{LLC}\,C_{LLC}(Y, U) + \alpha_{L1/2}\,C_{L1/2}(Z) + \alpha_{O}\,C_{O}(M) + \alpha_{R}\,C_{R}(V) \qquad \text{(5)}$$
[0105] To facilitate the description of the constituent costs, a symbol "b" specifies the number of the logical processors 212 included in each of the sockets 220, is equal to the logical processor count 414 (d) divided by the socket count 412 (n), and is assumed to be constant across the sockets 220. Further, as depicted in Figure 2, where b = P, indexing of the logical processors 212 corresponds to the parenthetical numbering and the following convention:
• The logical processor 212(1) is the first logical processor 212 included in the core 230(1) of the socket 220(1)
• The logical processor 212(2) is the second logical processor 212 included in the core 230(1) of the socket 220(1)
• The logical processor 212(3) is the first logical processor 212 included in the core 230(2) of the socket 220(1)
• ...
• The logical processor 212(b) is the second logical processor 212 included in the last core 230(b/2) of the socket 220(1)
• The logical processor 212(b+1) is the first logical processor 212 included in the first core 230(b/2+1) of the socket 220(2)
• ...
[0106] The NUMA goal 520 and the NUMA cost 522 are defined based on a hypothesis that assigning a workload to a set of the logical processors 212 included in a single socket 220 instead of assigning the workload to a set of the logical processors 212 spread across multiple sockets 220 typically reduces cross-socket memory accesses. Accordingly, the NUMA goal 520 is to minimize cross-socket memory accesses. In a complementary fashion, the NUMA cost 522 reflects the cross-socket memory access cost of executing the container 190 on a set of the logical processors 212 that spans multiple sockets 220. Quantitatively, the NUMA cost 522 can be expressed as the following equations (6a) and (6b):
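One form of equations (6a) and (6b) that is consistent with the description of w_tj in paragraph [0107], offered here as a reconstruction rather than a verbatim reproduction, is:

$$C_{NU}(M) = \sum_{j=1}^{k} \sum_{t=1}^{n} \min\bigl(r_j - \min(r_j,\, w_{tj}),\, 1\bigr) \qquad \text{(6a)}$$

$$w_{tj} = \sum_{i=(t-1)b+1}^{t\,b} M_{i,j} \qquad \text{(6b)}$$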
[0107] As referred to herein, w_tj is the number of the logical processors 212 included in the socket 220(t) that are assigned to the container 190(j). As persons skilled in the art will recognize, if any of the logical processors 212 assigned to the container 190(j) are not included in the socket 220(t), then min(r_j - min(r_j, w_tj), 1) is equal to 1. Otherwise, min(r_j - min(r_j, w_tj), 1) is equal to 0. In this fashion, the NUMA cost 522 is related to the number of the containers 190 that span multiple sockets 220, and minimizing the NUMA cost 522 achieves the NUMA goal 520.
[0108] Linearizing the min function to re-parameterize equation (6a) results in the following equations (7a) and (7b) (introducing extra integer variables x_tj) for the NUMA cost 522:
[0109] The LLC goal 530 and the LLC cost 532 are defined based on a hypothesis that balancing the total forecasted processor usage 362 across the sockets 220 evens out the pressure on the LLCs 240 and typically avoids/reduces LLC thrashing. As referred to herein, LLC thrashing is the eviction of useful data from the LLCs 240. Accordingly, the LLC goal 530 is to balance the total forecasted processor usage 362 across the sockets 220. In a complementary fashion, the LLC cost 532 reflects the cost of uneven pressure on the LLCs 240. To encode the LLC goal 530 in a manner that is amenable to solution via integer programming techniques, the average forecasted processor usage 362 per socket 220 is introduced as the extra variable U. After further introducing extra integer variables y_t, the LLC cost 532 can be expressed as the following linearized equations (8a) and (8b):
[0110] The L1/2 goal 540 and the L1/2 cost 542 are defined based on a hypothesis that balancing the total forecasted processor usage 362 across the cores 230 evens out the pressure on the L1 caches 232 and the L2 caches 234 and typically avoids/reduces L1/2 thrashing. As referred to herein, L1/2 thrashing is the eviction of useful data from the L1 caches 232 and/or the L2 caches 234.
[0111] The number of the containers 190 that are assigned to the core 230(i) is equal to the sum of the two rows of the binary assignment matrix (M) 382 associated with the core 230(i). As described in greater detail previously herein in conjunction with Figure 2, the two rows of the binary assignment matrix 382 associated with the core 230(i) are the row (2i) associated with the logical processor 212(2i) and the row (2i+1) associated with the logical processor 212(2i+1). As a general matter, the number of the containers 190 that are assigned to the core 230(i) is either zero, one, or two, and the total forecasted processor usage 362 for the core 230(i) can be computed using the following summation (9):
[0112] If the number of the containers 190 that are assigned to the core 230(i) is two, then two threads are executing on the core 230(i), sharing both the L1 cache 232(i) and the L2 cache 234(i). Accordingly, the L1/2 cost 542 can be expressed as the following equation (10):
[0113] Linearizing the max function to re-parameterize equation (10) results in the following equations (11a) and (11b) (introducing extra integer variables z_i) for the L1/2 cost 542:
[0114] The hyper-thread goal 550 and the hyper-thread cost 552 are defined based on a hypothesis that when two hyper-threads of the same core 230 are required to co-exist, associating the hyper-threads with the same workload typically reduces cache interference. More precisely, to reduce cache interference, assigning the same container 190 to the two logical processors 212 included in the core 230(x) is preferable to assigning different containers 190 to the two logical processors 212 included in the core 230(x). Accordingly, the hyper-thread goal 550 is to maximize hyper-thread affinity.

[0115] To encode the hyper-thread goal 550 in a manner that is amenable to solution via integer programming techniques, the elements of a matrix H of natural numbers are defined based on the following equation (12), which involves the ceiling function ceil(x):
[0116] Penalizing scheduling in which assigned logical processors 212 are not contiguous and preferring scheduling of the logical processors 212 and the sockets 220 having low indices results in the following equation (13) for the hyper-thread cost 552:
[0117] Notably, equation (13) is linear in M. As persons skilled in the art will recognize, when both of the logical processors 212 in a given core 230(x) are used, the combination of the hyper-thread cost 552 and the L1/2 cost 542 predisposes the integer programming engine 162 to assign the same container 190 to both of the logical processors 212. Further, the hyper-thread cost 552 predisposes the integer programming engine 162 to generate visually "organized" processor assignments, in which assigned indices are often contiguous.
[0118] The reshuffling goal 560 and the reshuffling cost 562 are defined based on a hypothesis that when a new workload is initiated or an existing workload is terminated, retaining the processor assignments of the currently executing workloads is preferable to reassigning those workloads. Accordingly, the reshuffling goal 560 is to minimize the reshuffling of the containers 190 between the logical processors 212, and the reshuffling cost 562 penalizes processor assignments that deviate from the current assignment matrix 334. The reshuffling cost 562 can be expressed as the following equation (14):
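One plausible form of equation (14), consistent with the absolute-value linearization described in paragraph [0120] but offered only as a reconstruction, is:

$$C_{R}(M) = \sum_{i=1}^{d} \sum_{j=1}^{k} \bigl| M_{i,j} - \bar{M}_{i,j} \bigr| \qquad \text{(14)}$$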
[0119] As described previously in conjunction with Figure 3, M̄ symbolizes the current assignment matrix 334. The current assignment matrix 334 is the previous binary assignment matrix 382 augmented with a final column for a new container 190 or modified with a column filled with zeros for a terminated container 190.
[0120] Linearizing the equation (14) results in the following equations (15a) and (15b) (introducing extra integer variables v, ) for the reshuffling cost 562:
[0121] As persons skilled in the art will recognize, the equations (4)-(15b) are examples of mathematical ways to express the overall goal 440, the NUMA goal 520, the LLC goal 530, the L1/2 goal 540, the hyper-thread goal 550, and the reshuffling goal 560. In alternate embodiments, any number of the overall goal 440, the NUMA goal 520, the LLC goal 530, the L1/2 goal 540, the hyper-thread goal 550, and the reshuffling goal 560 may be expressed in different mathematical ways.
[0122] Figure 6 is a flow diagram of method steps for executing workloads on processors that share at least one cache, according to various embodiments of the present invention. Although the method steps are described with reference to the systems of Figures 1 -5, persons skilled in the art will understand that any system configured to implement the method steps, in any order, falls within the scope of the present invention. In particular, although the method steps are described in the context of the containers 190, the containers 190 may be replaced with any type of workload in alternate embodiments.
[0123] As shown, a method 600 begins at step 602, where the isolation application 170 sets the current assignment matrix 334 equal to NULL to indicate that no containers 190 are currently executing on the NUMA microprocessor instance 110.
At step 604, the isolation application 170 receives a reassignment event 324 and updates the request dataset 372 to reflect current processor usages and any changes to the containers 190 associated with the NUMA microprocessor instance 110. At step 606, the isolation application 170 transmits the assignment request 370 and the associated request dataset 372 to the predictive assignment application 160.
[0124] At step 608, the predictive assignment application 160 computes the forecasted processor usage vector 360 based on the processor usage matrix 336, the container metadata vector 338, and the processor usage model 150. As described previously in conjunction with Figure 3, the forecasted processor usage vector 360 includes the forecasted processor usage 362 for each of the containers 190
associated with the NUMA microprocessor instances 110.
[0125] At step 610, the integer programming engine 162 generates the binary assignment matrix 382 based on the forecasted processor usage vector 360 and the cost function 442. At step 612, the predictive assignment application 160 transmits the binary assignment matrix 382 to the isolation application 170 as the current assignment matrix 334. At step 614, the scheduler configuration engine 390 constrains the process scheduler 180 based on the current assignment matrix 334.
As a result, the default scheduling behavior of the process scheduler 180 is overruled, and the process scheduler 180 implements the processor assignments specified in the current assignment matrix 334.
[0126] At step 616, the isolation application 170 determines whether to continue executing. The isolation application 170 may determine whether to continue executing in any technically feasible fashion and based on any type of data. For instance, in some embodiments, the isolation application 170 may determine to cease executing based on an exit command received via an application programming interface (“API”).
[0127] If, at step 616, the isolation application 170 determines to continue executing, then the method 600 returns to step 604, where the isolation application 170 receives a new reassignment event 324. The isolation application 170 and the predictive assignment application 160 continue to cycle through steps 604-616, regenerating the current assignment matrix 334 in response to new reassignment events 324, until the isolation application 170 determines to cease executing. If, however, at step 616, the isolation application 170 determines to cease executing, then the method 600 terminates.
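For illustration only, the following Python sketch summarizes the control loop of the method 600; every function name is a hypothetical placeholder for the components described above.

```python
def isolation_loop(instance, predictive_assignment_app, process_scheduler):
    """Illustrative control loop for the isolation application (method 600)."""
    current_assignment_matrix = None  # step 602: no containers assigned yet

    while True:
        event = instance.wait_for_reassignment_event()    # step 604
        request_dataset = instance.update_request_dataset(event)

        # steps 606-612: forecast usage and solve for a new assignment matrix
        current_assignment_matrix = predictive_assignment_app.assign(request_dataset)

        # step 614: constrain the process scheduler with the new assignments
        process_scheduler.apply(current_assignment_matrix)

        if instance.should_stop():                        # step 616
            break
```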
[0128] In sum, the disclosed techniques may be used to efficiently execute workloads (e.g., tasks, processes, threads, containers, etc.) on processors
implemented in NUMA architectures. In one embodiment, a system includes a training database, a training application, a predictive assignment application, and any number of NUMA microprocessor instances each of which executes a different instance of an isolation application. For each of any number of containers, the training database includes container metadata and a processor usage time series.
The training application performs machine learning operations using contextual features and time series features derived from the training database to generate a processor usage model that computes forecasted processor usages for containers.
[0129] Each of the isolation applications tracks the processor usage of the containers executing on the associated NUMA microprocessor instance and asynchronously executes a processor assignment process in response to
reassignment events (e.g., a container initiation event, a container termination event, or a re-balancing event). To initiate a processor assignment process, the isolation application transmits an assignment request and an associated request dataset to the predictive assignment application. For each container associated with the NUMA microprocessor instance, the request dataset includes container metadata and a processor usage time series.
[0130] For each container associated with the assignment request, the predictive assignment application computes a forecasted processor usage based on the associated container metadata, the associated processor usage time series, and the processor usage model. Subsequently, an integer programming engine included in the predictive assignment application generates a current assignment matrix that minimizes a cost function based on the forecasted processor usages. The cost function includes terms that relate to a priori processor assignment goals. In particular, the cost function includes terms that relate to balancing the predicted pressures across the LLCs and across the L1 and L2 caches. The current assignment matrix specifies a set of processor assignments that assigns each of the containers to the associated requested processor count of logical processors. The isolation application then configures a processor scheduler to implement the set of processor assignments.
[0131] At least one technical advantage of the disclosed techniques relative to the prior art is that the predictive assignment application can automatically and more reliably reduce cache interference associated with co-located threads (i.e., threads sharing at least one cache) in a NUMA microprocessor instance. In particular, balancing the predicted pressures across the LLCs and across the L1 and L2 caches based on forecasted processor use can reduce cache interference in NUMA processor instances in a more systematic, data-driven fashion. Reducing cache interference improves the latency and/or throughput of the logical processors and, consequently, the time required for workloads to execute in NUMA microprocessor instances can be substantially decreased. Further, the variances in both latency and throughput are decreased, thereby increasing execution predictability and decreasing
preemptive over-provisioning. These technical advantages represent one or more technological advancements over prior art approaches.
[0132] 1. In some embodiments, a computer-implemented method for executing workloads on processors comprises computing a forecasted amount of processor use for each workload included in a first plurality of workloads using a trained machine learning model; based on the forecasted amounts of processor use, computing a performance cost estimate associated with an estimated level of cache interference arising from executing the first plurality of workloads on a first plurality of processors; and determining at least one processor assignment based on the performance cost estimate, wherein at least one processor included in the first plurality of processors is subsequently configured to execute at least a portion of a first workload included in the first plurality of workloads based on the at least one processor assignment.
[0133] 2. The computer-implemented method of clause 1, wherein the first workload comprises a container, an execution thread, or a task.
[0134] 3. The computer-implemented method of clauses 1 or 2, wherein the first plurality of processors are included in a non-uniform memory access multiprocessor instance.
[0135] 4. The computer-implemented method of any of clauses 1 -3, wherein computing the forecasted amount of processor use for the first workload comprises computing at least one time-series feature based on a measured amount of processor use associated with executing the first workload on the first plurality of processors; computing at least one contextual feature based on metadata associated with the first workload; and inputting the at least one time-series feature and the at least one contextual feature into the trained machine-learning model.
[0136] 5. The computer-implemented method of any of clauses 1 -4, wherein determining the at least one processor assignment comprises executing one or more integer programming operations based on a first binary assignment matrix to generate a second binary assignment matrix that specifies the at least one processor assignment.
[0137] 6. The computer-implemented method of any of clauses 1 -5, wherein the first plurality of processors are partitioned into at least a first subset of processors that
share a first lowest-level cache (LLC) and a second subset of processors that share a second LLC, and wherein computing the performance cost estimate comprises estimating a cache interference cost based on an imbalance in predicted pressures between the first LLC and the second LLC.
[0138] 7. The computer-implemented method of any of clauses 1 -6, wherein each processor included in the first plurality of processors is associated with a different set of one or more cache memories, and wherein computing the performance cost estimate comprises estimating a cache interference cost based on an imbalance in predicted pressures across the sets of one or more cache memories.
[0139] 8. The computer-implemented method of any of clauses 1 -7, wherein computing the performance cost estimate comprises estimating a cache sharing cost resulting from sharing at least one of a level one cache memory and a level two cache memory between a first execution thread associated with the first workload and a second execution thread associated with a second workload included in the first plurality of workloads.
[0140] 9. The computer-implemented method of any of clauses 1 -8, further comprising for each workload included in a second plurality of workloads, acquiring a set of attributes associated with the workload and a measured amount of processor use associated with executing at least a portion of the workload on at least one processor included in a second plurality of processors; and executing one or more machine-learning algorithms to generate the trained machine-learning model based on the measured amounts of processor use and the sets of attributes.
[0141] 10. The computer-implemented method of any of clauses 1-9, wherein the trained machine-learning model comprises a conditional regression model.
[0142] 11. In some embodiments, one or more non-transitory computer readable media include instructions that, when executed by one or more processors, cause the one or more processors to execute workloads on processors by performing the steps of for each workload included in a plurality of workloads, inputting at least one feature associated with the workload into a trained machine-learning model to compute a forecasted amount of processor use associated with the workload; based on the forecasted amounts of processor use, computing a performance cost estimate
associated with an estimated level of cache interference arising from executing the plurality of workloads on a plurality of processors; and determining a first plurality of processor assignments based on the performance cost estimate, wherein at least one processor included in the plurality of processors is subsequently configured to execute at least a portion of a first workload included in the plurality of workloads based on at least one processor assignment included in the first plurality of processor assignments.
[0143] 12. The one or more non-transitory computer readable media of clause 11, further comprising computing the at least one feature associated with the first workload based on a measured amount of processor use associated with executing the first workload on the plurality of processors.
[0144] 13. The one or more non-transitory computer readable media of clauses 11 or 12, further comprising computing the at least one feature associated with the first workload based on at least one of a number of requested processors associated with the first workload, an amount of requested memory associated with the first workload, a name of a software application associated with the first workload, and a user identifier associated with the first workload.
[0145] 14. The one or more non-transitory computer readable media of any of clauses 11 -13, wherein determining the first plurality of processor assignments comprises executing one or more optimization operations based on a first binary assignment matrix that specifies a second plurality of processor assignments to generate a second binary assignment matrix that specifies the first plurality of processor assignments.
[0146] 15. The one or more non-transitory computer readable media of any of clauses 11 -14, wherein the plurality of processors are partitioned into at least a first subset of processors that share a first lowest-level cache (LLC) and a second subset of processors that share a second LLC, and wherein computing the performance cost estimate comprises estimating a cache interference cost based on an imbalance in predicted pressures between the first LLC and the second LLC.
[0147] 16. The one or more non-transitory computer readable media of any of clauses 11 -15, wherein each processor included in the plurality of processors is
associated with a different set of one or more cache memories, and wherein computing the performance cost estimate comprises estimating a cache interference cost based on an imbalance in predicted pressures across the sets of one or more cache memories.
[0148] 17. The one or more non-transitory computer readable media of any of clauses 11-16, wherein the plurality of processors are partitioned into at least a first subset of processors that are included in a first socket and a second subset of processors that are included in a second socket, and wherein computing the performance cost estimate comprises estimating a cross-socket memory access cost resulting from executing a first execution thread associated with the first workload on a first processor included in the first subset of processors while also executing a second execution thread associated with the first workload on a second processor included in the second subset of processors.
[0149] 18. The one or more non-transitory computer readable media of any of clauses 11-17, wherein computing the performance cost estimate comprises estimating a performance cost resulting from executing a first thread associated with the first workload on a first processor included in the plurality of processors and subsequently executing a second thread associated with a second workload included in the plurality of workloads on the first processor.
[0150] 19. The one or more non-transitory computer readable media of any of clauses 11-18, wherein the forecasted amounts of processor use comprise predicted amounts of processor use associated with a statistical level of confidence.
[0151] 20. In some embodiments, a system for executing workloads on processors comprises one or more memories storing instructions; and one or more processors that are coupled to the one or more memories and, when executing the instructions, are configured to compute a forecasted amount of processor use for each workload included in a plurality of workloads using a trained machine-learning model; based on the forecasted amounts of processor use, compute a performance cost estimate associated with an estimated level of cache interference arising from executing the plurality of workloads on a plurality of processors; and perform one or more optimization operations on a first plurality of processor assignments based on the performance cost estimate to generate a second plurality of processor
assignments, wherein at least one processor included in the plurality of processors is subsequently configured to execute at least a portion of a first workload included in the plurality of workloads based on at least one processor assignment included in the second plurality of processor assignments.
[0152] Any and all combinations of any of the claim elements recited in any of the claims and/or any elements described in this application, in any fashion, fall within the contemplated scope of the present invention and protection.
[0153] The descriptions of the various embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments.
[0154] Aspects of the present embodiments may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an
embodiment combining software and hardware aspects that may all generally be referred to herein as a“module” or“system.” In addition, any hardware and/or software technique, process, function, component, engine, module, or system described in the present disclosure may be implemented as a circuit or set of circuits. Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
[0155] Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only
memory (EPROM or Flash memory), an optical fiber, a portable compact disc read only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
[0156] Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program
instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine. The instructions, when executed via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such processors may be, without limitation, general purpose processors, special-purpose processors, application-specific processors, or field-programmable gate arrays.
[0157] The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that
perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
[0158] While the preceding is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
Claims (20)
1. A computer-implemented method for executing workloads on processors, the method comprising:
computing a forecasted amount of processor use for each workload included in a first plurality of workloads using a trained machine-learning model; based on the forecasted amounts of processor use, computing a performance cost estimate associated with an estimated level of cache interference arising from executing the first plurality of workloads on a first plurality of processors; and
determining at least one processor assignment based on the performance cost estimate,
wherein at least one processor included in the first plurality of processors is subsequently configured to execute at least a portion of a first workload included in the first plurality of workloads based on the at least one processor assignment.
2. The computer-implemented method of claim 1, wherein the first workload comprises a container, an execution thread, or a task.
3. The computer-implemented method of claim 1, wherein the first plurality of processors are included in a non-uniform memory access multiprocessor instance.
4. The computer-implemented method of claim 1, wherein computing the forecasted amount of processor use for the first workload comprises:
computing at least one time-series feature based on a measured amount of processor use associated with executing the first workload on the first plurality of processors;
computing at least one contextual feature based on metadata associated with the first workload; and
inputting the at least one time-series feature and the at least one contextual feature into the trained machine-learning model.
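For illustration only (this is not the claimed method; the helper names, metadata fields, and feature choices below are assumptions), a minimal Python sketch of how time-series and contextual features for one workload might be assembled and fed to a trained model:

```python
# Illustrative only: hypothetical feature construction for one workload.
import numpy as np

def build_features(cpu_usage_history, metadata):
    """Combine time-series features (from measured processor use) with
    contextual features (from workload metadata) into one input vector."""
    usage = np.asarray(cpu_usage_history, dtype=float)

    # Time-series features computed from the measured processor use.
    time_series = [usage[-1], usage.mean(), usage.max(), usage.std()]

    # Contextual features computed from workload metadata.
    contextual = [
        float(metadata.get("requested_cpus", 0)),
        float(metadata.get("requested_memory_gb", 0)),
        hash(metadata.get("app_name", "")) % 1000,   # crude categorical encoding
        hash(metadata.get("user_id", "")) % 1000,
    ]
    return np.array(time_series + contextual)

# Usage (trained_model is assumed to exist):
# forecast = trained_model.predict(build_features(history, meta).reshape(1, -1))[0]
```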
5. The computer-implemented method of claim 1, wherein determining the at least one processor assignment comprises executing one or more integer
programming operations based on a first binary assignment matrix to generate a second binary assignment matrix that specifies the at least one processor assignment.
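Claim 5 recites integer programming over binary assignment matrices; a production system would likely use an ILP solver, but as a hedged stand-in the toy local-search sketch below shows the shape of the data (a binary matrix with one row per thread and one column per processor) and the idea of improving an existing assignment against a cost function. All names are hypothetical.

```python
# Illustrative only: a toy local-search stand-in for the integer-programming step.
import numpy as np

def improve_assignment(assignment, cost_fn, n_iters=200, seed=0):
    """assignment: binary matrix, one row per thread, one column per processor;
    exactly one 1 per row. Keeps any single-thread move that lowers cost_fn."""
    best = assignment.copy()
    best_cost = cost_fn(best)
    n_threads, n_procs = best.shape
    rng = np.random.default_rng(seed)
    for _ in range(n_iters):
        t = int(rng.integers(n_threads))
        p = int(rng.integers(n_procs))
        candidate = best.copy()
        candidate[t, :] = 0        # unassign thread t ...
        candidate[t, p] = 1        # ... and move it to processor p
        cost = cost_fn(candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best
```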
6. The computer-implemented method of claim 1, wherein the first plurality of processors are partitioned into at least a first subset of processors that share a first lowest-level cache (LLC) and a second subset of processors that share a second LLC, and wherein computing the performance cost estimate comprises estimating a cache interference cost based on an imbalance in predicted pressures between the first LLC and the second LLC.
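One hedged way to picture the imbalance-based interference cost in claim 6 (an illustrative sketch, not the patented cost model; scoring imbalance as the variance of predicted per-LLC pressure is an assumption):

```python
# Illustrative only: variance of predicted per-LLC pressure as an interference score.
def llc_imbalance_cost(forecast_cpu_use, assignment, llc_of_processor):
    """forecast_cpu_use[t]: forecasted processor use for thread t.
    assignment[t][p] == 1 when thread t is assigned to processor p.
    llc_of_processor[p]: index of the lowest-level cache shared by processor p."""
    n_llcs = max(llc_of_processor) + 1
    pressure = [0.0] * n_llcs
    for t, row in enumerate(assignment):
        for p, assigned in enumerate(row):
            if assigned:
                pressure[llc_of_processor[p]] += forecast_cpu_use[t]
    mean = sum(pressure) / n_llcs
    # A larger spread in predicted pressure across LLCs means a higher estimated cost.
    return sum((x - mean) ** 2 for x in pressure) / n_llcs
```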
7. The computer-implemented method of claim 1, wherein each processor included in the first plurality of processors is associated with a different set of one or more cache memories, and wherein computing the performance cost estimate comprises estimating a cache interference cost based on an imbalance in predicted pressures across the sets of one or more cache memories.
8. The computer-implemented method of claim 1, wherein computing the performance cost estimate comprises estimating a cache sharing cost resulting from sharing at least one of a level one cache memory and a level two cache memory between a first execution thread associated with the first workload and a second execution thread associated with a second workload included in the first plurality of workloads.
9. The computer-implemented method of claim 1, further comprising:
for each workload included in a second plurality of workloads, acquiring a set of attributes associated with the workload and a measured amount of processor use associated with executing at least a portion of the workload on at least one processor included in a second plurality of processors; and
executing one or more machine-learning algorithms to generate the trained
machine-learning model based on the measured amounts of processor use and the sets of attributes.
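A minimal sketch of the offline training step in claim 9, using scikit-learn's GradientBoostingRegressor purely as an illustrative stand-in for whatever model family is actually used (an assumption); with a quantile loss, the same sketch would also yield forecasts at a statistical level of confidence in the spirit of claim 19:

```python
# Illustrative only: scikit-learn is an assumption, not the model family named in the claims.
from sklearn.ensemble import GradientBoostingRegressor

def train_forecaster(feature_rows, measured_cpu_use, quantile=None):
    """feature_rows: one feature vector per observed workload (the attributes).
    measured_cpu_use: the measured processor use for those same workloads."""
    if quantile is None:
        model = GradientBoostingRegressor()
    else:
        # A quantile loss gives forecasts at a chosen statistical level of confidence.
        model = GradientBoostingRegressor(loss="quantile", alpha=quantile)
    model.fit(feature_rows, measured_cpu_use)
    return model
```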
10. The computer-implemented method of claim 1, wherein the trained machine-learning model comprises a conditional regression model.
11. One or more non-transitory computer readable media including instructions that, when executed by one or more processors, cause the one or more processors to execute workloads on processors by performing the steps of:
for each workload included in a plurality of workloads, inputting at least one feature associated with the workload into a trained machine-learning model to compute a forecasted amount of processor use associated with the workload;
based on the forecasted amounts of processor use, computing a performance cost estimate associated with an estimated level of cache interference arising from executing the plurality of workloads on a plurality of processors; and
determining a first plurality of processor assignments based on the
performance cost estimate,
wherein at least one processor included in the plurality of processors is
subsequently configured to execute at least a portion of a first workload included in the plurality of workloads based on at least one processor assignment included in the first plurality of processor assignments.
12. The one or more non-transitory computer readable media of claim 11, further comprising computing the at least one feature associated with the first workload based on a measured amount of processor use associated with executing the first workload on the plurality of processors.
13. The one or more non-transitory computer readable media of claim 11, further comprising computing the at least one feature associated with the first workload based on at least one of a number of requested processors associated with the first workload, an amount of requested memory associated with the first workload, a name of a software application associated with the first workload, and a user identifier associated with the first workload.
14. The one or more non-transitory computer readable media of claim 11, wherein determining the first plurality of processor assignments comprises executing one or more optimization operations based on a first binary assignment matrix that specifies a second plurality of processor assignments to generate a second binary assignment matrix that specifies the first plurality of processor assignments.
15. The one or more non-transitory computer readable media of claim 11, wherein the plurality of processors are partitioned into at least a first subset of processors that share a first lowest-level cache (LLC) and a second subset of processors that share a second LLC, and wherein computing the performance cost estimate comprises estimating a cache interference cost based on an imbalance in predicted pressures between the first LLC and the second LLC.
16. The one or more non-transitory computer readable media of claim 11, wherein each processor included in the plurality of processors is associated with a different set of one or more cache memories, and wherein computing the performance cost estimate comprises estimating a cache interference cost based on an imbalance in predicted pressures across the sets of one or more cache memories.
17. The one or more non-transitory computer readable media of claim 11, wherein the plurality of processors are partitioned into at least a first subset of processors that are included in a first socket and a second subset of processors that are included in a second socket, and wherein computing the performance cost estimate comprises estimating a cross-socket memory access cost resulting from executing a first execution thread associated with the first workload on a first processor included in the first subset of processors while also executing a second execution thread associated with the first workload on a second processor included in the second subset of processors.
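A hedged sketch of the cross-socket memory access cost described in claim 17 (the per-socket penalty and all names are assumptions, not the claimed cost function):

```python
# Illustrative only: penalize splitting one workload's threads across sockets.
def cross_socket_cost(assignment, socket_of_processor, threads_of_workload,
                      penalty_per_extra_socket=1.0):
    """assignment[t][p] == 1 when thread t is assigned to processor p.
    threads_of_workload: mapping from workload id to its thread indices."""
    cost = 0.0
    for threads in threads_of_workload.values():
        sockets_used = set()
        for t in threads:
            for p, assigned in enumerate(assignment[t]):
                if assigned:
                    sockets_used.add(socket_of_processor[p])
        # Each extra socket implies remote memory accesses for this workload.
        if len(sockets_used) > 1:
            cost += penalty_per_extra_socket * (len(sockets_used) - 1)
    return cost
```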
18. The one or more non-transitory computer readable media of claim 11, wherein computing the performance cost estimate comprises estimating a performance cost resulting from executing a first thread associated with the first workload on a first processor included in the plurality of processors and subsequently executing a
second thread associated with a second workload included in the plurality of workloads on the first processor.
19. The one or more non-transitory computer readable media of claim 11, wherein the forecasted amounts of processor use comprise predicted amounts of processor use associated with a statistical level of confidence.
20. A system for executing workloads on processors, the system comprising:
one or more memories storing instructions; and
one or more processors that are coupled to the one or more memories and, when executing the instructions, are configured to:
compute a forecasted amount of processor use for each workload
included in a plurality of workloads using a trained machine-learning model;
based on the forecasted amounts of processor use, compute a
performance cost estimate associated with an estimated level of cache interference arising from executing the plurality of workloads on a plurality of processors; and
perform one or more optimization operations on a first plurality of
processor assignments based on the performance cost estimate to generate a second plurality of processor assignments, wherein at least one processor included in the plurality of processors is subsequently configured to execute at least a portion of a first workload included in the plurality of workloads based on at least one processor assignment included in the second plurality of processor assignments.
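Tying the pieces together, a hedged end-to-end sketch in the spirit of claim 20, reusing the hypothetical helpers from the earlier sketches (build_features, llc_imbalance_cost, improve_assignment); for simplicity it assumes one thread per workload, so each row of the assignment matrix corresponds to one workload:

```python
# Illustrative only: end-to-end flow using the hypothetical helpers defined above.
def plan_assignments(workloads, current_assignment, model, llc_of_processor):
    # Each w is assumed to look like {"history": [cpu samples], "meta": {...}}.
    # 1. Forecast processor use per workload with the trained machine-learning model.
    forecasts = [
        float(model.predict(build_features(w["history"], w["meta"]).reshape(1, -1))[0])
        for w in workloads
    ]

    # 2. The cost of a candidate assignment is the estimated cache interference.
    def cost_fn(candidate):
        return llc_imbalance_cost(forecasts, candidate, llc_of_processor)

    # 3. Optimize the current binary assignment matrix into an improved one.
    return improve_assignment(current_assignment, cost_fn)
```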
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962855649P | 2019-05-31 | 2019-05-31 | |
US62/855,649 | 2019-05-31 | ||
US16/510,756 | 2019-07-12 | ||
US16/510,756 US11429525B2 (en) | 2019-05-31 | 2019-07-12 | Reducing cache interference based on forecasted processor use |
PCT/US2020/034943 WO2020243318A1 (en) | 2019-05-31 | 2020-05-28 | Reducing cache interference based on forecasted processor use |
Publications (2)
Publication Number | Publication Date |
---|---|
AU2020283588A1 (en) | 2021-12-23 |
AU2020283588B2 (en) | 2023-03-09 |
Family
ID=73549690
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2020283588A Active AU2020283588B2 (en) | 2019-05-31 | 2020-05-28 | Reducing cache interference based on forecasted processor use |
Country Status (7)
Country | Link |
---|---|
US (1) | US11429525B2 (en) |
EP (1) | EP3977281A1 (en) |
AU (1) | AU2020283588B2 (en) |
BR (1) | BR112021024162A2 (en) |
CA (1) | CA3141319C (en) |
MX (1) | MX2021014466A (en) |
WO (1) | WO2020243318A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11182183B2 (en) * | 2019-07-15 | 2021-11-23 | Vmware, Inc. | Workload placement using conflict cost |
US20220156639A1 (en) * | 2019-08-07 | 2022-05-19 | Hewlett-Packard Development Company, L.P. | Predicting processing workloads |
US11561706B2 (en) * | 2019-11-20 | 2023-01-24 | International Business Machines Corporation | Storage allocation enhancement of microservices based on phases of a microservice run |
US11836525B2 (en) * | 2020-12-17 | 2023-12-05 | Red Hat, Inc. | Dynamic last level cache allocation for cloud real-time workloads |
WO2024129068A1 (en) * | 2022-12-13 | 2024-06-20 | Robin Systems, Inc | Dynamic cpu allocation on failover |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6615316B1 (en) * | 2000-11-16 | 2003-09-02 | International Business Machines, Corporation | Using hardware counters to estimate cache warmth for process/thread schedulers |
US8429665B2 (en) | 2010-03-19 | 2013-04-23 | Vmware, Inc. | Cache performance prediction, partitioning and scheduling based on cache pressure of threads |
US8533719B2 (en) * | 2010-04-05 | 2013-09-10 | Oracle International Corporation | Cache-aware thread scheduling in multi-threaded systems |
US8898390B2 (en) | 2011-03-08 | 2014-11-25 | Intel Corporation | Scheduling workloads based on cache asymmetry |
US9268542B1 (en) * | 2011-04-28 | 2016-02-23 | Google Inc. | Cache contention management on a multicore processor based on the degree of contention exceeding a threshold |
US8732291B2 (en) * | 2012-01-13 | 2014-05-20 | Accenture Global Services Limited | Performance interference model for managing consolidated workloads in QOS-aware clouds |
US10255091B2 (en) | 2014-09-21 | 2019-04-09 | Vmware, Inc. | Adaptive CPU NUMA scheduling |
US10102033B2 (en) | 2016-05-26 | 2018-10-16 | International Business Machines Corporation | Method and system for performance ticket reduction |
US10896059B2 (en) * | 2017-03-13 | 2021-01-19 | International Business Machines Corporation | Dynamically allocating cache in a multi-tenant processing infrastructure |
KR102028096B1 (en) * | 2017-04-18 | 2019-10-02 | 한국전자통신연구원 | Apparatus and method for isolation of virtual machine based on hypervisor |
US11003592B2 (en) * | 2017-04-24 | 2021-05-11 | Intel Corporation | System cache optimizations for deep learning compute engines |
US10346166B2 (en) | 2017-04-28 | 2019-07-09 | Intel Corporation | Intelligent thread dispatch and vectorization of atomic operations |
US10223282B2 (en) | 2017-05-23 | 2019-03-05 | International Business Machines Corporation | Memory affinity management |
US20190213130A1 (en) * | 2018-01-05 | 2019-07-11 | Intel Corporation | Efficient sector prefetching for memory side sectored cache |
US10942767B2 (en) * | 2018-02-27 | 2021-03-09 | Microsoft Technology Licensing, Llc | Deep neural network workload scheduling |
WO2019209674A1 (en) * | 2018-04-25 | 2019-10-31 | President And Fellows Of Harvard College | Systems and methods for designing data structures and synthesizing costs |
US11360891B2 (en) * | 2019-03-15 | 2022-06-14 | Advanced Micro Devices, Inc. | Adaptive cache reconfiguration via clustering |
- 2019
  - 2019-07-12 US US16/510,756 patent/US11429525B2/en active Active
- 2020
  - 2020-05-28 BR BR112021024162A patent/BR112021024162A2/en active Search and Examination
  - 2020-05-28 EP EP20760592.4A patent/EP3977281A1/en active Pending
  - 2020-05-28 AU AU2020283588A patent/AU2020283588B2/en active Active
  - 2020-05-28 MX MX2021014466A patent/MX2021014466A/en unknown
  - 2020-05-28 WO PCT/US2020/034943 patent/WO2020243318A1/en unknown
  - 2020-05-28 CA CA3141319A patent/CA3141319C/en active Active
Also Published As
Publication number | Publication date |
---|---|
CA3141319A1 (en) | 2020-12-03 |
BR112021024162A2 (en) | 2022-01-11 |
EP3977281A1 (en) | 2022-04-06 |
MX2021014466A (en) | 2022-01-06 |
US11429525B2 (en) | 2022-08-30 |
CA3141319C (en) | 2024-04-16 |
US20200379907A1 (en) | 2020-12-03 |
WO2020243318A1 (en) | 2020-12-03 |
AU2020283588B2 (en) | 2023-03-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FGA | Letters patent sealed or granted (standard patent) |