US20200142736A1 - Computer processing system with resource optimization and associated methods - Google Patents

Computer processing system with resource optimization and associated methods

Info

Publication number
US20200142736A1
US20200142736A1 (application US16/734,667)
Authority
US
United States
Prior art keywords
priority level
processor
executed
response
resource classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/734,667
Inventor
Pierre Marmignon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Citrix Systems Inc
Original Assignee
Citrix Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Citrix Systems Inc filed Critical Citrix Systems Inc
Priority to US16/734,667
Assigned to CITRIX SYSTEMS, INC. reassignment CITRIX SYSTEMS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MARMIGNON, PIERRE
Publication of US20200142736A1
Assigned to WILMINGTON TRUST, NATIONAL ASSOCIATION reassignment WILMINGTON TRUST, NATIONAL ASSOCIATION SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CITRIX SYSTEMS, INC.
Assigned to WILMINGTON TRUST, NATIONAL ASSOCIATION, AS NOTES COLLATERAL AGENT reassignment WILMINGTON TRUST, NATIONAL ASSOCIATION, AS NOTES COLLATERAL AGENT PATENT SECURITY AGREEMENT Assignors: CITRIX SYSTEMS, INC., TIBCO SOFTWARE INC.
Assigned to GOLDMAN SACHS BANK USA, AS COLLATERAL AGENT reassignment GOLDMAN SACHS BANK USA, AS COLLATERAL AGENT SECOND LIEN PATENT SECURITY AGREEMENT Assignors: CITRIX SYSTEMS, INC., TIBCO SOFTWARE INC.
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT reassignment BANK OF AMERICA, N.A., AS COLLATERAL AGENT PATENT SECURITY AGREEMENT Assignors: CITRIX SYSTEMS, INC., TIBCO SOFTWARE INC.
Assigned to WILMINGTON TRUST, NATIONAL ASSOCIATION, AS NOTES COLLATERAL AGENT reassignment WILMINGTON TRUST, NATIONAL ASSOCIATION, AS NOTES COLLATERAL AGENT PATENT SECURITY AGREEMENT Assignors: CITRIX SYSTEMS, INC., CLOUD SOFTWARE GROUP, INC. (F/K/A TIBCO SOFTWARE INC.)
Assigned to CLOUD SOFTWARE GROUP, INC. (F/K/A TIBCO SOFTWARE INC.), CITRIX SYSTEMS, INC. reassignment CLOUD SOFTWARE GROUP, INC. (F/K/A TIBCO SOFTWARE INC.) RELEASE AND REASSIGNMENT OF SECURITY INTEREST IN PATENT (REEL/FRAME 062113/0001) Assignors: GOLDMAN SACHS BANK USA, AS COLLATERAL AGENT
Current legal status: Abandoned

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals

Abstract

A computing device includes a processor to execute a process, and at least one memory coupled to the processor. The processor compares execution of the process to at least one global parameter, adjusts a priority level of the executed process in response to satisfaction of the at least one global parameter, and allocates a portion of the memory in response to the adjusted priority level.

Description

    RELATED APPLICATIONS
  • This application is a continuation of U.S. application Ser. No. 15/416,090, filed on Jan. 26, 2017, which claims the benefit of provisional application Ser. No. 62/287,638, filed Jan. 27, 2016, which is incorporated herein in its entirety by reference.
  • TECHNICAL FIELD
  • The present disclosure relates to the field of computer processing system monitoring, and more particularly, to optimizing resource usage by applications executing within the computer processing system.
  • BACKGROUND
  • In a Windows-based computing environment, allocations of resources such as CPU, memory, and input/output operations per second (IOPS) are not optimized by the underlying operating system. This may create user experience and hardware density issues.
  • While Windows processes can have several levels of CPU and I/O priority, and can be spread across one or several CPUs/cores, these settings are not adjusted dynamically to reflect actual user activity. Thus, real user activities may be subject to unnecessary slowdowns.
  • The same issue exists with memory (e.g., RAM) resources, where the computing environment relies on developers to properly optimize the amount of memory their applications use. In practice, however, this is almost never done.
  • SUMMARY
  • A computer processing system comprises at least one processor configured to execute a process, and at least one memory coupled to the at least one processor. The at least one processor is further configured to compare execution of the process to at least one global parameter, adjust a priority level of the executed process in response to satisfaction of the at least one global parameter, and allocate a portion of the memory in response to the adjusted priority level.
  • The priority level may be a processing priority level of the at least one processor. The priority level may be an input/output (I/O) priority level of the at least one processor.
  • The at least one global parameter may comprise a processor consumption level. The portion of the memory may be allocated in response to a monitored execution of the process being less than the processor consumption level.
  • The at least one processor may be further configured to perform the following before execution of the process: identify the process to be executed; determine if the identified process matches a known resource classification, with the known resource classification having a known priority level associated therewith; and adjust the priority level of the process to be executed to the known priority level in response to the identified process matching the known resource classification.
  • The at least one processor may be further configured to adjust the priority level of the process to be executed to a default priority level in response to the identified process not matching the known resource classification, with the default priority level associated with a default resource classification.
  • The at least one memory may comprise a dedicated memory for supporting the process executable by the at least one processor, with the dedicated memory configured to store the known and default resource classifications.
  • The at least one processor may comprise a plurality of processors, and wherein the adjustment of the priority level further comprises changing how many processors are to be used to execute the process.
  • A resource classification of the executed process may be updated after the adjustment of the priority level. The computing device may further comprise an operating system stored in the at least one memory, wherein the at least one processor may be configured to operate based on the operating system, and wherein the at least one processor may be further configured to update the resource classification via the operating system.
  • Another aspect is directed to a computing device that comprises at least one processor configured to execute a process, and at least one memory coupled to the at least one processor. The at least one processor is further configured to monitor execution of the process by determining processing resources consumed, compare execution of the process to a processor consumption level, adjust a priority level of the executed process in response to the consumption of processing resources being less than the processor consumption level, and allocate a portion of the memory in response to the adjusted priority level.
  • Another aspect is directed to a method comprising monitoring execution of a process by at least one processor that is supported by at least one memory coupled to the at least one processor, with the monitoring including determining processing resources consumed by the at least one processor. The method further includes comparing execution of the process to a processor consumption level, adjusting a priority level of the executed process in response to the consumption of processing resources being less than the processor consumption level, and allocating a portion of the memory in response to the adjusted priority level.
  • Yet another aspect is directed to a method comprising monitoring execution of a process by at least one processor that is supported by at least one memory coupled to the at least one processor. The method further includes comparing execution of the process to at least one global parameter, adjusting a priority level of the executed process in response to satisfaction of the at least one global parameter, and allocating a portion of the memory in response to the adjusted priority level.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments, which, however, should not be taken to limit the disclosure to the specific embodiments, but are for explanation and understanding only.
  • FIG. 1 is a block diagram of a computing environment having an intelligent resource optimization engine for intelligently and dynamically adjusting processing system resources for processes that are executed.
  • FIG. 2 is a flow chart illustrating a process for intelligently managing computing systems resources consumed by processes executing within the computing system.
  • FIG. 3 is one embodiment of a computer system that may be used with the present disclosure.
  • DETAILED DESCRIPTION
  • The present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments of the disclosure are shown. This disclosure may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope thereof to those skilled in the art. Like numbers refer to like elements throughout, and prime notation is used to indicate similar elements in alternative embodiments.
  • In the following description, numerous details are set forth. It will be apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that the present disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present disclosure.
  • In the embodiments discussed herein, processes executed in a computing environment, such as a WINDOWS™ computing environment, are subject to optimizations done in parallel with the loading and execution of the application. In one embodiment, a common process behavior analysis engine, referred to herein as an intelligent resource optimization engine, monitors processes and associates those processes with classifications relevant to resource allocation and/or utilization within the computing environment, and adjusts the classifications dynamically as the processes are loaded and executed by the computing environment.
  • Some portions of the detailed description that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
  • It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as generating, converting, executing, storing, receiving, obtaining, constructing, accessing, capturing, or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
  • The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
  • The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein.
  • FIG. 1 illustrates one embodiment of a computing environment 100 having an intelligent resource optimization engine 120 for intelligently and dynamically adjusting processing system resources 110 for processes 140 executed within the computing environment.
  • In embodiments, computing environments, such as those enabled by the WINDOWS™ operating system, allow the configuration of CPU and I/O priority levels 115.
  • Typical levels available for one or more processors executing the computing environment 100, referred to herein as CPU(s), are as follows: Realtime, High, Above Normal, Normal, Below Normal, Low, and Idle.
  • Typical levels for I/O processes executing within the computing environment 100 are as follows: High, Normal, Low, and Very Low.
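  • The CPU levels listed above correspond closely to the priority classes exposed by the documented Win32 API. The following is a minimal sketch, not the patented engine, of how such a level could be applied to a process; note that Win32 defines no separate class for "Low", which is mapped to the idle class here as an assumption, and that no equivalent documented public Win32 call sets a per-process I/O priority, so I/O is omitted.

```cpp
// Minimal sketch (assumption, not the patented implementation): apply one of
// the named CPU priority levels to a process via the documented Win32 API.
#include <windows.h>
#include <string>

static DWORD PriorityClassFromLevel(const std::wstring& level) {
    if (level == L"Realtime")     return REALTIME_PRIORITY_CLASS;
    if (level == L"High")         return HIGH_PRIORITY_CLASS;
    if (level == L"Above Normal") return ABOVE_NORMAL_PRIORITY_CLASS;
    if (level == L"Below Normal") return BELOW_NORMAL_PRIORITY_CLASS;
    // Win32 has no distinct "Low" class; mapping it to Idle is an assumption.
    if (level == L"Low" || level == L"Idle") return IDLE_PRIORITY_CLASS;
    return NORMAL_PRIORITY_CLASS;
}

bool SetCpuPriorityLevel(DWORD pid, const std::wstring& level) {
    HANDLE h = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
    if (!h) return false;                        // e.g., access denied
    BOOL ok = SetPriorityClass(h, PriorityClassFromLevel(level));
    CloseHandle(h);
    return ok != FALSE;
}
```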
  • On loaded environments, all user processes compete for CPU and I/O. This also increases context switching and creates significant performance overhead, dramatically decreasing application performance.
  • In virtual environments (virtual desktops or server-based computing), this default behavior also leads to low user density (a low number of users per server). While users can adjust CPU (but not I/O) priorities manually through the task manager, this is not common practice.
  • With the exception of some computing system internal processes, all processes generally start with a “normal” priority, regardless of their history of abnormal behavior. While one could programmatically decrease the priority of a process when it exhibits abnormal behavior, that alone is not enough.
  • The effect of such a technique depends on the priority gap between the process exhibiting abnormal behavior and the others. Lowering the priority of a misbehaving process has some effect, but the impact is smaller when all other processes run at the “normal” priority than it would be if they ran at the “high” priority level, which is not the default. For example, if a process consuming 100% of the CPU is decreased to a low or idle priority level, it will still slow down processes running at the “normal” priority level, while it would not impact processes running at the “high” priority level.
  • Another reason is that because the decision to lower the priority is made only after a sample time, the whole system slows down during that sample time.
  • Yet another reason is that such a technique does not properly isolate the abnormal process, which will still be able to consume as many CPU resources as it can.
  • Another, more aggressive technique, called clamping, consists of repeatedly pausing and restarting a process at short intervals (usually milliseconds) to limit the maximum amount of CPU it can use. This technique has a dramatic effect on the user experience, as it creates a sawtooth effect in application responsiveness and in the process's CPU usage.
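  • For background only, clamping can be sketched with the documented Toolhelp and thread APIs by suspending and resuming a target process's threads on a fixed duty cycle; the interval values are arbitrary examples, and, as noted later in this description, the intelligent resource optimization engine 120 avoids this technique.

```cpp
// Illustrative sketch of CPU clamping (the aggressive technique described
// above, which the engine avoids): alternately suspend and resume all threads
// of a target process so it can only run part of the time.
#include <windows.h>
#include <tlhelp32.h>

static void SuspendOrResumeThreads(DWORD pid, bool suspend) {
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE) return;
    THREADENTRY32 te = { sizeof(te) };
    for (BOOL more = Thread32First(snap, &te); more; more = Thread32Next(snap, &te)) {
        if (te.th32OwnerProcessID != pid) continue;
        HANDLE t = OpenThread(THREAD_SUSPEND_RESUME, FALSE, te.th32ThreadID);
        if (!t) continue;
        if (suspend) SuspendThread(t); else ResumeThread(t);
        CloseHandle(t);
    }
    CloseHandle(snap);
}

void ClampLoop(DWORD pid) {
    for (;;) {
        SuspendOrResumeThreads(pid, true);   // pause the process
        Sleep(10);                           // paused interval (ms), arbitrary
        SuspendOrResumeThreads(pid, false);  // let it run again
        Sleep(10);                           // running interval (ms), arbitrary
        // Equal pause/run intervals roughly halve the CPU the process can use,
        // at the cost of the sawtooth responsiveness described above.
    }
}
```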
  • Today's computers are usually equipped with multiple CPUs or a single CPU with multiple cores. Computing systems, such as MICROSOFT WINDOWS™ operating systems, expose a setting, called affinity, allowing the configuration of the core(s) on which application processes, such as processes 140, will be executed.
  • Old applications are likely to start with a single CPU/core affinity, or with an affinity covering fewer cores than are available, which can lead to poor performance. CPUs/cores are internally identified by an ID, and in a normal environment, when the lower-ID cores are saturated, the operating system itself slows down significantly.
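  • The highest-core-IDs affinity policy used later in this description can be illustrated, under the assumption of the documented Win32 affinity APIs, with a helper like the following sketch:

```cpp
// Sketch: restrict a process to the N highest-numbered logical processors,
// leaving the lower-ID cores (which the operating system favors) less loaded.
// The process handle needs PROCESS_QUERY_INFORMATION and PROCESS_SET_INFORMATION.
#include <windows.h>

bool RestrictToHighestCores(HANDLE hProcess, unsigned coresToKeep) {
    DWORD_PTR processMask = 0, systemMask = 0;
    if (!GetProcessAffinityMask(hProcess, &processMask, &systemMask))
        return false;

    unsigned total = 0;                               // logical processors present
    for (DWORD_PTR m = systemMask; m != 0; m >>= 1) total += (unsigned)(m & 1);
    if (coresToKeep == 0 || coresToKeep >= total)
        return true;                                  // nothing to restrict

    DWORD_PTR newMask = 0;
    unsigned kept = 0;
    for (int bit = 8 * (int)sizeof(DWORD_PTR) - 1; bit >= 0 && kept < coresToKeep; --bit) {
        DWORD_PTR b = (DWORD_PTR)1 << bit;
        if (systemMask & b) { newMask |= b; ++kept; }  // keep only the highest IDs
    }
    return SetProcessAffinityMask(hProcess, newMask) != FALSE;
}
```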
  • In one embodiment, the intelligent optimization engine 120 will classify running user processes (e.g., processes 140) as belonging to one of a plurality of different classification levels indicative of a process's known behavior. For example, the classification levels could be as follows: Good, Bad, Really Bad, and Really Really Bad.
  • By default, unless a rule has been specified for a specific process, that process will be considered good until its actual behavior is known, as discussed below.
  • As soon as a process is started, the intelligent resource optimization engine 120 will adjust the process's CPU and I/O priorities and CPU/Cores affinity against:
  • Any manual rule that would have been defined by an administrator.
  • Rules defined by the self-learning mechanisms of the intelligent resource optimization engine 120 may be as follows (a sketch applying them appears after this list):
  • If the process is known as “Good”, CPU priority will be adjusted to “High”, I/O priority to “High”, and affinity to all available CPUs/cores;
  • If the process is known as “Bad”, CPU priority will be adjusted to “Above Normal”, I/O priority will not be adjusted (staying at “Normal”), and affinity will not be adjusted;
  • If the process is known as “Really Bad”, CPU priority will be adjusted to “Normal”, I/O priority to “Low”, and affinity will not be adjusted; and
  • If the process is known as “Really Really Bad”, CPU priority will be adjusted to “Below Normal”, I/O priority to “Low”, and affinity will not be adjusted.
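  • The sketch below summarizes those rules as a classification-to-settings table and applies the CPU-priority and affinity portions with documented Win32 calls. The structure is an assumption for illustration; the I/O priority values from the rules are only noted in comments, because no equivalent documented public Win32 call sets them per process.

```cpp
// Sketch of the self-learning rules listed above (illustrative, not the
// patented implementation). Only CPU priority class and affinity are applied;
// the I/O priorities from the rules are recorded as comments.
#include <windows.h>

enum class Classification { Good, Bad, ReallyBad, ReallyReallyBad };

struct Settings {
    DWORD cpuPriorityClass;
    bool  useAllCores;   // true only for "Good" ("affinity to all available CPUs/cores")
};

static Settings SettingsFor(Classification c) {
    switch (c) {
        case Classification::Good:            // I/O priority: High
            return { HIGH_PRIORITY_CLASS,         true  };
        case Classification::Bad:             // I/O priority: stays Normal
            return { ABOVE_NORMAL_PRIORITY_CLASS, false };
        case Classification::ReallyBad:       // I/O priority: Low
            return { NORMAL_PRIORITY_CLASS,       false };
        case Classification::ReallyReallyBad: // I/O priority: Low
            return { BELOW_NORMAL_PRIORITY_CLASS, false };
    }
    return { NORMAL_PRIORITY_CLASS, false };
}

// Applied as soon as the process is started, per the description above.
void ApplyAtProcessCreation(HANDLE hProcess, Classification c) {
    Settings s = SettingsFor(c);
    SetPriorityClass(hProcess, s.cpuPriorityClass);
    if (s.useAllCores) {
        DWORD_PTR processMask = 0, systemMask = 0;
        if (GetProcessAffinityMask(hProcess, &processMask, &systemMask))
            SetProcessAffinityMask(hProcess, systemMask);   // all available cores
    }
    // For the "Bad" classifications, affinity is deliberately left unchanged.
}
```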
  • With such a system, any preconfigured or well-known “Bad” process/application will not have the ability to slow down other applications or the operating system itself, because the more an application/process is identified as “Bad”, the bigger the priority gap is. Adjustments are made right at process creation, not after a trigger.
  • Also, if other applications are not demanding resources, the application will remain unaffected and will not show any performance slowdown.
  • The benefits associated with intelligent and evolving process classification can be understood from the following example. Suppose a user has started to work and his applications have been adjusted as “good”. Then an antivirus scan starts. Because the applications' CPU and I/O priorities have been adjusted to “High”, the user's applications won't experience the usual slowdown caused by the antivirus software.
  • In one embodiment, the engine will monitor process activity and check process behavior against a global rule. The global rule defines when a process is experiencing abnormal CPU consumption. In one embodiment, the rule is expressed as an average CPU consumption (in percent of total CPU usage) over a specified period of time (in seconds).
  • One example of a rule for use by the intelligent resource optimization engine 120 is a rule under which a process is considered to have abnormal behavior when it exceeds 20% of CPU for 30 seconds. When such a rule is satisfied (e.g., the process exceeds the CPU usage percentage for the period of time), the process's classification can be adjusted accordingly.
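  • A minimal sketch of such a check, assuming a simple polling approach built on the documented GetProcessTimes call (the 20% and 30-second values come from the example above; everything else is illustrative):

```cpp
// Sketch: decide whether a process averaged more than a CPU threshold over a
// sampling window, e.g. more than 20% of total CPU capacity for 30 seconds.
// The process handle needs PROCESS_QUERY_LIMITED_INFORMATION (or broader) access.
#include <windows.h>

static ULONGLONG ToTicks(const FILETIME& ft) {          // 100-ns units
    ULARGE_INTEGER u;
    u.LowPart = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart;
}

static ULONGLONG CpuTicks(HANDLE hProcess) {
    FILETIME creationTime, exitTime, kernelTime, userTime;
    if (!GetProcessTimes(hProcess, &creationTime, &exitTime, &kernelTime, &userTime))
        return 0;
    return ToTicks(kernelTime) + ToTicks(userTime);
}

bool ExceedsCpuRule(HANDLE hProcess, double thresholdPercent, DWORD windowMs) {
    SYSTEM_INFO si;
    GetSystemInfo(&si);                                 // logical processor count

    ULONGLONG before = CpuTicks(hProcess);
    Sleep(windowMs);                                    // e.g. 30000 ms
    ULONGLONG after = CpuTicks(hProcess);

    // Total CPU capacity over the window, in 100-ns units (1 ms = 10,000 units).
    double capacity = (double)windowMs * 10000.0 * (double)si.dwNumberOfProcessors;
    double usedPercent = 100.0 * (double)(after - before) / capacity;
    return usedPercent > thresholdPercent;              // e.g. > 20.0
}
```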
  • In embodiments, an extension to this global CPU-based rule is an I/O-based rule for I/O optimization. When a process triggers the global rule, the following occurs: priorities and affinity are adjusted for a specified period of time (called “Idle Time”); CPU priority is set to “Idle”; I/O priority is set to “Very Low”; and affinity is adjusted to a preconfigured number of CPUs/cores, but limited to the highest core IDs to preserve the operating system.
  • In other embodiments, the intelligent resource optimization engine 120 will remember behavior and/or classifications associated with processes, and add the process to its table of “known processes” (e.g., processes classification storage 125).
  • The effect is to park a process that is currently exhibiting abnormal behavior for a specific period of time, dramatically restricting the amount of CPU and I/O it can consume and thus preserving the system and other applications.
  • When the idle time is over, the previous priorities and affinity will be restored according to any user-defined rule, any self-learning algorithm rule, or a previous state.
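  • One way to read this park-and-restore behavior is the following sketch, an assumed structure rather than the actual engine: capture the current CPU priority class and affinity, park the process at the idle class on the highest-ID cores, and restore the saved state once the configured idle time elapses.

```cpp
// Sketch: park a misbehaving process for idleTimeMs, then restore its state.
// RestrictToHighestCores() is the illustrative helper sketched earlier.
#include <windows.h>

bool RestrictToHighestCores(HANDLE hProcess, unsigned coresToKeep);  // see earlier sketch

void ParkThenRestore(HANDLE hProcess, DWORD idleTimeMs, unsigned parkedCores) {
    // Capture the current state so it can be restored later.
    DWORD previousClass = GetPriorityClass(hProcess);
    DWORD_PTR previousMask = 0, systemMask = 0;
    GetProcessAffinityMask(hProcess, &previousMask, &systemMask);

    // Park: lowest CPU priority, limited to the highest-ID cores.
    SetPriorityClass(hProcess, IDLE_PRIORITY_CLASS);
    RestrictToHighestCores(hProcess, parkedCores);
    // (The "Very Low" I/O priority would be applied here as well; it is omitted
    //  because no documented public per-process Win32 setter exists for it.)

    Sleep(idleTimeMs);                       // the configured "Idle Time"

    // Restore the previous values (or any user-defined / self-learned rule).
    if (previousClass != 0) SetPriorityClass(hProcess, previousClass);
    if (previousMask != 0)  SetProcessAffinityMask(hProcess, previousMask);
}
```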
  • Each time a process triggers the rule on a specific computer/machine, the process is remembered, and based on this history the intelligent algorithm is able to classify processes and take actions accordingly.
  • In embodiments, adjusting priorities allows the intelligent resource optimization engine 120 to give precedence to some applications/processes over others, limit affinity to the highest-ID cores/CPUs, and so on. In this way, the intelligent resource optimization engine 120 limits the maximum amount of CPU an application/process can consume without resorting to more aggressive techniques (such as clamping) that would dramatically affect the user experience (i.e., freezing applications).
  • Although the intelligent optimization engine 120 does not utilize CPU clamping techniques, in one embodiment it could implement both techniques to solve different issues. In that case, any clamped process would be excluded from the intelligent engine.
  • Furthermore, while computing systems generally provide APIs to clean a process's memory, such as discarding unused pages, this is not automatic and must be implemented by application developers (which, again, they rarely do). This leads to applications overconsuming physical memory (RAM), causing users to add more and more memory to their computers/servers. In one embodiment, the intelligent resource optimization engine 120 will monitor process activity and, based on an activity rule, will periodically trigger the computing system APIs that allow process memory to be reclaimed (e.g., by moving unused memory pages to the pagefile), thus freeing physical memory for other processes running on the same device and dramatically decreasing the minimum physical memory required per user.
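  • As an assumed illustration of this memory step, the documented SetProcessWorkingSetSize call, given (SIZE_T)-1 for both bounds, asks Windows to remove as many pages as possible from a process's working set so they can be paged out and the physical memory reused; a hypothetical monitoring loop would invoke it only for processes the activity rule marks as inactive.

```cpp
// Sketch: trim the working set of a process so its unused pages can move to
// the pagefile, freeing physical memory (RAM) for other processes.
#include <windows.h>

bool TrimWorkingSet(DWORD pid) {
    HANDLE h = OpenProcess(PROCESS_SET_QUOTA | PROCESS_QUERY_INFORMATION, FALSE, pid);
    if (!h) return false;
    // (SIZE_T)-1 for both limits means "remove as many pages as possible".
    BOOL ok = SetProcessWorkingSetSize(h, (SIZE_T)-1, (SIZE_T)-1);
    CloseHandle(h);
    return ok != FALSE;
}
```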
  • FIG. 2 is a flow chart 200 illustrating a process for intelligently managing computing system resources consumed by processes executing within the computing system. The process is performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general-purpose computer system or a dedicated machine), firmware, or a combination thereof. In one embodiment, the process is performed by the intelligent resource optimization engine 120.
  • From the start, the method comprises identifying the process to be executed at Block 202, and determining if the process has a known resource classification associated therewith at Block 204. The known resource classification corresponds to a processing priority level and an I/O priority level.
  • If the process is associated with the known resource classification, then at least one of the processing priority level and the I/O priority level for the process may be adjusted to the known resource classification at Block 206.
  • If the process is not associated with the known resource classification, then at least one of the processing priority level and the I/O priority level for the process may be adjusted to the default resource classification at Block 208.
  • The method further includes executing the process, monitoring behavior of the executed process, and comparing the behavior of the executed process to at least one global parameter at Block 210. At least one of the processing priority level and the I/O priority level is readjusted at Block 212 for the executed process if the monitored behavior meets the at least one global parameter. The resource classification is then updated at Block 214.
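  • Read as a skeleton, the Blocks 202-214 flow could be organized as follows; every helper below is a hypothetical placeholder standing in for the operation named in the flow chart, not an API of the actual engine.

```cpp
// Compilable skeleton of the FIG. 2 flow (Blocks 202-214). The stub helpers
// are hypothetical placeholders for the operations named in the flow chart.
#include <windows.h>

struct Classification { DWORD cpuPriorityClass; };

static bool HasKnownClassification(DWORD /*pid*/)           { return false; }  // Block 204 lookup (stub)
static Classification KnownClassificationFor(DWORD /*pid*/) { return { HIGH_PRIORITY_CLASS }; }
static Classification DefaultClassification()               { return { NORMAL_PRIORITY_CLASS }; }
static void Apply(DWORD /*pid*/, const Classification&)     {}                 // set CPU and I/O priority (stub)
static bool IsRunning(DWORD /*pid*/)                        { return false; }  // stub
static bool MeetsGlobalParameter(DWORD /*pid*/)             { return false; }  // e.g. >20% CPU for 30 s (stub)
static void ReadjustPriorities(DWORD /*pid*/)               {}                 // Block 212 (stub)
static void UpdateClassification(DWORD /*pid*/)             {}                 // Block 214 (stub)

void HandleProcess(DWORD pid) {                  // Block 202: process identified by pid
    if (HasKnownClassification(pid))             // Block 204
        Apply(pid, KnownClassificationFor(pid)); // Block 206
    else
        Apply(pid, DefaultClassification());     // Block 208

    while (IsRunning(pid)) {                     // Block 210: execute and monitor
        if (MeetsGlobalParameter(pid)) {
            ReadjustPriorities(pid);             // Block 212
            UpdateClassification(pid);           // Block 214
        }
        Sleep(1000);                             // sampling interval (arbitrary)
    }
}
```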
  • Monitoring the behavior includes determining how much processing is being consumed by the executed process, and the at least one global parameter includes a predetermined processor consumption level. The computer processing system includes at least one memory with a dedicated memory area for supporting the executed process, and the method further includes reallocating a portion of the dedicated memory area when the amount of consumed processing is less than the predetermined processor consumption level.
  • The readjusting includes changing at least one of the processing priority level and the I/O priority level.
  • The computer processing system includes a plurality of processors, and the adjusting further comprises changing how many processors are to be used to execute the process based on the known or default resource classification.
  • The computer processing system includes an operating system that is configured for adjusting the process to the known or default resource classification. As noted above, the operating system includes a Windows™ based operating system, for example. Alternatively, the intelligent resource optimization engine 120 is applicable to operating systems other than Windows™.
  • FIG. 3 is one embodiment of a computer system that may be used with the present disclosure. It will be apparent to those of ordinary skill in the art, however, that other alternative systems of various system architectures may also be used.
  • The data or computer processing system 300 illustrated in FIG. 3 includes a bus or other internal communication means 315 for communicating information, and at least one processor 310 coupled to a first bus 315 for processing information. The system further comprises at least one random access memory (RAM) or other volatile storage device 350 (referred to as memory), coupled to the first bus 315 for storing information and instructions to be executed by processor 310. The at least one main memory 350 also may be used for storing temporary variables or other intermediate information during execution of instructions by the at least one processor 310. The system also comprises a read only memory (ROM) and/or static storage device 320 coupled to the first bus 315 for storing static information and instructions for the at least one processor 310, and a data storage device 325, such as a magnetic disk or optical disk and its corresponding disk drive. The data storage device 325 is coupled to the first bus 315 for storing information and instructions. For illustration purposes, the resource optimization engine 120 is stored in the at least one main memory 350. The resource optimization engine 120 may be stored and/or accessible by the computer processing system 300 in locations other than the at least one main memory 350, as discussed below.
  • The computer processing system 300 may further be coupled to a display device 370, such as a liquid crystal display (LCD), coupled to the first bus 315 through a second bus 365 for displaying information to a computer user. An alphanumeric input device 375, including alphanumeric and other keys, may also be coupled to first bus 315 through the second bus 365 for communicating information and command selections to the at least one processor 310. An additional user input device is a cursor control device 380, such as a touchpad, mouse, a trackball, stylus, or cursor direction keys coupled to the first bus 315 through the second bus 365 for communicating direction information and command selections to the at least one processor 310, and for controlling cursor movement on the display device 370.
  • Another device, which may optionally be coupled to the computer processing system 300, is a communication device 390 for accessing other nodes of a distributed system via a network. The communication device 390 may include any of a number of commercially available networking peripheral devices, such as those used for coupling to an Ethernet, token ring, Internet, or wide area network. The communication device 390 may further be a null-modem connection, or any other mechanism that provides connectivity between the computer system 300 and the outside or external world. Note that any or all of the components of this system illustrated in FIG. 3 and associated hardware may be used in various embodiments of the present disclosure.
  • It will be appreciated by those of ordinary skill in the art that any configuration of the system may be used for various purposes according to the particular implementation. The control logic or software implementing the present disclosure can be stored in the main memory 350, mass storage device 325, or other storage medium locally or remotely accessible to the at least one processor 310.
  • It will be apparent to those of ordinary skill in the art that the system, method, and process described herein can be implemented as software stored in the main memory 350 or the read only memory 320 and executed by the at least one processor 310. This control logic or software may also be resident on an article of manufacture comprising a computer readable medium having computer readable program code embodied therein, readable by the mass storage device 325, and for causing the processor 310 to operate in accordance with the methods and teachings herein.
  • For example, a non-transitory computer readable medium may have a plurality of computer executable instructions for causing the computer processing system 300 to perform steps comprising identifying a process to be executed on the computer processing system, and determining if the process has a known resource classification associated therewith. The known resource classification may correspond to a processing priority level and an I/O priority level. If the process is associated with the known resource classification, then at least one of the processing priority level and the I/O priority level for the process may be adjusted to the known resource classification. If the process is not associated with the known resource classification, then at least one of the processing priority level and the I/O priority level for the process may be adjusted to the default resource classification.
  • The present disclosure may also be embodied in a handheld or portable device containing a subset of the computer hardware components described above. For example, the handheld device may be configured to contain only the first bus 315, the at least one processor 310, and memory 350 and/or 325. The handheld device may also be configured to include a set of buttons or input signaling components with which a user may select from a set of available options. The handheld device may also be configured to include an output apparatus, such as a liquid crystal display (LCD) or display element matrix for displaying information to a user of the handheld device. Conventional methods may be used to implement such a handheld device. The implementation of the present disclosure for such a device would be apparent to one of ordinary skill in the art.
  • The present disclosure may also be embodied in a special purpose appliance including a subset of the computer hardware components described above. For example, the appliance may include at least one processor 310, a data storage device 325, a bus 315, and a memory 350, and only rudimentary communications mechanisms, such as a small touch-screen that permits the user to communicate in a basic manner with the device. In general, the more special-purpose the device is, the fewer of the elements need be present for the device to function.
  • It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
  • Many modifications and other embodiments of the disclosure will come to the mind of one skilled in the art having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is understood that the disclosure is not to be limited to the specific embodiments disclosed, and that modifications and embodiments are intended to be included within the scope of the appended claims.

Claims (24)

That which is claimed is:
1. A computer processing system comprising:
at least one processor configured to execute a process;
at least one memory coupled to said at least one processor; and
said at least one processor is further configured to perform the following:
compare execution of the process to at least one global parameter,
adjust a priority level of the executed process in response to satisfaction of the at least one global parameter, and
allocate a portion of the memory in response to the adjusted priority level.
2. The computing device according to claim 1 wherein the priority level being a processing priority level of said at least one processor.
3. The computing device according to claim 1 wherein the priority level being an input/output (I/O) priority level of said at least one processor.
4. The computing device according to claim 1 wherein the at least one global parameter comprises a processor consumption level.
5. The computing device according to claim 4 wherein the portion of the memory is allocated in response to a monitored execution of the process being less than the processor consumption level.
6. The computing device according to claim 1 wherein said at least one processor is further configured to perform the following before execution of the process:
identify the process to be executed;
determine if the identified process matches a known resource classification, the known resource classification having a known priority level associated therewith; and
adjust the priority level of the process to be executed to the known priority level in response to the identified process matching the known resource classification.
7. The computer processing system according to claim 6 wherein said at least one processor is further configured to adjust the priority level of the process to be executed to a default priority level in response to the identified process not matching the known resource classification, the default priority level associated with a default resource classification.
8. The computer processing system according to claim 7 wherein said at least one memory comprises a dedicated memory for supporting the process executable by said at least one processor, the dedicated memory configured to store the known and default resource classifications.
9. The computer processing system according to claim 1 wherein said at least one processor comprises a plurality of processors, and wherein the adjustment of the priority level further comprises changing how many processors are to be used to execute the process.
10. The computer processing system according to claim 1 wherein a resource classification of the executed process is updated after the adjustment of the priority level.
11. The computer processing system according to claim 10 further comprising an operating system stored in said at least one memory, wherein said at least one processor is configured to operate based on the operating system, and wherein said at least one processor is further configured to update the resource classification via the operating system.
12. A computing device comprising:
at least one processor configured to execute a process;
at least one memory coupled to said at least one processor; and
said at least one processor is further configured to perform the following:
monitor execution of the process by determining processing resources consumed,
compare execution of the process to a processor consumption level,
adjust a priority level of the executed process in response to the consumption of processing resources being less than the processor consumption level, and
allocate a portion of the memory in response to the adjusted priority level.
13. The computing device according to claim 12 wherein the priority level is at least one of a processing priority level and an input/output (I/O) priority level of said at least one processor.
14. The computing device according to claim 12 wherein said at least one processor is further configured to perform the following before execution of the process:
identify the process to be executed;
determine if the identified process matches a known resource classification, the known resource classification having a known priority level associated therewith; and
adjust the priority level of the process to be executed to the known priority level in response to the identified process matching the known resource classification.
15. The computing device according to claim 14 wherein said at least one processor is further configured to adjust the priority level of the process to be executed to a default priority level in response to the identified process not matching the known resource classification, the default priority level associated with a default resource classification.
16. A method comprising:
monitoring execution of a process by at least one processor that is supported by at least one memory coupled to the at least one processor, the monitoring including determining processing resources consumed by the at least one processor;
comparing execution of the process to a processor consumption level;
adjusting a priority level of the executed process in response to the consumption of processing resources being less than the processor consumption level; and
allocating a portion of the memory in response to the adjusted priority level.
17. The method according to claim 16 wherein the priority level is at least one of a processing priority level and an input/output (I/O) priority level of the at least one processor.
18. The method according to claim 16 further comprising performing the following before execution of the process:
identifying the process to be executed;
determining if the identified process matches a known resource classification, the known resource classification having a known priority level associated therewith; and
adjusting the priority level of the process to be executed to the known priority level in response to the identified process matching the known resource classification.
19. The method according to claim 18 further comprising adjusting the priority level of the process to be executed to a default priority level in response to the identified process not matching the known resource classification, the default priority level associated with a default resource classification.
20. A method comprising:
monitoring execution of a process by at least one processor that is supported by at least one memory coupled to the at least one processor;
comparing execution of the process to at least one global parameter;
adjusting a priority level of the executed process in response to satisfaction of the at least one global parameter; and
allocating a portion of the memory in response to the adjusted priority level.
21. The method according to claim 20 wherein the priority level is at least one of a processing priority level and an input/output (I/O) priority level of the at least one processor.
22. The method according to claim 20 wherein the at least one global parameter comprises a processor consumption level, and wherein the portion of the memory is allocated in response to a monitored execution of the process being less than the processor consumption level.
23. The method according to claim 20 further comprising performing the following before execution of the process:
identifying the process to be executed;
determining if the identified process matches a known resource classification, the known resource classification having a known priority level associated therewith; and
adjusting the priority level of the process to be executed to the known priority level in response to the identified process matching the known resource classification.
24. The method according to claim 23 further comprising adjusting the priority level of the process to be executed to a default priority level in response to the identified process not matching the known resource classification, the default priority level associated with a default resource classification.
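
By way of illustration only, the following Python sketch outlines one way the claimed flow could be realized on a conventional operating system: a process is matched against known resource classifications before it runs (as in claims 6-8, 14-15, 18-19, and 23-24), and its priority is adjusted at runtime when its measured consumption falls below a processor consumption level, i.e., the global parameter (as in claims 1, 12, 16, and 20). The sketch relies on the third-party psutil package; the table, threshold, and function names (KNOWN_CLASSIFICATIONS, DEFAULT_PRIORITY, CPU_CONSUMPTION_THRESHOLD, classify_process, rebalance) are hypothetical and do not appear in the specification, and the memory-allocation step recited in the claims is only noted in a comment because it is platform specific.

```python
# Illustrative sketch only; not part of the claims or specification.
# Requires the third-party psutil package (pip install psutil). All names
# below (KNOWN_CLASSIFICATIONS, DEFAULT_PRIORITY, CPU_CONSUMPTION_THRESHOLD,
# classify_process, rebalance) are hypothetical placeholders.
import psutil

# Hypothetical "known resource classifications": executable name -> niceness
# value (POSIX-style; lower niceness means higher scheduling priority).
# On Windows, psutil expects priority-class constants instead of integers.
KNOWN_CLASSIFICATIONS = {
    "backup_agent": 10,    # background work gets a lower priority
    "video_encoder": 5,
}
DEFAULT_PRIORITY = 0               # default resource classification
CPU_CONSUMPTION_THRESHOLD = 50.0   # the "global parameter", in percent CPU


def classify_process(proc: psutil.Process) -> int:
    """Before execution: match the process against known classifications and
    apply the associated priority, falling back to the default otherwise."""
    priority = KNOWN_CLASSIFICATIONS.get(proc.name(), DEFAULT_PRIORITY)
    proc.nice(priority)
    return priority


def rebalance(proc: psutil.Process) -> None:
    """At runtime: compare consumption with the global parameter and raise
    the priority of a process consuming less than the threshold."""
    usage = proc.cpu_percent(interval=0.1)   # short sampling window
    if usage < CPU_CONSUMPTION_THRESHOLD:
        # Raise priority by one step (decrease niceness). On POSIX systems
        # decreasing niceness typically requires elevated rights; the
        # AccessDenied that may result is caught by the caller below.
        proc.nice(max(proc.nice() - 1, 0))
        # A corresponding memory step (e.g. adjusting a working-set or
        # cgroup limit) would follow here; it is omitted because it is
        # platform specific.


if __name__ == "__main__":
    for proc in psutil.process_iter(attrs=["name"]):
        try:
            classify_process(proc)
            rebalance(proc)
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue   # skip processes that exited or cannot be adjusted
```
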
US16/734,667 2016-01-27 2020-01-06 Computer processing system with resource optimization and associated methods Abandoned US20200142736A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/734,667 US20200142736A1 (en) 2016-01-27 2020-01-06 Computer processing system with resource optimization and associated methods

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201662287638P 2016-01-27 2016-01-27
US15/416,090 US10528387B2 (en) 2016-01-27 2017-01-26 Computer processing system with resource optimization and associated methods
US16/734,667 US20200142736A1 (en) 2016-01-27 2020-01-06 Computer processing system with resource optimization and associated methods

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/416,090 Continuation US10528387B2 (en) 2016-01-27 2017-01-26 Computer processing system with resource optimization and associated methods

Publications (1)

Publication Number Publication Date
US20200142736A1 true US20200142736A1 (en) 2020-05-07

Family

ID=59360653

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/416,090 Active 2037-12-03 US10528387B2 (en) 2016-01-27 2017-01-26 Computer processing system with resource optimization and associated methods
US16/734,667 Abandoned US20200142736A1 (en) 2016-01-27 2020-01-06 Computer processing system with resource optimization and associated methods

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/416,090 Active 2037-12-03 US10528387B2 (en) 2016-01-27 2017-01-26 Computer processing system with resource optimization and associated methods

Country Status (1)

Country Link
US (2) US10528387B2 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7089294B1 (en) * 2000-08-24 2006-08-08 International Business Machines Corporation Methods, systems and computer program products for server based type of service classification of a communication request
US7761873B2 (en) * 2002-12-03 2010-07-20 Oracle America, Inc. User-space resource management
US7448037B2 (en) * 2004-01-13 2008-11-04 International Business Machines Corporation Method and data processing system having dynamic profile-directed feedback at runtime
US9098333B1 (en) * 2010-05-07 2015-08-04 Ziften Technologies, Inc. Monitoring computer process resource usage
JP5767480B2 (en) * 2011-01-31 2015-08-19 インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation Information processing apparatus, information processing system, arrangement configuration determining method, program, and recording medium
US9329901B2 (en) * 2011-12-09 2016-05-03 Microsoft Technology Licensing, Llc Resource health based scheduling of workload tasks
US20160224258A1 (en) * 2015-02-02 2016-08-04 Microsoft Technology Licensing, Llc Generating computer programs for use with computers having processors with dedicated memory

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220405412A1 (en) * 2021-06-21 2022-12-22 Microsoft Technology Licensing, Llc Configuration of default sensitivity labels for network file storage locations
US11783073B2 (en) * 2021-06-21 2023-10-10 Microsoft Technology Licensing, Llc Configuration of default sensitivity labels for network file storage locations
US20230030132A1 (en) * 2021-08-02 2023-02-02 Samsung Electronics Co., Ltd. Application optimization method and apparatus supporting the same

Also Published As

Publication number Publication date
US20170212790A1 (en) 2017-07-27
US10528387B2 (en) 2020-01-07

Similar Documents

Publication Publication Date Title
US7010596B2 (en) System and method for the allocation of grid computing to network workstations
US9727372B2 (en) Scheduling computer jobs for execution
US10423451B2 (en) Opportunistically scheduling and adjusting time slices
US8195859B2 (en) Techniques for managing processor resource for a multi-processor server executing multiple operating systems
US11010199B2 (en) Efficient critical thread scheduling for non-privileged thread requests
US7992151B2 (en) Methods and apparatuses for core allocations
JP4558661B2 (en) Computer system and method for transferring executable programs between partitions
US8516462B2 (en) Method and apparatus for managing a stack
US9582337B2 (en) Controlling resource consumption
US9875141B2 (en) Managing pools of dynamic resources
US20130111489A1 (en) Entitlement vector for managing resource allocation
US20130111491A1 (en) Entitlement vector with resource and/or capabilities fields
JP2012133778A (en) System, method and program for run-time allocation of functions to hardware accelerator
US20130055283A1 (en) Workload Performance Control
KR20130101693A (en) Method and apparatus for power management in virtualization system using different operation system
CN110659499A (en) Techniques for cache-side channel attack detection and mitigation
US8677360B2 (en) Thread-related actions based on historical thread behaviors
US20200142736A1 (en) Computer processing system with resource optimization and associated methods
US9135064B2 (en) Fine grained adaptive throttling of background processes
US8024738B2 (en) Method and system for distributing unused processor cycles within a dispatch window
JPWO2010089808A1 (en) Virtual computer allocation method, allocation program, and information processing apparatus having virtual computer environment
WO2023205926A1 (en) Performance-aware smart framework to improve cpu power efficiency in static display read mode
CN115390983A (en) Hardware resource allocation method, device, equipment and storage medium for virtual machine
CN116303132A (en) Data caching method, device, equipment and storage medium
CN114691279A (en) Resource scheduling method, device and storage medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: CITRIX SYSTEMS, INC., FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MARMIGNON, PIERRE;REEL/FRAME:051451/0155

Effective date: 20170126

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

AS Assignment

Owner name: WILMINGTON TRUST, NATIONAL ASSOCIATION, DELAWARE

Free format text: SECURITY INTEREST;ASSIGNOR:CITRIX SYSTEMS, INC.;REEL/FRAME:062079/0001

Effective date: 20220930

AS Assignment

Owner name: WILMINGTON TRUST, NATIONAL ASSOCIATION, AS NOTES COLLATERAL AGENT, DELAWARE

Free format text: PATENT SECURITY AGREEMENT;ASSIGNORS:TIBCO SOFTWARE INC.;CITRIX SYSTEMS, INC.;REEL/FRAME:062113/0470

Effective date: 20220930

Owner name: GOLDMAN SACHS BANK USA, AS COLLATERAL AGENT, NEW YORK

Free format text: SECOND LIEN PATENT SECURITY AGREEMENT;ASSIGNORS:TIBCO SOFTWARE INC.;CITRIX SYSTEMS, INC.;REEL/FRAME:062113/0001

Effective date: 20220930

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: PATENT SECURITY AGREEMENT;ASSIGNORS:TIBCO SOFTWARE INC.;CITRIX SYSTEMS, INC.;REEL/FRAME:062112/0262

Effective date: 20220930

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

AS Assignment

Owner name: CLOUD SOFTWARE GROUP, INC. (F/K/A TIBCO SOFTWARE INC.), FLORIDA

Free format text: RELEASE AND REASSIGNMENT OF SECURITY INTEREST IN PATENT (REEL/FRAME 062113/0001);ASSIGNOR:GOLDMAN SACHS BANK USA, AS COLLATERAL AGENT;REEL/FRAME:063339/0525

Effective date: 20230410

Owner name: CITRIX SYSTEMS, INC., FLORIDA

Free format text: RELEASE AND REASSIGNMENT OF SECURITY INTEREST IN PATENT (REEL/FRAME 062113/0001);ASSIGNOR:GOLDMAN SACHS BANK USA, AS COLLATERAL AGENT;REEL/FRAME:063339/0525

Effective date: 20230410

Owner name: WILMINGTON TRUST, NATIONAL ASSOCIATION, AS NOTES COLLATERAL AGENT, DELAWARE

Free format text: PATENT SECURITY AGREEMENT;ASSIGNORS:CLOUD SOFTWARE GROUP, INC. (F/K/A TIBCO SOFTWARE INC.);CITRIX SYSTEMS, INC.;REEL/FRAME:063340/0164

Effective date: 20230410

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION