GB2492352A - Optimising application performance on basis of priority - Google Patents
- Publication number
- GB2492352A
- Authority
- GB
- United Kingdom
- Prior art keywords
- text
- applications
- software
- computer readable
- computer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/545—Interprogram communication where tasks reside in different layers, e.g. user- and kernel-space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/54—Indexing scheme relating to G06F9/54
- G06F2209/542—Intercept
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Stored Programmes (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
A method of improving throughput and latency of a plurality of applications that are running on a computer or a computer network is provided herein. The method is implemented by an optimization software engine that is operating according to stored data. The optimisation engine may run in a user mode, kernel mode or as firmware. In the method data associated with a priority level of an application specified by the user is stored. The method can then reorder at least one of the following application requests: (i) input and output; and (ii) computing resource allocation using the priority level data. The method will also modify algorithms of at least one of: (i) operating system scheduling; (ii) queuing; and (iii) computing resource allocation. The reordering step is performed by intermediate device drivers with the algorithm modification being performed by a software patch. The method may also collect further data on the usage of the computing resources to create a profile which can then be optimised for an application. The optimised profile can be changed or viewed over a local or remote interface.
Description
OPTIMIZING SOFTWARE APPLICATIONS PERFORMANCES BY
PRIORITIZING THEREOF
BACKGROUND
1. TECHNICAL FIELD
[0001] The present invention relates generally to application performance improvement. More particularly, the present invention improves the latency and throughput of operations that are performed by applications. The present invention may be implemented either in software, as a set of intermediate device drivers, programs, and operating system modifications, or in hardware for high-performance storage and networking appliances.
2. DISCUSSION OF RELATED ART
[0002] Several solutions exist in the market for application performance improvement. Some of the solutions utilize high-performance servers and hardware Information Technology (IT) appliances, which are sold at a high price point. Other solutions include software products that optimize the performance of specific software, such as Structured Query Language (SQL) databases or Java applications. Yet other solutions offer low-cost products that use techniques such as shutting off unneeded services or defragmenting disks and the registry. Other solutions reduce boot time or offer conflict resolution techniques to fix specific problems on a computer.
[0003] Yet other solutions in the existing art tweak the operations of the operating system to achieve application performance improvement. However, none of the existing art incorporates the main methods that are in use in the present invention, namely, reordering input-output operations by an originating application and modifying operating system scheduling, queuing and allocation algorithms.
[0004] As software grows more complex and feature-rich, its demands for computing resources continually outpace improvements in hardware performance. On the one hand, many computers are outfitted with many software packages aimed at providing productivity, connectivity and security. On the other hand, operating systems add a large variety of built-in services to suit the diverse needs of different customers. As a result, performance on critical tasks is muddled, user experience may be reduced and productivity may be damaged.
BRIEF SUMMARY
[0005] Embodiments of the present invention provide methods and systems for prioritizing task performance and computing resource allocation according to requirements of a customer. Additionally, the present invention leverages operating system modifications to boost application performance. The resulting application performance improvement provides better value for money than the improvement that may be achieved by a hardware upgrade. Additionally, the present invention provides a steady and permanent improvement in the performance of the applications.
[0006] These, additional, and/or other aspects and/or advantages of the present invention are: set forth in the detailed description which follows; possibly inferable from the detailed description; and/or learnable by practice of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The present invention will be more readily understood from the detailed description of embodiments thereof made in conjunction with the accompanying drawings of which: Figure 1 is a diagram, illustrating components of the optimization engine, according to some embodiments of the invention; Figure 2A is a diagram illustrating a storage access, according to existing art; Figure 2B is a diagram illustrating a prioritized storage access, according to some embodiments of the invention; Figure 3A is a diagram illustrating disk allocation, according to existing art; Figure 3B is a diagram illustrating prioritized disk allocation, according to some embodiments of the invention; Figure 4A is a diagram illustrating memory algorithm replacement, according to existing art; Figure 4B is a diagram illustrating memory algorithm replacement including a replacement library, according to some embodiments of the invention; and Figure 5 is a diagram illustrating virtualized systems and cloud computing where the optimization engine may be implemented, according to some embodiments of the invention.
DETAILED DESCRIPTION
[0008] Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is applicable to other embodiments or capable of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
[0009] For a better understanding of the invention, the usages of the following terms in the present disclosure are defined in a non-limiting manner: [0010] The term "latency" as used herein in this application, is defined as the measure of time delay experienced in a system.
[0011] The term "throughput" as used herein in this application, is defined as the amount of work that a computer-based system can do in a given time period.
[0012] The term "originating application" as used herein in this application, is defined as the application that issues a request for an input-output operation.
[0013] The term "I/O" as used herein in this application, is defined as input-output communication, such as a data read or write request, between the components of an information processing system, and between an information processing system and an outside entity.
[0014] The term "patching" as used herein in this application, is defined as fixing or replacing software by using a different piece of software, whether in persistent storage (on disk) or in volatile storage (in memory).
[0015] The term "Central Process Unit (CPU)" as used herein in this application, is defined as a component of a computer system that carries out instructions of a computer program.
[0016] The term "thread" as used herein in this application, is defined as the smallest unit of processing that can be scheduled by an operating system. Threads generally result from a fork of a computer program into two or more concurrently running tasks.
[0017] The term "waking a thread" as used herein in this application, is defined as a situation when a thread resumes its operation after an inactive state. The thread resumes after the expiration of a time interval or the reception of an interrupt or a signal.
[0018] The term "context switch" as used herein in this application, is defined as the computing process of storing and restoring the state, i.e. the context, of a CPU so that execution can be resumed from the same point at a later time.
[0019] The term "cache" as used herein in this application, is defined as any component that transparently stores data for future requests in a memory layer of higher performance than the larger associated data storage, thus accelerating these requests.
[0020] The term "storage hierarchy" as used herein in this application, is defined as an arrangement of layers in which each layer may provide faster response for storage requests and smaller storage than the layer below it, e.g. CPU registers, CPU memory caches, main memory, disk caches, local disk drives and network area storage.
Further, each layer performs as a cache for the layer below it.
[0021] The term "preemptive multitasking" as used herein in this application, is defined as a style of computer multitasking in which the operating system initiates a context switch from a running process to another process.
[0022] The term "memory pool" as used herein in this application, is defined as a data storage area allocated in advance from a storage area for the purposes of specific applications.
[0023] The term "mailslot" as used herein in this application, is a type of interprocess communication that allows communication between processes both locally and over a network.
[0024] The term "file" as used herein in this application, is defined as a block of arbitrary information, or resource for storing information which is available to a computer program. Operations in this application which refer to files also refer to other objects the operating system may expose as files, such as mailslots, pipes, and completion ports.
[0025] The term "Deferred Procedure Call (DPC)" as used herein in this application, is defined as an operating system mechanism which allows low-level operating system tasks such as interrupt service routines to queue associated tasks that require higher-level operating system processing tasks for later execution.
[0026] The term "process" as used herein in this application, is defined as an instance of a computer program that is being executed.
[0027] The term "foreground-background" as used herein in this application, is defined as windowing status of an application. A foreground application has its window displayed on the topmost layer on a display of a user, while a background application may have its window concealed, obscured or minimized. Operating system queuing algorithms typically favor foreground windows slightly.
[0028] The term "Direct Memory Access (DMA)" as used herein in this application, is defined as a feature of computers that allows hardware subsystems within the computer to access system memory for reading and/or writing independently of the CPU.
[0029] The term "peripheral device" as used herein in this application, is defined as a computer device that is not part of the essential computer, e.g. mouse, keyboard, printer and CD-ROM drive, external and removable storage, and pluggable communication devices such as Bluetooth, FireWire and WiFi adapters, and more.
[0030] The term "bus" as used herein in this application, is defined as a subsystem that transfers data between computer components inside a computer or between computers.
[0031] The term "disk controller" as used herein in this application, is defined as the circuit which allows the CPU to communicate with any kind of disk drive, such as a hard disk or a floppy disk.
[0032] The term "intermediate driver" as used herein in this application, is defined as a driver that occupies an intermediate layer in the driver hierarchy and communicates between other drivers and operating system subsystems, as opposed, for example, to class drivers which directly operate hardware devices.
[0033] The term "add-on" as used herein in this application, is defined as a piece of software which enhances another software application and generally may not run independently.
[0034] The term "windowing objects", as used herein in this application, is defined as the objects that are involved in the presentation of the graphical user interface to the user, and include window objects and graphical widgets as well as objects, handles and messages used in the graphical user interface subsystem of the operating system.
[0035] The term "plug-and-play", as used herein in this application, is defined as a capability of new hardware that when added to an existing computer may be automatically detected and configured.
[0036] The term "starvation", as used herein in this application, is defined as a situation in which one or more programs is waiting for resources that are occupied by other programs, which may or may not be in the same set of programs that are starving.
[0037] Embodiments of the present invention provide methods and systems to improve performance of selected applications using the following innovative methods: (i) reordering I/O requests of applications and resource allocation requests; (ii) modifying operating system scheduling, queuing, and resource allocation algorithms. The reordering is done by using intermediate device drivers, also known as filter drivers, as well as other device drivers and firmware where applicable. The modification of the OS algorithms is done by means of modifying software in place, known in the art as patching. Moreover, the present invention uses methods already known in the art, such as allocating more computing resources, allocating preferred, namely faster but scarce, resources, allocating resources in advance, and the like.
[0038] Figure 1 is a diagram illustrating components of an optimization engine, according to some embodiments of the invention. Components 110-160 may be handled by: (i) reordering; (ii) resource allocation; and (iii) algorithm modification.
[0039] According to an aspect of the present invention, component 110 is arranged to handle Central Processing Unit (CPU) operations and thread scheduling. In a non-limiting example of algorithm modification, operating system algorithms may be modified to (i) increase thread scheduling priority; and (ii) increase the thread quantum. In a non-limiting example of reordering: (i) prioritize in the process of acquiring synchronization resources; (ii) prioritize in the process of waking threads that are waiting for synchronization resources; and (iii) prevent preemptive multitasking by lower priority processes during access to synchronization resources, a situation known as priority inversion.
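The effect of increasing a thread's quantum in proportion to its priority can be illustrated with a toy simulation. This is a hedged sketch, not the patented mechanism: the function name `run_round_robin` and the constant `BASE_QUANTUM` are invented for illustration, and a real scheduler operates on live threads rather than tuples.

```python
# Illustrative sketch (assumption, not the patented implementation):
# a simulated round-robin scheduler in which a task's priority level
# enlarges its time quantum, so a high-priority task finishes ahead of
# an equally sized low-priority one.

from collections import deque

BASE_QUANTUM = 2  # ticks granted to a priority-1 task per turn

def run_round_robin(tasks):
    """tasks: iterable of (name, remaining_ticks, priority) tuples.
    Returns the order in which tasks finish."""
    queue = deque(tasks)
    finished = []
    while queue:
        name, remaining, priority = queue.popleft()
        quantum = BASE_QUANTUM * priority   # higher priority -> longer quantum
        remaining -= min(quantum, remaining)
        if remaining == 0:
            finished.append(name)
        else:
            queue.append((name, remaining, priority))
    return finished

# Two tasks of equal size: the priority-3 task completes first because
# each of its turns burns three times as many ticks.
order = run_round_robin([("low", 6, 1), ("high", 6, 3)])
```

Here `order` comes out as `["high", "low"]`: the longer quantum lets the high-priority task drain its work in a single turn.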
[0040] According to an aspect of the present invention, component 120 is arranged to handle memory operations including caches. In a non-limiting example of resource allocation, pre-allocating and maintaining memory pools for the use of high-priority applications. In another non-limiting example of algorithm modification: (i) replacing default memory allocation algorithms with optimized algorithms; (ii) changing memory allocation and release routines to perform asynchronously.
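The pre-allocated pool idea above can be sketched as follows. This is an assumption-laden illustration (the class name `MemoryPool` is invented, and a kernel-level pool would hand out raw pages rather than `bytearray` objects): all blocks are reserved up front, so a high-priority application's later requests never contend with the general-purpose allocator.

```python
# Illustrative sketch (not the patented implementation): a pool of
# fixed-size buffers pre-allocated for a high-priority application.

class MemoryPool:
    def __init__(self, block_size, block_count):
        # Allocate every block in advance; later acquire() calls are
        # O(1) list pops with no system allocation on the hot path.
        self._free = [bytearray(block_size) for _ in range(block_count)]

    def acquire(self):
        if not self._free:
            raise MemoryError("pool exhausted")
        return self._free.pop()

    def release(self, block):
        self._free.append(block)

pool = MemoryPool(block_size=4096, block_count=8)
buf = pool.acquire()   # served from the reserved pool
pool.release(buf)      # returned for reuse, never freed to the system
```

Exhausting the pool raises `MemoryError` rather than silently falling back, mirroring the idea that the reserved resource belongs exclusively to the prioritized application.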
[0041] According to an aspect of the present invention, component 130 is arranged to handle storage file operations and disk system operations. In a non-limiting example of storage file operations, by changing file operation order, namely, reordering. In another non-limiting example of resource allocation: (i) file caching: larger memory allocation, allocation from faster memory, preference during memory eviction; (ii) proactive loading of file content (i.e. pre-fetching) and proactive file decompression. In yet another non-limiting example of algorithm modification, file caching: caching algorithm enhancement and parallelization for write-back operations.
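The proactive loading (pre-fetching) example above can be sketched with a small cache. This is a hypothetical illustration (the names `PrefetchCache` and `slow_reader`, and the file name `config.dat`, are invented): content is pulled into memory ahead of demand, so the application's later read is served without touching storage.

```python
# Illustrative sketch (assumed names): pre-fetching file content into a
# memory cache before the application asks for it.

class PrefetchCache:
    def __init__(self, reader):
        self._reader = reader   # function: path -> bytes (the slow path)
        self._cache = {}

    def prefetch(self, path):
        # Load proactively, e.g. while the system is otherwise idle.
        self._cache[path] = self._reader(path)

    def read(self, path):
        # A cache hit avoids the slow storage access entirely.
        if path in self._cache:
            return self._cache[path]
        return self._reader(path)

reads = []
def slow_reader(path):
    reads.append(path)                 # record each real storage access
    return b"data:" + path.encode()

cache = PrefetchCache(slow_reader)
cache.prefetch("config.dat")           # one storage access, done in advance
data = cache.read("config.dat")        # served from memory
```

After the prefetch, repeated reads of the same path cost no further storage accesses; `reads` records exactly one.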
[0042] According to another aspect of the present invention, in a non-limiting example of disk system operations, by disk access prioritization, namely, reordering. In another non-limiting example of resource allocation: (i) relocation of files to faster I/O areas; (ii) allocation and reservation of free disk space from faster I/O areas. In yet another non-limiting example of algorithm modification, disk space allocation and de-allocation prioritization.
[0043] According to yet another aspect of the present invention, component 140 is arranged to handle system objects such as registry and windowing system objects. In a non-limiting example of reordering of registry operations, operation prioritization.
In a non-limiting example of resource allocation: registry contents pre-fetching and caching. In a non-limiting example of algorithm modification, changing write operations to use asynchronous background writes.
[0044] In a non-limiting example of algorithm modification: (i) changing windowing message sending algorithms to send messages directly to the window objects associated with high-priority applications; and (ii) changing windowing object allocation algorithms to reserve windowing object quotas and reorder windowing object allocation requests to prioritize high-priority applications.
[0045] In a non-limiting example of prioritization: reordering windowing messages to prioritize high-priority applications.
[0046] According to yet another aspect of the present invention, in a non-limiting example of reordering of kernel operations: (i) I/O and DPC request prioritization; (ii) DMA request prioritization. In a non-limiting example of resource allocation, increasing kernel resource limits. In a non-limiting example of algorithm modification, plug-and-play subsystem device relationship handling prioritization.
[0047] According to yet another aspect of the present invention, component 150 is arranged to handle network modules such as protocol stack and drivers. In a non-limiting example of reordering: (i) packet prioritization; (ii) socket operations prioritization; (iii) channel and protocol prioritization. In a non-limiting example of algorithm modification, preference to chosen applications when dropping packets. In a non-limiting example of resource allocation, larger memory allocation for network buffers to chosen applications.
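The "preference to chosen applications when dropping packets" idea can be sketched with a bounded transmit queue. This is a hedged illustration (the class name `PriorityTxQueue` and the numeric priority levels are invented; a real network stack drops at the driver or protocol layer): when the buffer overflows, the lowest-priority packet is discarded first.

```python
# Illustrative sketch (assumed names and priority values): a bounded
# send queue that, when full, drops the lowest-priority packet and
# transmits in priority order.

import heapq

class PriorityTxQueue:
    def __init__(self, capacity):
        self._capacity = capacity
        self._heap = []      # min-heap keyed on (priority, arrival order)
        self._counter = 0    # tie-breaker preserves FIFO within a level

    def enqueue(self, priority, packet):
        self._counter += 1
        heapq.heappush(self._heap, (priority, self._counter, packet))
        if len(self._heap) > self._capacity:
            # Overflow: evict the root, i.e. the lowest-priority packet.
            heapq.heappop(self._heap)

    def drain(self):
        # Transmit highest priority first, FIFO within equal priorities.
        ordered = sorted(self._heap, key=lambda t: (-t[0], t[1]))
        return [packet for _, _, packet in ordered]

q = PriorityTxQueue(capacity=2)
q.enqueue(1, "low")
q.enqueue(3, "high")
q.enqueue(2, "med")    # queue full: the priority-1 packet is dropped
sent = q.drain()       # high-priority packet leaves first
```

With capacity 2, the third enqueue evicts the priority-1 packet, so `sent` contains only the chosen applications' traffic, highest priority first.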
[0048] According to yet another aspect of the present invention, component 160 is arranged to handle peripheral device access. In a non-limiting example of reordering: (i) operation prioritization for human interface devices; and (ii) bus communication priority for peripheral devices. In a non-limiting example of resource allocation, allocation and reservation of memory buffers for bus communication and DMA communication.
[0049] Figure 2A is a diagram illustrating a storage access, according to existing art. An I/O request 242 from a low priority process 210A, an I/O request 241 from a medium priority process 220A and an I/O request 240 from a high priority process 230A, sent to the disk controller 250A, are processed in an arbitrary order on the disk controller.
[0050] Figure 2B is a diagram illustrating a prioritized storage access, according to some embodiments of the invention. In prioritized storage access, I/O requests sent from a process may be reordered according to the level of priority of the process. An I/O request 242 from a low priority process 210B, an I/O request 241 from a medium priority process 220B and an I/O request 240 from a high priority process 230B are processed by disk controller 250B in the order of their priority, taking into account starvation prevention. Reordering is performed by intermediate device drivers, namely, filter drivers. Filter drivers are classified either as file system filter drivers 260 or as disk filter drivers 270. At least one of file system filter drivers 260 and disk filter drivers 270 may be used in different embodiments of the present invention.
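The combination of priority ordering and starvation prevention described for Figure 2B can be sketched with an aging dispatch queue. This is an assumption, not the filter-driver implementation: the class name `IoDispatchQueue` and the aging rule are invented for illustration; real drivers operate on I/O request packets, not strings.

```python
# Illustrative sketch (assumed names): an I/O dispatch queue that serves
# requests in priority order while "aging" waiting requests, so a
# low-priority request is eventually served (starvation prevention).

class IoDispatchQueue:
    AGING_BOOST = 1   # effective priority gained per dispatch round waited

    def __init__(self):
        self._pending = []   # entries: [effective_priority, request]

    def submit(self, priority, request):
        self._pending.append([priority, request])

    def dispatch(self):
        # Serve the highest effective priority, then age everyone still
        # waiting so long-waiting requests climb toward the front.
        best = max(self._pending, key=lambda entry: entry[0])
        self._pending.remove(best)
        for entry in self._pending:
            entry[0] += self.AGING_BOOST
        return best[1]

q = IoDispatchQueue()
q.submit(1, "low-priority read")
q.submit(3, "high-priority read")
first = q.dispatch()   # the high-priority request is served first
```

High-priority requests jump the queue, but each dispatch round raises the effective priority of everything still waiting, so even a stream of high-priority arrivals cannot starve the low-priority request indefinitely.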
[0051] Figure 3A is a diagram illustrating disk allocation, according to existing art, in which data blocks are positioned on disk regardless of the priority level of the applications associated with them; data block 311 associated with a high priority level application is located in a high processing speed area 310A next to data block 312 which is associated with a low priority level process, and next to it data block 313 which is associated with a medium priority level process, while high priority data block 314 and medium level priority data block 315 are positioned in a low processing speed area. Existing art operates two priority levels only.
[0052] Figure 3B is a diagram illustrating prioritized disk allocation according to some embodiments of the invention; High priority data blocks 311 and 314 are positioned in the highest processing speed area and medium priority data blocks 312 and 313 are positioned in the next highest processing speed area available, while all low priority data blocks are positioned in the next highest processing speed area available. According to some embodiments of the invention any number of priority levels may be configured. Also, there may be several applications in each level of priority.
[0053] Figure 4A is a diagram illustrating memory algorithm operation, according to existing art. A process memory address space 410 contains code segment 420 and data segment 460, the code segment containing the process code, including a memory allocation request 430. The memory allocation request is served by a memory allocation routine 450A provided by the runtime service library 440 loaded into the process memory, resulting in the allocation and placement of data objects 461, 462 and 463 in the data segment.
[0054] Figure 4B is a diagram illustrating algorithm replacement, according to some embodiments of the invention, using the memory algorithm operation as an example.
A replacement library 470 is loaded into the process memory and any calls to the memory allocation routines of runtime service library 440 are rewritten with calls to replacement allocation routine 450B, using code rewriting operations known as patching. Allocation routine 450B may be faster than allocation routine 450A, resulting in better performance.
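The routine replacement of Figure 4B has a compact analogue in Python, where rebinding a library attribute redirects every caller without touching their code. This is only an analogue of in-memory patching, not the machine-code rewriting the patent describes, and all names here (`runtime_library`, `default_allocate`, `replacement_allocate`) are hypothetical stand-ins for routines 450A and 450B.

```python
# Illustrative analogue (assumed names): redirecting callers from a
# default allocation routine to a replacement one, as in Figure 4B.

import types

runtime_library = types.SimpleNamespace()

def default_allocate(size):
    # Stand-in for the default allocation routine (450A in the figure).
    return {"size": size, "via": "default"}

runtime_library.allocate = default_allocate

def replacement_allocate(size):
    # Stand-in for the optimized replacement routine (450B), e.g. one
    # serving requests from a pre-allocated pool.
    return {"size": size, "via": "replacement"}

# "Patch": every caller that goes through runtime_library.allocate now
# reaches the replacement routine, with no change to the callers' code.
runtime_library.allocate = replacement_allocate

obj = runtime_library.allocate(64)
```

Binary patching achieves the same indirection by rewriting call sites or import tables in the loaded image; the observable effect is identical, as the caller receives objects produced by the replacement routine.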
[0055] As shown above, a user may specify a level of priority for applications to boost productivity. The productivity is achieved by focusing the resources of a computer on applications given high priority. As a result, the throughput and latency of I/O operations, and thus the performance of the applications, improve to a great extent.
[0056] Figure 5 is a diagram illustrating virtualized systems and cloud computing where the optimization engine may be implemented, according to some embodiments of the invention. The present invention may be implemented on a cloud computing center 520, on a virtual machine 530 provided by that center, and on a client 510 connecting to that virtual machine. The throughput and latency are improved for computing operations in the computing center, on the virtual machine, and on the client, in addition to an improvement in the throughput and latency on the network connecting the client to the virtual machine, thus increasing the total benefit for the user. The present invention may be beneficial for cloud computing providers, since it may reduce the amount of hardware resources that are allocated to customers.
[0057] In the above description, an embodiment is an example or implementation of the invention. The various appearances of "one embodiment", "an embodiment" or "some embodiments" do not necessarily all refer to the same embodiments.
[0058] Although various features of the invention may be described in the context of a single embodiment, the features may also be provided separately or in any suitable combination. Conversely, although the invention may be described herein in the context of separate embodiments for clarity, the invention may also be implemented in a single embodiment.
[0059] Furthermore, it is to be understood that the invention can be carried out or practiced in various ways and that the invention can be implemented in embodiments other than the ones outlined in the description above.
[0060] The invention is not limited to those diagrams or to the corresponding descriptions. For example, flow need not move through each illustrated box or state, or in exactly the same order as illustrated and described.
[0061] Meanings of technical and scientific terms used herein are to be commonly understood as by one of ordinary skill in the art to which the invention belongs, unless otherwise defined.
Claims (1)
- <claim-text>CLAIMS What is claimed is: 1. A method of improving throughput and latency of a plurality of applications that are running on a computer or a computer network, wherein the applications are competing over computing resources, by an optimization engine running in at least one of: (i) user-mode; (ii) kernel-mode; and (iii) firmware, the method comprising: storing data associated with a priority level of applications specified by a user; reordering requests of applications comprising at least one of: (i) input-output; and (ii) computing resource allocation by the data in the storing; and modifying algorithms selected from a group comprising at least one of: (i) operating system scheduling; (ii) queuing; and (iii) computing resource allocation, based on the stored data, associated with the priority level, to yield the optimization of the application modules, wherein the reordering is carried out by intermediate device drivers, and wherein the modifying is carried out by a software patch.</claim-text> <claim-text>2. The method according to claim 1, further collecting data related to usage of computing resources by the applications and creating a profile.</claim-text> <claim-text>3. The method according to claim 2, further creating and storing an optimal configuration of the software according to the profile.</claim-text> <claim-text>4. The method according to claim 3, further changing the configuration and viewing the profile via at least one of: (i) local interface; and (ii) remote interface.</claim-text> <claim-text>5. The method according to claim 1, wherein the reordering and the modifying is carried out in at least one of: user-mode, kernel-mode, and firmware.</claim-text> <claim-text>6. The method according to claim 1, further allocating computing resources in accordance with the priority level, allocating or reserving larger amounts of resources or higher performing resources to applications in a higher priority level.</claim-text> <claim-text>7. 
A software module for improving throughput and latency of a plurality of applications that are running on a computer or a computer network, wherein the applications are competing over computing resources, the software module consisting of an optimization engine associated with a configuration storage, the optimization engine comprising: intermediate device drivers; and a plurality of software patches; wherein the intermediate device drivers are arranged to reorder at least one of computing requests of a plurality of applications: (i) input and output; (ii) computing resource allocation, according to a priority level of applications specified by the user, wherein the plurality of software patches is arranged to modify algorithms of at least one of: (i) operating system scheduling; (ii) queuing; and (iii) computing resource allocation, according to a priority level of applications specified by the user, and wherein the intermediate device drivers and the plurality of software patches are operatively associated with one processor.</claim-text> <claim-text>8. The method according to claim 1, wherein the software is installed on a multi-processing operating system.</claim-text> <claim-text>9. The method according to claim 1, wherein the software is installed as an add-on to a file system.</claim-text> <claim-text>10. The method according to claim 1, wherein the software is installed as an add-on to a storage stack.</claim-text> <claim-text>11. The method according to claim 1, wherein the software is installed on at least one of: host, guest, and client systems in virtualized systems.</claim-text> <claim-text>12. The method according to claim 1, wherein the software is installed on at least one of: host, guest, and client systems in cloud computing systems.</claim-text> <claim-text>13. The method according to claim 1, wherein the software is installed on a mobile phone.</claim-text> <claim-text>14. 
A computer readable storage medium for improving throughput and latency of a plurality of applications that are running on a computer or a computer network, wherein the applications are competing over computing resources, by optimizing at least one of application modules running in: (i) user-mode; (ii) kernel-mode; and (iii) firmware, the computer readable storage medium having a computer readable program embodied therewith, the computer readable program comprising: a computer readable program configured to store data associated with a priority level of applications specified by a user; a computer readable program configured to reorder application requests comprising at least one of: (i) input-output; and (ii) computing resource allocation by the data in the storing; and a computer readable program configured to modify algorithms selected from a group comprising at least one of: (i) operating system scheduling; (ii) queuing; and (iii) computing resource allocation, based on the stored data, associated with the priority level, to yield the optimization of the application modules, wherein the reordering of application requests is carried out by intermediate device drivers, and wherein the modification of algorithms is carried out by a software patch.</claim-text> <claim-text>15. The computer readable storage medium according to claim 12, further comprising a computer readable program configured to collect data related to usage of computing resources by the applications and creating a profile.</claim-text> <claim-text>16. The computer readable storage medium according to claim 12, further comprising a computer readable program configured to store an optimal configuration of the software according to the profile.</claim-text> <claim-text>17. 
The computer readable storage medium according to claim 12, further comprising a computer readable program configured to change the configuration and view the profile via at least one of: (i) local interface; and (ii) remote interface.</claim-text> <claim-text>18. The computer readable storage medium according to claim 12, further comprising a computer readable program configured to allocate computing resources in advance.</claim-text>
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB201110997A GB2492352A (en) | 2011-06-29 | 2011-06-29 | Optimising application performance on basis of priority |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201110997D0 GB201110997D0 (en) | 2011-08-10 |
GB2492352A true GB2492352A (en) | 2013-01-02 |
Family
ID=44485317
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB201110997A Withdrawn GB2492352A (en) | 2011-06-29 | 2011-06-29 | Optimising application performance on basis of priority |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2492352A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6021425A (en) * | 1992-04-03 | 2000-02-01 | International Business Machines Corporation | System and method for optimizing dispatch latency of tasks in a data processing system |
US20020078028A1 (en) * | 2000-12-18 | 2002-06-20 | Trevalon Inc. | Network server |
WO2007017296A1 (en) * | 2005-08-08 | 2007-02-15 | International Business Machines Corporation | Application system intelligent optimizer |
US20080065869A1 (en) * | 2006-09-11 | 2008-03-13 | Samsung Electronics Co., Ltd. | Computer system and control method thereof capable of changing performance mode using dedicated button |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103593146A (en) * | 2013-10-21 | 2014-02-19 | 福建升腾资讯有限公司 | Overlay layer space switching method based on disk filtration and overlay layer building method |
CN103593146B (en) * | 2013-10-21 | 2016-04-20 | 福建升腾资讯有限公司 | Based on overlayer space changing method and the tectal construction method of disk filter |
US10514949B1 (en) | 2018-12-11 | 2019-12-24 | Signals Analytics Ltd. | Efficient data processing in a serverless environment |
Also Published As
Publication number | Publication date |
---|---|
GB201110997D0 (en) | 2011-08-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10901802B2 (en) | Method and apparatus for implementing virtual GPU and system | |
US10387202B2 (en) | Quality of service implementation in a networked storage system with hierarchical schedulers | |
US9619287B2 (en) | Methods and system for swapping memory in a virtual machine environment | |
US10467152B2 (en) | Dynamic cache management for in-memory data analytic platforms | |
US9442760B2 (en) | Job scheduling using expected server performance information | |
US7222343B2 (en) | Dynamic allocation of computer resources based on thread type | |
US9513962B2 (en) | Migrating a running, preempted workload in a grid computing system | |
US7849327B2 (en) | Technique to virtualize processor input/output resources | |
US8719831B2 (en) | Dynamically change allocation of resources to schedulers based on feedback and policies from the schedulers and availability of the resources | |
WO2016078178A1 (en) | Virtual cpu scheduling method | |
US8826270B1 (en) | Regulating memory bandwidth via CPU scheduling | |
US8566830B2 (en) | Local collections of tasks in a scheduler | |
EP3796168A1 (en) | Information processing apparatus, information processing method, and virtual machine connection management program | |
US20140208072A1 (en) | User-level manager to handle multi-processing on many-core coprocessor-based systems | |
US20120304171A1 (en) | Managing Data Input/Output Operations | |
US20080229319A1 (en) | Global Resource Allocation Control | |
US20110219373A1 (en) | Virtual machine management apparatus and virtualization method for virtualization-supporting terminal platform | |
AU2013206117A1 (en) | Hierarchical allocation of network bandwidth for quality of service | |
US8291426B2 (en) | Memory allocators corresponding to processor resources | |
EP3598310B1 (en) | Network interface device and host processing device | |
CN110795323A (en) | Load statistical method, device, storage medium and electronic equipment | |
GB2492352A (en) | Optimising application performance on basis of priority | |
US9405470B2 (en) | Data processing system and data processing method | |
US11934890B2 (en) | Opportunistic exclusive affinity for threads in a virtualized computing system | |
US9176910B2 (en) | Sending a next request to a resource before a completion interrupt for a previous request |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |