US20180321966A1 - Efficient detection and response to spin waits in multi-processor virtual machines - Google Patents
Efficient detection and response to spin waits in multi-processor virtual machines
- Publication number
- US20180321966A1
- Authority
- US
- United States
- Prior art keywords
- time slice
- virtual
- instructions
- virtual processor
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
- G06F9/526—Mutual exclusion algorithms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
Definitions
- the presently disclosed subject matter relates to the field of computing, and more particularly, to computer virtualization, although virtualization is merely an exemplary and non-limiting field.
- Operating system kernels provide several mechanisms to synchronize data structures in multi-threaded systems. Many of these mechanisms use a technique called spin waiting, where a thread or processor will spend time in a loop waiting for a particular event to occur before it continues execution. Spin waits are typically used in cases where wait times will be much less than the cost of re-scheduling threads or where the environment is such that the thread scheduler cannot run.
- Examples of synchronization primitives that use this technique include, but are not limited to: spinlocks, queued spinlocks, reader/writer locks, and barriers.
- Well-designed operating systems will minimize the amount of time threads spend in regions of code that lead to these spin wait loops, since the time spent spin waiting is wasted time.
- At best, in the case of hyper-threading, some of a thread's resources are given to another hyper-thread, but such a thread is still blocked from making forward progress.
- Various aspects are disclosed herein for attenuating spin waiting in a virtual machine environment comprising a plurality of virtual machines and virtual processors.
- Selected virtual processors can be given time slice extensions in order to prevent such virtual processors from becoming de-scheduled (and hence causing other virtual processors to have to spin wait).
- Selected virtual processors can also be expressly scheduled so that they can be given higher priority to resources, resulting in reduced spin waits for other virtual processors waiting on such selected virtual processors.
- various spin wait detection techniques can be incorporated into the time slice extension and express scheduling mechanisms, in order to identify potential and existing spin waiting scenarios.
- a system for attenuating spin waiting of virtual processors in a virtual machine environment comprises a processor and a memory communicatively coupled to the processor and storing instructions that upon execution by the processor cause the system to examine a hint source for hints to determine if a virtual processor is accessing a synchronizing section that acquires a lock on an underlying physical resource that causes any other virtual processors to spin wait until the virtual processor releases the lock; provide a time slice extension to the virtual processor to increase the time allotted in its assigned time slice if the virtual processor is accessing the synchronizing section; and end the time slice extension prematurely.
- ending the time slice extension prematurely includes setting an intercept with said hint source and receiving the intercept from the hint source. In the same or another embodiment, ending the time slice extension prematurely includes receiving a signal from an enlightened guest operating system indicating that the synchronizing section has been cleared.
- the hint source, in an illustrative embodiment, is at least one of a task priority register, an unenlightened guest operating system that is inspected, and an enlightened operating system.
- the scheduler may reside in a virtualizing layer configured to provide the time slice extension to the virtual processor. Another illustrative embodiment further comprises limiting the time slice extension to a predetermined period of time. Yet another embodiment further comprises limiting the number of time slice extensions provided to a predetermined number.
- Still another embodiment further comprises debiting the time slice extension provided to the virtual processor such that the virtual processor has less allocated time in a subsequent time slice.
- the lock may be at least one of a spinlock, a queued spinlock, a reader/writer lock, and a barrier; and wherein the synchronizing section comprises a region of code that determines access to resources by virtual processors.
- FIG. 1 illustrates a virtual machine environment, with a plurality of virtual machines, comprising a plurality of virtual processors and corresponding guest operating systems; the virtual machines are maintained by a virtualizing layer which may comprise a scheduler and other components, where the virtualizing layer virtualizes hardware for the plurality of virtual machines;
- FIG. 2 illustrates various spin waiting attenuation mechanisms that reduce spin waiting via time slice extension, express scheduling, and spin wait detection
- FIG. 3 illustrates time slice extension mechanisms in order to attenuate the spin waiting of virtual processors in a virtual machine environment
- FIG. 4 provides a time line regarding how spin waiting can result in virtual processor scheduling congestion
- FIG. 5 shows how with time slice extensions, the problems illustrated in FIG. 4 can be obviated
- FIG. 6 shows that time slice extensions can be controlled to ensure resource parity among virtual processors
- FIG. 7 illustrates the notion of express scheduling so that selected virtual processors are given priority to run
- FIG. 8 provides an example of how express scheduling can work
- FIG. 9 illustrates an enlightened guest operating environment interacting with a virtualization layer in order to reduce spin waits
- FIG. 10 illustrates a computer readable medium bearing computer executable instructions discussed with respect to FIGS. 1-9 , above.
- FIG. 1 illustrates a virtual machine environment 100 , with a plurality of virtual machines 120 , 121 , comprising a plurality of virtual processors 110 , 112 , 114 , 116 , and corresponding guest operating systems 130 , 132 .
- the virtual machines 120 , 121 are maintained by a virtualizing layer 140 which may comprise a scheduler 142 and other components (not shown), where the virtualizing layer 140 virtualizes hardware 150 for the plurality of virtual machines 120 , 121 .
- the plurality of virtual processors 110 , 112 , 114 , 116 can be the virtual counterparts of underlying hardware physical processors 160 , 162 .
- Various optimization mechanisms are disclosed in FIGS. 2-10 in order to efficiently schedule the virtual processors on the physical processors.
- Spin waiting occurs when a thread (or an underlying processor configured to execute a thread) spends time in a loop waiting for a particular event to occur before it continues execution. This may happen when one thread is waiting to acquire a lock on a resource when another thread has acquired (but not yet released) the lock.
- spin waiting can occur upon an attempt to enter a synchronization section that a de-scheduled virtual processor is executing.
- FIG. 2 illustrates various spin waiting attenuation mechanisms that reduce spin waiting via time slice extension 230 , express scheduling 232 , and spin wait detection 234 .
- a computing environment 250 which may include a multi-threading, a super-threading, or a hyper-threading environment, may have a plurality of threads 220 , 222 , 224 to be executed. If one of these threads, such as thread 2 222 , has acquired a spinlock 210 to a resource 212 (e.g. an underlying physical processor 160 , 162 ), the other threads 220 , 224 have to wait until thread 2 222 releases the lock. Thus, thread 1 220 and thread N 224 will spin wait in a loop until thread 2 222 has finished.
- spin attenuation mechanisms 240 include various aspects described in more detail below, but in general terms they include: time slice extension 230 , where virtual processors that have acquired spinlocks are given a time slice extension so that they can finish their task before being de-scheduled; express scheduling 232 , where virtual processors can be prioritized by switching context to such virtual processors so that they can run before any other processors; and, spin wait detection mechanisms 234 that can identify or at least guess at when spin waiting might become an issue, and then engage in either time slice extension 230 and/or express scheduling 232 .
- FIG. 3 illustrates time slice extension mechanisms in order to attenuate the spin waiting of virtual processors in a virtual machine environment.
- the presently disclosed aspects can be implemented as systems, methods, computer executable instructions residing in computer readable media, and so on. Thus, any disclosure of any particular system, method, or computer readable medium is not confined thereto, but rather extends to other ways of implementing the disclosed subject matter.
- a hint source 310 may be examined 331 by a scheduler 142 running in a virtualizing layer 140 (although other software modules may also perform this function independently or in conjunction with the scheduler 142 ).
- the hint source 310 can be a source for hints in order to determine if a virtual processor 110 is accessing 350 a synchronizing section 320 .
- the synchronizing section may be a region of code that determines access to resources by virtual processors.
- a virtual processor that is accessing 350 a synchronizing section 320 may have acquired a lock on a resource (such as memory or a processor).
- the scheduler 142 can provide 332 a time slice extension 330 to the virtual processor 110 if the virtual processor 110 is accessing 350 the synchronizing section 320 . Moreover, the scheduler 142 can also limit 337 the time slice extension 330 to a predetermined period of time 336 ; and, it may limit 339 the granting of the time slice extension 330 to a predetermined number of times 338 . This may be done in order to assure that the other virtual processors and other devices are not starved for resources. Finally, the scheduler 142 can debit 341 the time slice extension 330 granted to the virtual processor 110 from a subsequent time slice granted to the virtual processor. Thus, at the next time slice or some subsequent time slice, the virtual processor 110 can be given less time to acquire a resource.
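The extension, caps, and debit bookkeeping described above can be sketched as a small accounting helper. This is an illustrative model only; the class name, the 10 ms nominal slice, the 100 µs extension cap, and the limit of three extensions are assumptions, not values taken from the disclosure:

```python
class TimeSliceAccount:
    """Tracks time slice extensions granted to one virtual processor.

    Illustrative sketch: slice and cap values are assumed, not from
    the patent, which leaves them as predetermined quantities.
    """

    def __init__(self, slice_us=10_000, max_ext_us=100, max_ext_count=3):
        self.slice_us = slice_us            # nominal time slice
        self.max_ext_us = max_ext_us        # cap per extension
        self.max_ext_count = max_ext_count  # cap on number of grants
        self.ext_count = 0
        self.debt_us = 0                    # owed against future slices

    def try_extend(self, requested_us, holds_lock_hint):
        """Grant a bounded extension only if hints say a lock is held."""
        if not holds_lock_hint or self.ext_count >= self.max_ext_count:
            return 0
        granted = min(requested_us, self.max_ext_us)
        self.ext_count += 1
        self.debt_us += granted             # debit it from a later slice
        return granted

    def next_slice_us(self):
        """The next slice is shortened by whatever time was borrowed."""
        owed, self.debt_us = self.debt_us, 0
        return max(0, self.slice_us - owed)
```

A scheduler-like caller would consult `try_extend` at the end of a slice and size the following slice with `next_slice_us`, so borrowed time is repaid and resource parity among virtual processors is preserved.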
- As for the hint source 310 itself, it can include any one of the following (or a combination thereof): (a) a task priority register 312 , (b) an unenlightened guest operating system 314 state (which may include the task priority register 312 ), and (c) an enlightened operating system 316 .
- the task priority register 312 is used by some operating systems to hold interrupt request levels. Since spinlocks 210 are typically acquired only at elevated interrupt request levels, examining the task priority register for elevated values provides a strong hint that virtual processors may be holding spinlocks.
- information exposed from enlightened guest operating systems, such as whether virtual processors are holding spinlocks or executing barrier sections that other virtual processors are waiting on, would be very accurate, since such systems are aware of virtualization and have the resources to keep track of acquired spinlocks.
- the typical unenlightened guest operating system could be examined for a range of addresses where a thread holds a spinlock, user mode versus kernel mode state, various flags, and so on.
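The three hint sources might be combined along these lines. The function, the assumed IRQL threshold of 2 (a DISPATCH-like level on some operating systems), and the relative weighting of the hints are all illustrative assumptions, not an interface from the disclosure:

```python
def spinlock_hint(tpr=None, enlightened_holds_lock=None,
                  kernel_mode=None, dispatch_irql=2):
    """Combine available hints into a guess that a vCPU holds a spinlock.

    An explicit report from an enlightened guest is treated as
    authoritative, since such a guest tracks its own acquired locks.
    Otherwise an elevated task priority register (at or above the
    assumed dispatch level) is a strong hint, because spinlocks are
    typically acquired only at raised interrupt request levels; being
    in kernel mode alone is treated as a weak hint.
    """
    if enlightened_holds_lock is not None:
        return enlightened_holds_lock
    if tpr is not None and tpr >= dispatch_irql:
        return True
    return bool(kernel_mode)
```

A scheduler could feed this boolean into the time slice extension decision; a real hypervisor would of course read the register and guest state directly rather than take them as parameters.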
- FIG. 4 provides a time line regarding how spin waiting can result in virtual processor scheduling congestion
- FIG. 5 shows how with time slice extension the problems illustrated in FIG. 4 can be obviated.
- three physical processors are shown as being available (y-axis) over some period of time (x-axis). These physical processors can be available for virtual machine 1 (having a number of virtual processors) and virtual machine 2 (also having a number of virtual processors). Starting with physical processor 1 , a virtual processor running on the physical processor can acquire a lock 410 , and this lock is held by the virtual processor while it is de-scheduled.
- FIG. 5 illustrates how with time slice extension the problems illustrated in FIG. 4 can be obviated (or at least greatly attenuated).
- This way of prolonging time slices can have the benefit of having the second virtual processor and the third virtual processor execute in a more timely manner (and spin wait less).
- the third virtual processor can acquire a lock 416 ′ and release it 420 ′ some time thereafter, and then the second virtual processor can acquire a lock 422 ′ and release it 424 ′.
- the time slice extensions can be given discriminately based on hints gleaned from the hint source 310 (as opposed to being granted as a matter of course).
- FIG. 6 illustrates how time slice extensions can be controlled to ensure resource parity among virtual processors.
- a module in the virtualizing layer 140 can set an intercept 610 associated with said hint source 310 (and this may be one method for prematurely ending an extension; another method may include an enlightened operating system that can indicate when a synchronizing section has been cleared). Ending 334 of any time slice extension 330 prematurely can be based on receipt of the intercept from the hint source 310 . In this way, the scheduler 142 can know that it is time to end an extension—otherwise a virtual processor with the time slice extension can keep on using the extension to the end (when it may not be desirable to do so).
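Ending an extension prematurely on an intercept can be sketched as a loop that consumes the extension in small steps and stops as soon as the intercept fires. Here `intercept_fired` stands in for the hint-source intercept (for example, a write observed on the task priority register), and the 10 µs step is a hypothetical granularity:

```python
def run_with_extension(extension_us, intercept_fired, step_us=10):
    """Consume a time slice extension, ending early on an intercept.

    The scheduler checks the intercept between small execution steps;
    once it fires (signalling the synchronizing section was cleared),
    the remainder of the extension is abandoned rather than consumed.
    Returns the microseconds of extension actually used.
    """
    used = 0
    while used < extension_us:
        if intercept_fired():
            return used            # end the extension prematurely
        used += step_us            # vCPU keeps running in its extension
    return used                    # extension ran to its bound
```

Without such a signal, a virtual processor holding an extension would simply run it to the end, even when that is no longer useful.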
- FIG. 7 illustrates the notion of express scheduling so that selected virtual processors are given priority to run.
- a virtual processor 110 can be chosen 720 to run when the virtual processor 110 has been previously de-scheduled. Such choosing can include a context switch into the address space of the virtual processor 110 .
- the priority of the virtual processor 110 can be increased or boosted to a predetermined level in order to increase the probability that the virtual processor 110 will run.
- a couple of limitations can also be put in place, such as: limiting 724 the express schedule time slice duration 730 to a predetermined period of time 726 in order to preserve resource parity among the virtual processor and any other virtual processors (e.g., given current computing resources, the period of time could be 50 microseconds); and, limiting 728 a frequency of granting of the express schedule time slice duration 730 to a predetermined number of times 729 in order to preserve resource parity.
- any portion (or whole of) the express schedule time slice duration 730 can be debited 732 from any subsequent time slices associated with the virtual processor.
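The duration and frequency bounds on express scheduling might look like the following sketch. The 50 µs cap echoes the example given above, while the grant counter, the limit of four grants, and the function name are assumptions:

```python
def express_schedule(request_us, grants, max_us=50, max_grants=4):
    """Bound an express-schedule request and account for it.

    Returns (granted_us, updated_grant_count). The duration cap and
    the grant-frequency limit are the illustrative parity controls
    described in the text; granted time would additionally be debited
    from the virtual processor's subsequent time slices.
    """
    if grants >= max_grants:
        return 0, grants               # frequency limit reached
    granted = min(request_us, max_us)  # duration limit
    return granted, grants + 1
```

A request that survives both limits would then trigger a context switch into the chosen virtual processor's address space, with its priority boosted so it actually runs.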
- FIG. 8 provides a concrete example of how express scheduling can work.
- a first virtual processor corresponding to a physical processor 1 can acquire a lock 410 , and this lock will be held by the first virtual processor until it is released 412 .
- the third virtual processor (corresponding to physical processor 3 ) will spin wait after attempting to acquire a lock 414 and it will not be successful at acquiring the lock for some time.
- What FIG. 8 illustrates is that while the third virtual processor is spin waiting, an express schedule request 810 can be made upon detection of such a spin wait.
- the request 810 can result in a context switch to the first virtual processor so that it can finish its task and release the lock 412 .
- Express scheduling allows the first virtual processor to gain priority (so that it can finish), so that eventually the third virtual processor can start its task and minimize its spin wait.
- the third virtual processor can acquire the lock 416 and then eventually release the lock 420 .
- spin waits can be variously identified when virtual processors are run. Instructions used to pace virtual processors in virtual machines can be intercepted 910 . These may include “PAUSE” instructions in certain architectures, such as x86. Information about said instructions can be recorded 912 . Then, some time afterwards, a history of this information about the instructions can be examined 914 in order to detect a pattern that correlates with long spin waits. Finally, long spin waits can be indicated 916 to various modules (such as express scheduling modules referenced above) if a particular pattern from a plurality of patterns is identified.
- the long spin waits can be indicated to a scheduler that performs context switches to the first virtual processor mentioned above (i.e. the processor that has been de-scheduled).
- the pattern can indicate no spin waits when the number of the instructions is below a first predetermined standard (some heuristically determined number), and the pattern can indicate spin waits when the number of the instructions is above a second predetermined standard 918 (some other heuristically determined number).
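A two-threshold classifier over a history of intercepted PAUSE instruction counts could be sketched as follows. The window size and both thresholds are heuristic placeholders, as the text itself notes:

```python
from collections import deque

class PauseHistory:
    """Classify spin waiting from intercepted PAUSE instruction counts.

    Illustrative sketch: counts per sampling interval are recorded in
    a sliding window, and the average is compared against two assumed
    heuristic thresholds to detect long spin waits.
    """

    def __init__(self, window=8, low=10, high=1000):
        self.window = deque(maxlen=window)  # recent per-interval counts
        self.low = low                       # below this: no spin wait
        self.high = high                     # above this: long spin wait

    def record(self, pause_count):
        """Record the PAUSE count observed in one sampling interval."""
        self.window.append(pause_count)

    def classify(self):
        """Return the spin-wait verdict for the recorded history."""
        if not self.window:
            return "unknown"
        avg = sum(self.window) / len(self.window)
        if avg < self.low:
            return "no-spin"
        if avg > self.high:
            return "long-spin"
        return "indeterminate"
```

A "long-spin" verdict would then be indicated to the scheduler, which could express schedule the de-scheduled lock holder in response.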
- FIG. 9 illustrates another aspect of the presently disclosed subject matter, including identifying spin waits prior to the above mentioned choosing of running virtual processors, where the identifying is a basis for the choosing of the virtual processors.
- an enlightened guest OS 135 is configured to receive recommendations 920 from a virtualizing layer 140 regarding spin wait thresholds 921 . Then, the enlightened guest operating system 135 can be further configured to record information 922 regarding spin wait loop counts 923 as part of a spinlock acquire process. Next, it is configured to compare the spin wait loop count 923 to the spin wait thresholds 921 . Finally, it is configured to indicate 925 long spin waits if the spin wait loop counts exceed the spin wait thresholds. It should be noted that a sophisticated guest operating system can also tune these thresholds based on the behavior of specific locks.
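The enlightened-guest acquire path might be modeled as a spin loop that counts iterations and reports once the hypervisor-recommended threshold is crossed. The callback names here are assumptions rather than a documented interface; a real guest would signal the virtualizing layer via a hypercall:

```python
def acquire_with_spin_report(try_acquire, threshold, indicate_long_spin):
    """Spin-acquire a lock, counting loop iterations as an enlightened
    guest would during its spinlock acquire process.

    If the loop count reaches the hypervisor-recommended threshold,
    the guest indicates a long spin wait (the stand-in for a hypercall)
    exactly once, then keeps spinning until the lock is acquired.
    Returns the total number of failed acquire attempts.
    """
    spins = 0
    reported = False
    while not try_acquire():
        spins += 1
        if not reported and spins >= threshold:
            indicate_long_spin(spins)   # e.g. a hypercall to the host
            reported = True
    return spins
```

As the text notes, a sophisticated guest could also tune `threshold` per lock based on that lock's observed behavior.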
- a computer readable medium can store thereon computer executable instructions for attenuating spin waiting of virtual processors in a virtual machine environment.
- Such media can comprise a first subset of instructions for providing time slice extensions to a plurality of virtual processors 1010 ; a second subset of instructions for providing express scheduling to the plurality of virtual processors 1012 ; and a third subset of instructions for detecting spin waiting in any of the plurality of virtual processors, where the third subset of instructions is combinable with the first subset of instructions and with the second subset of instructions in order to attenuate spin waiting 1014 .
- additional sets of instructions can be used to capture the various other aspects disclosed herein, and that the three presently disclosed subsets of instructions can vary in detail per the present disclosure.
- the first subset of instructions can further comprise instructions 1020 for: accessing a hint source regarding whether a virtual processor of the plurality of virtual processors has acquired a spinlock; providing the virtual processor with a time extension in order to at least prevent the virtual processor from becoming de-scheduled; bounding the time extension to a predetermined amount of time; bounding the granting of the time extension to a predetermined number of times; and debiting the time extension from any subsequent time slices.
- the second subset of instructions can further comprise instructions for: choosing a set of virtual processors from the plurality of virtual processors to run per an express schedule request; boosting priorities of the set of virtual processors resulting in the set of virtual processors running to comply with the express schedule request; bounding the express schedule request to a predetermined time slice duration; bounding the express schedule request to a predetermined number of times; and debiting any time associated with the express schedule request from any subsequent time slices of the set of virtual processors.
- the second subset of instructions can further comprise instructions for: targeting the set of virtual processors that are de-scheduled virtual processors; recording information about the set of virtual processors; and retrieving information when spin waits are identified and passing any detection of the spin waits to a virtualizing layer expressly running the set of virtual processors.
- the second subset of instructions can further comprise instructions for: receiving hints from a hint source when a synchronization section is accessed; and, based on such hints, prematurely ending any time slice additional to the time slices associated with the set of virtual processors.
- the third subset of instructions can further comprise instructions for: setting intercepts for monitoring a predetermined instruction set used to pace the plurality of virtual processors; recording information regarding the predetermined instruction set; examining a history of the recorded predetermined instruction set; determining any patterns in the history; and identifying a spin wait based on the patterns.
- the third subset can further comprise instructions for: receiving via hypercalls (i.e.
- the first, second, and third subset of instructions can attenuate spin waits for a plurality of virtual processors distributed across a plurality of virtual machines and in a plurality of virtual machine environments.
Description
- This application is a continuation of U.S. patent application Ser. No. 14/945,206 filed on Nov. 18, 2015, which is a continuation of U.S. patent application Ser. No. 12/182,971 filed on Jul. 30, 2008, the entire contents of which are incorporated herein by reference.
- The presently disclosed subject matter relates to the field of computing, and more particularly, to computer virtualization, although virtualization is merely an exemplary and non-limiting field.
- Operating system kernels provide several mechanisms to synchronize data structures in multi-threaded systems. Many of these mechanisms use a technique called spin waiting, where a thread or processor will spend time in a loop waiting for a particular event to occur before it continues execution. Spin waits are typically used in cases where wait times will be much less than the cost of re-scheduling threads or where the environment is such that the thread scheduler cannot run.
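The spin waiting described above can be sketched as a busy loop polling for a release event. The helper name, the spin budget, and the 10 ms holder delay are illustrative; the point is that every loop iteration is pure overhead while the waiter makes no forward progress:

```python
import itertools
import threading

def spin_wait(is_released, max_spins=50_000_000):
    """Busy-wait in a tight loop until is_released() returns True.

    Returns the number of loop iterations spent waiting, all of which
    is wasted time: the waiting thread makes no forward progress.
    The spin budget is an assumed safety bound for the sketch.
    """
    for spins in itertools.count():
        if is_released():
            return spins
        if spins >= max_spins:
            raise TimeoutError("spin budget exhausted")

# A holder thread "releases the lock" 10 ms later; the waiter spins.
released = threading.Event()
holder = threading.Timer(0.01, released.set)
holder.start()
wasted_iterations = spin_wait(released.is_set)
holder.join()
```

In a virtual machine environment, if the holder's virtual processor is de-scheduled while the event is unset, the waiter's iterations can balloon far beyond the short durations this pattern assumes, which is the problem the mechanisms below address.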
- Examples of synchronization primitives that use this technique include, but are not limited to: spinlocks, queued spinlocks, reader/writer locks, and barriers. In general, well-designed operating systems will minimize the amount of time threads spend in regions of code that lead to these spin wait loops, since the time spent spin waiting is wasted time. At best, in the case of hyper-threading, some of a thread's resources are given to another hyper-thread, but such a thread is still blocked from making forward progress.
- Furthermore, the assumption that spin waits will be performed only for short durations can be unintentionally broken when an operating system is running in a virtual machine environment. Consequently, the time spent spin waiting can increase greatly in virtual machine environments and can prevent a virtual machine from making forward progress.
- Gang scheduling, in which all virtual processors of a virtual machine are scheduled in tandem, has been used in the past to avoid lock problems (resulting from spin waits). However, this approach often does not make efficient use of the physical system's processor(s). Gang scheduling can create un-schedulable holes where none of the sets of virtual processors from de-scheduled virtual machines will fit into given resources.
- Furthermore, requiring all of the virtual processors from a virtual machine to run at the same time and for the same duration can result in cases where some of the virtual processors have no work to do but will run anyway. Both of these issues, long spin waits and gang scheduling, lead to under-utilization of system processors and significant throughput reductions. Thus, other techniques are needed in the art to solve the above described problems.
- Various aspects are disclosed herein for attenuating spin waiting in a virtual machine environment comprising a plurality of virtual machines and virtual processors. Selected virtual processors can be given time slice extensions in order to prevent such virtual processors from becoming de-scheduled (and hence causing other virtual processors to have to spin wait). Selected virtual processors can also be expressly scheduled so that they can be given higher priority to resources, resulting in reduced spin waits for other virtual processors waiting on such selected virtual processors. Finally, various spin wait detection techniques can be incorporated into the time slice extension and express scheduling mechanisms, in order to identify potential and existing spin waiting scenarios.
- In an illustrative embodiment, a system for attenuating spin waiting of virtual processors in a virtual machine environment comprises a processor and a memory communicatively coupled to the processor and storing instructions that upon execution by the processor cause the system to examine a hint source for hints to determine if a virtual processor is accessing a synchronizing section that acquires a lock on an underlying physical resource that causes any other virtual processors to spin wait until the virtual processor releases the lock; provide a time slice extension to the virtual processor to increase the time allotted in its assigned time slice if the virtual processor is accessing the synchronizing section; and end the time slice extension prematurely.
- In one embodiment, ending the time slice extension prematurely includes setting an intercept with said hint source and receiving the intercept from the hint source. In the same or another embodiment, ending the time slice extension prematurely includes receiving a signal from an enlightened guest operating system indicating that the synchronizing section has been cleared. The hint source, in an illustrative embodiment, is at least one of a task priority register, an unenlightened guest operating system that is inspected, and an enlightened operating system. The scheduler may reside in a virtualizing layer configured to provide the time slice extension to the virtual processor. Another illustrative embodiment further comprises limiting the time slice extension to a predetermined period of time. Yet another embodiment further comprises limiting the number of time slice extensions provided to a predetermined number. Still another embodiment further comprises debiting the time slice extension provided to the virtual processor such that the virtual processor has less allocated time in a subsequent time slice. The lock may be at least one of a spinlock, a queued spinlock, a reader/writer lock, and a barrier; and wherein the synchronizing section comprises a region of code that determines access to resources by virtual processors.
- It should be noted that this Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
- The foregoing Summary, as well as the following Detailed Description, is better understood when read in conjunction with the appended drawings. In order to illustrate the present disclosure, various aspects of the disclosure are illustrated. However, the disclosure is not limited to the specific aspects shown. The following figures are included:
-
FIG. 1 illustrates a virtual machine environment, with a plurality of virtual machines, comprising a plurality of virtual processors and corresponding guest operating systems; the virtual machines are maintained by a virtualizing layer which may comprise a scheduler and other components, where the virtualizing layer virtualizes hardware for the plurality of virtual machines; -
FIG. 2 illustrates various spin waiting attenuation mechanisms that reduce spin waiting via time slice extension, express scheduling, and spin wait detection; -
FIG. 3 illustrates time slice extension mechanisms in order to attenuate the spin waiting of virtual processors in a virtual machine environment; -
FIG. 4 provides a time line regarding how spin waiting can result in virtual processor scheduling congestion; -
FIG. 5 shows how, with time slice extensions, the problems illustrated in FIG. 4 can be obviated; -
FIG. 6 shows that time slice extensions can be controlled to ensure resource parity among virtual processors; -
FIG. 7 illustrates the notion of express scheduling so that selected virtual processors are given priority to run; -
FIG. 8 provides an example of how express scheduling can work; -
FIG. 9 illustrates an enlightened guest operating environment interacting with a virtualization layer in order to reduce spin waits; -
FIG. 10 illustrates a computer readable medium bearing computer executable instructions discussed with respect to FIGS. 1-9, above. -
FIG. 1 illustrates a virtual machine environment 100, with a plurality of virtual machines comprising a plurality of virtual processors and corresponding guest operating systems. The virtual machines are maintained by a virtualizing layer 140, which may comprise a scheduler 142 and other components (not shown), where the virtualizing layer 140 virtualizes hardware 150 for the plurality of virtual machines. The virtual processors can be scheduled on physical processors 160, 162, per the mechanisms discussed with respect to FIGS. 2-10, in order to efficiently schedule the virtual processors on the physical processors. - Spin waiting occurs when a thread (or an underlying processor configured to execute a thread) spends time in a loop waiting for a particular event to occur before it continues execution. This may happen when one thread is waiting to acquire a lock on a resource that another thread has acquired (but not yet released). In regard to virtualization, spin waiting can occur upon an attempt to enter a synchronizing section that a de-scheduled virtual processor is executing.
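As a hedged illustration of the loop just described (the function, its names, and the poll counts are invented for this example, not taken from the patent), a spin wait can be modeled as repeatedly polling a condition that another party will eventually make true:

```python
def spin_wait(ready, pause=lambda: None):
    """Loop ("spin") until ready() returns True, counting wasted iterations.

    `pause` stands in for the hint a real spin loop would issue to the
    processor (e.g., the x86 PAUSE instruction) on each failed attempt.
    """
    spins = 0
    while not ready():
        pause()      # burn a cycle politely, then re-check the condition
        spins += 1
    return spins

# Simulate a lock holder that releases the lock on the fifth check:
# the first four polls fail, so the waiter spins four times.
checks = iter([False, False, False, False, True])
wasted_iterations = spin_wait(lambda: next(checks))  # → 4
```

Every failed poll is work the physical processor performs without making progress, which is exactly the cost the mechanisms below try to attenuate.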
- In order to remedy such spin waiting,
FIG. 2 illustrates various spin waiting attenuation mechanisms that reduce spin waiting via time slice extension 230, express scheduling 232, and spin wait detection 234. Per FIG. 2, a computing environment 250, which may include a multi-threading, a super-threading, or a hyper-threading environment, may have a plurality of threads 220, 222, 224. Once one of these threads, thread 2 222, has acquired a spinlock 210 to a resource 212 (e.g., an underlying physical processor 160, 162), the other threads must wait until thread 2 222 releases the lock. Thus, thread 1 220 and thread N 224 will spin wait in a loop until thread 2 222 has finished. - It will be readily appreciated by those skilled in the art that the presently described mechanisms are not limited to
typical spinlocks 210, but rather also contemplate queued spinlocks, reader/writer locks, barriers, and so on. The presently shown spin attenuation mechanisms 240 include various aspects described in more detail below, but in general terms they include: time slice extension 230, where virtual processors that have acquired spinlocks are given a time slice extension so that they can finish their task before being de-scheduled; express scheduling 232, where virtual processors can be prioritized by switching context to such virtual processors so that they run before any other processors; and spin wait detection mechanisms 234 that can identify, or at least guess at, when spin waiting might become an issue, and then engage either time slice extension 230, express scheduling 232, or both. -
FIG. 3 illustrates time slice extension mechanisms that attenuate the spin waiting of virtual processors in a virtual machine environment. The presently disclosed aspects can be implemented as systems, methods, computer executable instructions residing in computer readable media, and so on. Thus, any disclosure of any particular system, method, or computer readable medium is not confined thereto, but rather extends to other ways of implementing the disclosed subject matter. - Turning to
FIG. 3, a hint source 310 may be examined 331 by a scheduler 142 running in a virtualizing layer 140 (although other software modules may also perform this function, independently or in conjunction with the scheduler 142). The hint source 310 can be a source for hints used to determine whether a virtual processor 110 is accessing 350 a synchronizing section 320. The synchronizing section may be a region of code that determines access to resources by virtual processors. Thus, a virtual processor that is accessing 350 a synchronizing section 320 may have acquired a lock on a resource (such as memory or a processor). - The
scheduler 142 can provide 332 a time slice extension 330 to the virtual processor 110 if the virtual processor 110 is accessing 350 the synchronizing section 320. Moreover, the scheduler 142 can also limit 337 the time slice extension 330 to a predetermined period of time 336, and it may limit 339 the granting of the time slice extension 330 to a predetermined number of times 338. This may be done in order to assure that the other virtual processors and other devices are not starved for resources. Finally, the scheduler 142 can debit 341 the time slice extension 330 granted to the virtual processor 110 from a subsequent time slice granted to the virtual processor. Thus, at the next time slice or some subsequent time slice, the virtual processor 110 can be given less time to acquire a resource. - Regarding the
hint source 310 itself, it can include any one of the following (or a combination thereof): (a) a task priority register 312, (b) an unenlightened guest operating system 314 state (which may include the task priority register 312), and (c) an enlightened operating system 316. The task priority register 312 is used by some operating systems to hold interrupt request levels. Since spinlocks 210 are typically acquired only at elevated interrupt request levels, examining the task priority register for elevated values provides a strong hint that virtual processors may be holding spinlocks. Alternatively, information exposed from enlightened guest operating systems, such as whether virtual processors are holding spinlocks or executing barrier sections that other virtual processors are waiting on, can be very accurate, since such systems are aware of virtualization and have the resources to keep track of acquired spinlocks. Moreover, the typical unenlightened guest operating system could be examined for a range of addresses where a thread holds a spinlock, user mode versus kernel mode state, various flags, and so on. -
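To make the grant/limit/debit bookkeeping concrete, here is a minimal sketch in Python; the class name, parameter names, and microsecond values are assumptions for illustration, not the patent's implementation:

```python
class TimeSliceExtender:
    """Sketch of a scheduler policy that extends, bounds, and later
    debits time slices for virtual processors holding locks."""

    def __init__(self, max_extension_us=100, max_grants=3):
        self.max_extension_us = max_extension_us  # bound on each extension
        self.max_grants = max_grants              # bound on how often to grant
        self.grants = {}                          # vp -> extensions granted so far
        self.debt = {}                            # vp -> time owed back later

    def request_extension(self, vp, in_sync_section, requested_us):
        """Grant an extension only when the hint source (e.g., an elevated
        task priority register) suggests vp is in a synchronizing section."""
        if not in_sync_section or self.grants.get(vp, 0) >= self.max_grants:
            return 0
        granted = min(requested_us, self.max_extension_us)
        self.grants[vp] = self.grants.get(vp, 0) + 1
        self.debt[vp] = self.debt.get(vp, 0) + granted  # debited later
        return granted

    def next_time_slice(self, vp, base_slice_us):
        """Debit accumulated extensions from the next time slice."""
        return max(0, base_slice_us - self.debt.pop(vp, 0))
```

With the defaults above, a virtual processor asking for 250 microseconds while holding a spinlock would receive only 100, and a subsequent regular slice would be shortened by whatever it was granted.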
FIG. 4 provides a time line showing how spin waiting can result in virtual processor scheduling congestion, while FIG. 5 shows how, with time slice extension, the problems illustrated in FIG. 4 can be obviated. Turning first to FIG. 4, three physical processors are shown as being available (y-axis) over some period of time (x-axis). These physical processors can be available for virtual machine 1 (having a number of virtual processors) and virtual machine 2 (also having a number of virtual processors). Starting with physical processor 1, a virtual processor running on the physical processor can acquire a lock 410, and this lock is held by the virtual processor while it is de-scheduled. This causes the other virtual processors to spin wait: the virtual processor running on physical processor 3 can keep trying to acquire a lock 414, but it will fail and thus spin wait; likewise, the virtual processor running on physical processor 2 will also spin wait because it will fail to acquire a lock 418.
- Only when the first virtual processor releases the lock 412 can the other virtual processors acquire locks: the second virtual processor will eventually acquire a lock 416 and then release it 420, and the remaining virtual processor will acquire a lock 422 and then release it 424. But this way of scheduling virtual processors causes congestion and excessive spin waiting. - In contrast,
FIG. 5 illustrates how, with time slice extension, the problems illustrated in FIG. 4 can be obviated (or at least greatly attenuated). In FIG. 5, the first virtual processor (running on physical processor 1) can acquire a lock 410′ for a certain time slice, but then it can receive an additional time slice extension and release 412′ the spinlock at a later time than it would have without the extension. Prolonging time slices in this way has the benefit of letting the second and third virtual processors execute in a more timely manner (and spin wait less). Thus, in contrast to FIG. 4, in FIG. 5 the third virtual processor can acquire a lock 416′ and release it 420′ some time thereafter, and then the second virtual processor can acquire a lock 422′ and release it 424′. It should be noted that the time slice extensions can be given discriminately based on hints gleaned from the hint source 310 (as opposed to being granted as a matter of course).
- When time slice extensions are granted, it is desirable to make sure that such extensions are not too long. Thus, FIG. 6 illustrates how time slice extensions can be controlled to ensure resource parity among virtual processors. A module in the virtualizing layer 140 can set an intercept 610 associated with the hint source 310 (this is one method for prematurely ending an extension; another may involve an enlightened operating system that can indicate when a synchronizing section has been cleared). Ending 334 any time slice extension 330 prematurely can be based on receipt of the intercept from the hint source 310. In this way, the scheduler 142 knows when it is time to end an extension; otherwise, a virtual processor with a time slice extension could keep using it to the end (when it may not be desirable to do so).
- Next, FIG. 7 illustrates the notion of express scheduling, whereby selected virtual processors are given priority to run. For example, a virtual processor 110 can be chosen 720 to run when the virtual processor 110 has been previously de-scheduled. Such choosing can include a context switch into the address space of the virtual processor 110. The priority of the virtual processor 110 can be increased or boosted to a predetermined level in order to increase the probability that the virtual processor 110 will run. However, a couple of limitations can also be put in place, such as: limiting 724 the express schedule time slice duration 730 to a predetermined period of time 726 in order to preserve resource parity among the virtual processor and any other virtual processors (e.g., given current computing resources, the period of time could be 50 microseconds); and limiting 728 the frequency of granting the express schedule time slice duration 730 to a predetermined number of times 729, also to preserve resource parity. Furthermore, any portion (or the whole) of the express schedule time slice duration 730 can be debited 732 from any subsequent time slices associated with the virtual processor. -
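Express scheduling as described above amounts to a bounded priority boost plus later debiting. The following Python sketch illustrates the idea; the class, its names, and the 50-microsecond bound are invented for the example, not the patent's implementation:

```python
import heapq

class ExpressScheduler:
    """Sketch: boost a de-scheduled virtual processor so it runs next,
    bound the express slice, and debit it from later slices."""

    BOOST_PRIORITY = 0        # smallest value wins the run queue
    EXPRESS_SLICE_US = 50     # predetermined bound on the express slice

    def __init__(self):
        self._queue = []      # heap of (priority, sequence, vp)
        self._seq = 0
        self.debt = {}        # vp -> express time to debit later

    def enqueue(self, vp, priority):
        heapq.heappush(self._queue, (priority, self._seq, vp))
        self._seq += 1

    def express_schedule(self, vp):
        """Grant an express slice: vp jumps ahead of all queued processors."""
        self.enqueue(vp, self.BOOST_PRIORITY)
        self.debt[vp] = self.debt.get(vp, 0) + self.EXPRESS_SLICE_US
        return self.EXPRESS_SLICE_US

    def pick_next(self):
        """Choose the next virtual processor to context-switch to."""
        return heapq.heappop(self._queue)[2]
```

An express-scheduled processor is picked before normally queued ones, but the express time it consumed is recorded in `debt` so that resource parity can be restored on its subsequent slices.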
FIG. 8 provides a concrete example of how express scheduling can work. A first virtual processor, corresponding to physical processor 1 (y-axis), can acquire a lock 410, and this lock will be held by the first virtual processor until it is released 412. In the meantime, the third virtual processor (corresponding to physical processor 3) will spin wait after attempting to acquire a lock 414, and it will not be successful at acquiring the lock for some time.
- What FIG. 8 illustrates is that while the third virtual processor is spin waiting, an express schedule request 810 can be made upon detection of such a spin wait. The request 810 can result in a context switch to the first virtual processor so that it can finish its task and release the lock 412. Express scheduling allows the first virtual processor to gain priority (so that it can finish), so that eventually the third virtual processor can start its task with minimal spin waiting. Thus, per FIG. 8, once the first virtual processor releases its lock 412, the third virtual processor can acquire the lock 416 and then eventually release the lock 420.
- In another aspect of the presently disclosed subject matter, spin waits can be variously identified when virtual processors are run. Instructions used to pace virtual processors in virtual machines can be intercepted 910. These may include "PAUSE" instructions in certain architectures, such as x86. Information about these instructions can be recorded 912. Then, some time afterwards, a history of this information can be examined 914 in order to detect a pattern that correlates with long spin waits. Finally, long spin waits can be indicated 916 to various modules (such as the express scheduling modules referenced above) if a particular pattern from a plurality of patterns is identified.
- In various exemplary implementations, the long spin waits can be indicated to a scheduler that performs context switches to the first virtual processor mentioned above (i.e., the processor that has been de-scheduled). As for the pattern, it can indicate no spin waits when the number of the instructions is below a first predetermined standard (some heuristically determined number), and it can indicate spin waits when the number of the instructions is above a second predetermined standard 918 (some other heuristically determined number). Of course, these are merely exemplary (and hence non-limiting) aspects that could be additionally supplemented or substituted by any of the other aspects discussed herein.
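One plausible shape for such a detector is sketched below; the window size and the two thresholds are invented heuristics, standing in for the "first" and "second predetermined standards" mentioned above:

```python
from collections import deque

class PauseHistory:
    """Sketch: record intercepted pacing instructions (e.g., x86 PAUSE)
    and classify the recent pattern against two heuristic thresholds."""

    def __init__(self, window=100, no_spin_below=10, spin_above=50):
        self._recent = deque(maxlen=window)  # sliding window of intercepts
        self.no_spin_below = no_spin_below   # "first predetermined standard"
        self.spin_above = spin_above         # "second predetermined standard"

    def record_pause(self):
        """Called from the intercept handler on each PAUSE instruction."""
        self._recent.append(1)

    def classify(self):
        """Map the recent intercept count to a spin-wait verdict."""
        count = sum(self._recent)
        if count < self.no_spin_below:
            return "no-spin"
        if count > self.spin_above:
            return "long-spin"   # would be indicated to the scheduler
        return "inconclusive"
```

A "long-spin" verdict here corresponds to the indication 916 delivered to modules such as express scheduling; counts between the two standards are treated as inconclusive.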
-
FIG. 9 illustrates another aspect of the presently disclosed subject matter, including identifying spin waits prior to the above mentioned choosing of running virtual processors, where the identifying is a basis for the choosing of the virtual processors. Per FIG. 9, an enlightened guest OS 135 is configured to receive recommendations 920 from a virtualizing layer 140 regarding spin wait thresholds 921. Then, the enlightened guest operating system 135 can be further configured to record information 922 regarding spin wait loop counts 923 as part of a spinlock acquire process. Next, it is configured to compare the spin wait loop counts 923 to the spin wait thresholds 921. Finally, it is configured to indicate 925 long spin waits if the spin wait loop counts exceed the spin wait thresholds. It should be noted that a sophisticated guest operating system can also tune these thresholds based on the behavior of specific locks. - Any of the above mentioned aspects can be implemented in methods, systems, computer readable media, or any type of manufacture. For example, per
FIG. 10, a computer readable medium can store thereon computer executable instructions for attenuating spin waiting of virtual processors in a virtual machine environment. Such media can comprise a first subset of instructions for providing time slice extensions to a plurality of virtual processors 1010; a second subset of instructions for providing express scheduling to the plurality of virtual processors 1012; and a third subset of instructions for detecting spin waiting in any of the plurality of virtual processors, where the third subset of instructions is combinable with the first subset of instructions and with the second subset of instructions in order to attenuate spin waiting 1014. It will be appreciated by those skilled in the art that additional sets of instructions can be used to capture the various other aspects disclosed herein, and that the three presently disclosed subsets of instructions can vary in detail per the present disclosure. - For example, the first subset of instructions can further comprise
instructions 1020 for: accessing a hint source regarding whether a virtual processor of the plurality of virtual processors has acquired a spinlock; providing the virtual processor with a time extension in order to at least prevent that virtual processor from becoming de-scheduled; bounding the time extension to a predetermined amount of time; bounding the granting of the time extension to a predetermined number of times; and debiting the time extension from any subsequent time slices.
- Again, by way of example, the second subset of instructions can further comprise instructions for: choosing a set of virtual processors from the plurality of virtual processors to run per an express schedule request; boosting priorities of the set of virtual processors, resulting in the set of virtual processors running to comply with the express schedule request; bounding the express schedule request to a predetermined time slice duration; bounding the express schedule request to a predetermined number of times; and debiting any time associated with the express schedule request from any subsequent time slices of the set of virtual processors.
- The second subset of instructions can further comprise instructions for: targeting the set of virtual processors that are de-scheduled virtual processors; recording information about the set of virtual processors; and retrieving that information when spin waits are identified and passing any detection of the spin waits to a virtualizing layer expressly running the set of virtual processors. Moreover, the second subset of instructions can further comprise instructions for: receiving hints from a hint source when a synchronization section is accessed; and, based on such hints, prematurely ending any time slice additional to the time slices associated with the set of virtual processors.
- Similarly to the first and second subsets of instructions, the third subset of instructions can further comprise instructions for: setting intercepts for monitoring a predetermined instruction set used to pace the plurality of virtual processors; recording information regarding the predetermined instruction set; examining a history of the recorded predetermined instruction set; determining any patterns in the history; and identifying a spin wait based on the patterns. The third subset, moreover, can further comprise instructions for: receiving, via hypercalls (i.e., notifications) from a virtualizing layer, a recommended threshold for determining spin waits of the plurality of virtual processors; recording an iteration count of a spin wait loop; comparing the iteration count to the recommended threshold; and identifying a spin wait associated with a virtual processor from the plurality of virtual processors if the iteration count exceeds the recommended threshold. The first, second, and third subsets of instructions can attenuate spin waits for a plurality of virtual processors distributed across a plurality of virtual machines and in a plurality of virtual machine environments.
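The guest-side flow just listed (receive a recommended threshold, count spin iterations, compare, indicate) can be sketched as follows; the class name and the reporting flag are assumptions for illustration, and a real enlightened guest would report via a hypercall rather than a boolean:

```python
class GuestSpinlockMonitor:
    """Sketch of an enlightened guest's spinlock acquire path: count the
    spin loop's iterations and flag a long spin wait once the count
    exceeds the threshold recommended by the virtualizing layer."""

    def __init__(self, recommended_threshold):
        self.threshold = recommended_threshold  # received, e.g., via hypercall
        self.long_spin_indicated = False

    def acquire(self, try_lock):
        """Spin on try_lock(); return the spin wait loop's iteration count."""
        loops = 0
        while not try_lock():
            loops += 1
            if loops > self.threshold and not self.long_spin_indicated:
                # In a real guest, this would notify the virtualizing layer.
                self.long_spin_indicated = True
        return loops
```

A lock that is acquired within the recommended threshold produces no indication, so the virtualizing layer is only bothered about spins long enough to be worth an express schedule or extension.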
- Lastly, while the present disclosure has been described in connection with the preferred aspects, as illustrated in the various figures, it is understood that other similar aspects may be used or modifications and additions may be made to the described aspects for performing the same function of the present disclosure without deviating therefrom. For example, in various aspects of the disclosure, various mechanisms were disclosed for efficient detection and response to spin waits in multi-processor virtual machines. However, other equivalent mechanisms to these described aspects are also contemplated by the teachings herein. Therefore, the present disclosure should not be limited to any single aspect, but rather construed in breadth and scope in accordance with the appended claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/031,816 US20180321966A1 (en) | 2008-07-30 | 2018-07-10 | Efficient detection and respone to spin waits in multi-processor virtual machines |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/182,971 US9201673B2 (en) | 2008-07-30 | 2008-07-30 | Efficient detection and response to spin waits in multi-processor virtual machines |
US14/945,206 US10067782B2 (en) | 2008-07-30 | 2015-11-18 | Efficient detection and response to spin waits in multi-processor virtual machines |
US16/031,816 US20180321966A1 (en) | 2008-07-30 | 2018-07-10 | Efficient detection and respone to spin waits in multi-processor virtual machines |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/945,206 Continuation US10067782B2 (en) | 2008-07-30 | 2015-11-18 | Efficient detection and response to spin waits in multi-processor virtual machines |
Publications (1)
Publication Number | Publication Date |
---|---|
US20180321966A1 true US20180321966A1 (en) | 2018-11-08 |
Family
ID=41609665
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/182,971 Active 2031-09-15 US9201673B2 (en) | 2008-07-30 | 2008-07-30 | Efficient detection and response to spin waits in multi-processor virtual machines |
US14/945,206 Active 2029-03-12 US10067782B2 (en) | 2008-07-30 | 2015-11-18 | Efficient detection and response to spin waits in multi-processor virtual machines |
US16/031,816 Abandoned US20180321966A1 (en) | 2008-07-30 | 2018-07-10 | Efficient detection and respone to spin waits in multi-processor virtual machines |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/182,971 Active 2031-09-15 US9201673B2 (en) | 2008-07-30 | 2008-07-30 | Efficient detection and response to spin waits in multi-processor virtual machines |
US14/945,206 Active 2029-03-12 US10067782B2 (en) | 2008-07-30 | 2015-11-18 | Efficient detection and response to spin waits in multi-processor virtual machines |
Country Status (1)
Country | Link |
---|---|
US (3) | US9201673B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111278052A (en) * | 2020-01-20 | 2020-06-12 | 重庆大学 | Industrial field data multi-priority scheduling method based on 5G slice |
Families Citing this family (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8566946B1 (en) * | 2006-04-20 | 2013-10-22 | Fireeye, Inc. | Malware containment on connection |
US9244732B2 (en) * | 2009-08-28 | 2016-01-26 | Vmware, Inc. | Compensating threads for microarchitectural resource contentions by prioritizing scheduling and execution |
US9086922B2 (en) * | 2009-10-26 | 2015-07-21 | Microsoft Technology Licensing, Llc | Opportunistically scheduling and adjusting time slices |
US8683495B1 (en) * | 2010-06-30 | 2014-03-25 | Emc Corporation | Sync point coordination providing high throughput job processing across distributed virtual infrastructure |
WO2012093496A1 (en) * | 2011-01-07 | 2012-07-12 | 富士通株式会社 | Multitasking scheduling method, and multi-core processor system |
US8635615B2 (en) | 2011-05-14 | 2014-01-21 | Industrial Technology Research Institute | Apparatus and method for managing hypercalls in a hypervisor and the hypervisor thereof |
US9110878B2 (en) * | 2012-01-18 | 2015-08-18 | International Business Machines Corporation | Use of a warning track interruption facility by a program |
US8850450B2 (en) * | 2012-01-18 | 2014-09-30 | International Business Machines Corporation | Warning track interruption facility |
US9104508B2 (en) | 2012-01-18 | 2015-08-11 | International Business Machines Corporation | Providing by one program to another program access to a warning track facility |
US10187452B2 (en) | 2012-08-23 | 2019-01-22 | TidalScale, Inc. | Hierarchical dynamic scheduling |
US20140229940A1 (en) * | 2013-02-14 | 2014-08-14 | General Dynamics C4 Systems, Inc. | Methods and apparatus for synchronizing multiple processors of a virtual machine |
US20140244273A1 (en) * | 2013-02-27 | 2014-08-28 | Jean Laroche | Voice-controlled communication connections |
US20180040319A1 (en) * | 2013-12-04 | 2018-02-08 | LifeAssist Technologies Inc | Method for Implementing A Voice Controlled Notification System |
US9898289B2 (en) * | 2014-10-20 | 2018-02-20 | International Business Machines Corporation | Coordinated start interpretive execution exit for a multithreaded processor |
US9411629B1 (en) * | 2015-03-10 | 2016-08-09 | International Business Machines Corporation | Reducing virtual machine pre-emption in virtualized environment |
US10083068B2 (en) | 2016-03-29 | 2018-09-25 | Microsoft Technology Licensing, Llc | Fast transfer of workload between multiple processors |
US10579421B2 (en) | 2016-08-29 | 2020-03-03 | TidalScale, Inc. | Dynamic scheduling of virtual processors in a distributed system |
CN108255572A (en) * | 2016-12-29 | 2018-07-06 | 华为技术有限公司 | A kind of VCPU switching methods and physical host |
JP2018180768A (en) * | 2017-04-07 | 2018-11-15 | ルネサスエレクトロニクス株式会社 | Semiconductor device |
US11126474B1 (en) * | 2017-06-14 | 2021-09-21 | Amazon Technologies, Inc. | Reducing resource lock time for a virtual processing unit |
US10579274B2 (en) | 2017-06-27 | 2020-03-03 | TidalScale, Inc. | Hierarchical stalling strategies for handling stalling events in a virtualized environment |
US10817347B2 (en) | 2017-08-31 | 2020-10-27 | TidalScale, Inc. | Entanglement of pages and guest threads |
US10592281B1 (en) | 2017-09-28 | 2020-03-17 | Amazon Technologies, Inc. | Wait optimizer for recording an order of first entry into a wait mode by a virtual central processing unit |
JP2019067289A (en) * | 2017-10-04 | 2019-04-25 | ルネサスエレクトロニクス株式会社 | Semiconductor apparatus |
CN111008053A (en) * | 2019-10-25 | 2020-04-14 | 西安雷风电子科技有限公司 | Automatic synchronization method and device for virtual desktop |
CN111209079A (en) * | 2019-12-27 | 2020-05-29 | 山东乾云启创信息科技股份有限公司 | Scheduling method, device and medium based on Roc processor |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6516427B1 (en) * | 1999-11-05 | 2003-02-04 | Hewlett-Packard Company | Network-based remote diagnostic facility |
US20050076186A1 (en) * | 2003-10-03 | 2005-04-07 | Microsoft Corporation | Systems and methods for improving the x86 architecture for processor virtualization, and software systems and methods for utilizing the improvements |
US20060009204A1 (en) * | 2003-11-03 | 2006-01-12 | Starhome Gmbh | Telephone functionality for generic applications in a mobile handset |
US20060155930A1 (en) * | 2005-01-10 | 2006-07-13 | Microsoft Corporation | System and methods for an overlay disk and cache using portable flash memory |
US20060212876A1 (en) * | 2001-09-21 | 2006-09-21 | Buch Deep K | High performance synchronization of accesses by threads to shared resources |
US20080040560A1 (en) * | 2006-03-16 | 2008-02-14 | Charles Brian Hall | Lightweight Single Reader Locks |
US20080209168A1 (en) * | 2004-09-29 | 2008-08-28 | Daisuke Yokota | Information Processing Apparatus, Process Control Method, and Computer Program |
US20090083708A1 (en) * | 2007-04-05 | 2009-03-26 | International Business Machines Corporation | Method and system for aspect scoping in a modularity runtime |
US20090204963A1 (en) * | 2008-02-07 | 2009-08-13 | Arm Limited | Reducing memory usage of a data processing task performed using a virtual machine |
US7594234B1 (en) * | 2004-06-04 | 2009-09-22 | Sun Microsystems, Inc. | Adaptive spin-then-block mutual exclusion in multi-threaded processing |
US20100031269A1 (en) * | 2008-07-29 | 2010-02-04 | International Business Machines Corporation | Lock Contention Reduction |
US20100095278A1 (en) * | 2008-10-09 | 2010-04-15 | Nageshappa Prashanth K | Tracing a calltree of a specified root method |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6105053A (en) * | 1995-06-23 | 2000-08-15 | Emc Corporation | Operating system for a non-uniform memory access multiprocessor system |
US6766515B1 (en) * | 1997-02-18 | 2004-07-20 | Silicon Graphics, Inc. | Distributed scheduling of parallel jobs with no kernel-to-kernel communication |
US7415708B2 (en) * | 2003-06-26 | 2008-08-19 | Intel Corporation | Virtual machine management using processor state information |
US7552426B2 (en) * | 2003-10-14 | 2009-06-23 | Microsoft Corporation | Systems and methods for using synthetic instructions in a virtual machine |
JP4287799B2 (en) * | 2004-07-29 | 2009-07-01 | 富士通株式会社 | Processor system and thread switching control method |
US7930694B2 (en) * | 2004-09-08 | 2011-04-19 | Oracle America, Inc. | Method and apparatus for critical section prediction for intelligent lock elision |
US20060130062A1 (en) * | 2004-12-14 | 2006-06-15 | International Business Machines Corporation | Scheduling threads in a multi-threaded computer |
US8621458B2 (en) * | 2004-12-21 | 2013-12-31 | Microsoft Corporation | Systems and methods for exposing processor topology for virtual machines |
US7752620B2 (en) * | 2005-06-06 | 2010-07-06 | International Business Machines Corporation | Administration of locks for critical sections of computer programs in a computer that supports a multiplicity of logical partitions |
US8024739B2 (en) * | 2007-01-09 | 2011-09-20 | International Business Machines Corporation | System for indicating and scheduling additional execution time based on determining whether the execution unit has yielded previously within a predetermined period of time |
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6516427B1 (en) * | 1999-11-05 | 2003-02-04 | Hewlett-Packard Company | Network-based remote diagnostic facility |
US20060212876A1 (en) * | 2001-09-21 | 2006-09-21 | Buch Deep K | High performance synchronization of accesses by threads to shared resources |
US20050076186A1 (en) * | 2003-10-03 | 2005-04-07 | Microsoft Corporation | Systems and methods for improving the x86 architecture for processor virtualization, and software systems and methods for utilizing the improvements |
US20060009204A1 (en) * | 2003-11-03 | 2006-01-12 | Starhome Gmbh | Telephone functionality for generic applications in a mobile handset |
US7594234B1 (en) * | 2004-06-04 | 2009-09-22 | Sun Microsystems, Inc. | Adaptive spin-then-block mutual exclusion in multi-threaded processing |
US20080209168A1 (en) * | 2004-09-29 | 2008-08-28 | Daisuke Yokota | Information Processing Apparatus, Process Control Method, and Computer Program |
US20060155930A1 (en) * | 2005-01-10 | 2006-07-13 | Microsoft Corporation | System and methods for an overlay disk and cache using portable flash memory |
US20080040560A1 (en) * | 2006-03-16 | 2008-02-14 | Charles Brian Hall | Lightweight Single Reader Locks |
US20090083708A1 (en) * | 2007-04-05 | 2009-03-26 | International Business Machines Corporation | Method and system for aspect scoping in a modularity runtime |
US20090204963A1 (en) * | 2008-02-07 | 2009-08-13 | Arm Limited | Reducing memory usage of a data processing task performed using a virtual machine |
US20100031269A1 (en) * | 2008-07-29 | 2010-02-04 | International Business Machines Corporation | Lock Contention Reduction |
US20100095278A1 (en) * | 2008-10-09 | 2010-04-15 | Nageshappa Prashanth K | Tracing a calltree of a specified root method |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111278052A (en) * | 2020-01-20 | 2020-06-12 | 重庆大学 | Industrial field data multi-priority scheduling method based on 5G slice |
Also Published As
Publication number | Publication date |
---|---|
US20160154666A1 (en) | 2016-06-02 |
US9201673B2 (en) | 2015-12-01 |
US10067782B2 (en) | 2018-09-04 |
US20100031254A1 (en) | 2010-02-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10067782B2 (en) | Efficient detection and response to spin waits in multi-processor virtual machines | |
US7752620B2 (en) | Administration of locks for critical sections of computer programs in a computer that supports a multiplicity of logical partitions | |
US7987452B2 (en) | Profile-driven lock handling | |
KR100911796B1 (en) | Multi processor and multi thread safe message queue with hardware assistance | |
JP5467661B2 (en) | Method, system, and computer program for prioritization for contention arbitration in transaction memory management (priority for contention arbitration in transaction memory management) | |
EP2372542B1 (en) | Virtual machine, virtual machine monitor and computer control method | |
US9098337B2 (en) | Scheduling virtual central processing units of virtual machines among physical processing units | |
US20140026143A1 (en) | Exclusive access control method and computer product | |
US20090144742A1 (en) | Method, system and computer program to optimize deterministic event record and replay | |
EP3048527A1 (en) | Sharing idled processor execution resources | |
US9760411B2 (en) | Switching a locking mode of an object in a multi-thread program | |
US20150301871A1 (en) | Busy lock and a passive lock for embedded load management | |
US6295602B1 (en) | Event-driven serialization of access to shared resources | |
CN116225728A (en) | Task execution method and device based on coroutine, storage medium and electronic equipment | |
WO2023241307A1 (en) | Method and apparatus for managing threads | |
Spliet et al. | Fast on average, predictable in the worst case: Exploring real-time futexes in LITMUS^RT
Osborne et al. | Simultaneous multithreading applied to real time | |
US20050240699A1 (en) | Safe process deactivation | |
US11194615B2 (en) | Dynamic pause exiting | |
US11119831B2 (en) | Systems and methods for interrupting latency optimized two-phase spinlock | |
Joe et al. | Effects of dynamic isolation for full virtualized RTOS and GPOS guests | |
US20090241111A1 (en) | Recording medium having instruction log acquiring program recorded therein and virtual computer system | |
US12032474B2 (en) | Computer-readable recording medium storing acceleration test program, acceleration test method, and acceleration test apparatus | |
US20220413996A1 (en) | Computer-readable recording medium storing acceleration test program, acceleration test method, and acceleration test apparatus | |
JP7301892B2 (en) | A system that implements multithreaded applications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MICROSOFT CORPORATION, WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHIN, YAU NING;VEGA, RENE ANTONIO;SHEU, JOHN TE-JUI;AND OTHERS;SIGNING DATES FROM 20080731 TO 20080818;REEL/FRAME:046618/0515

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:046618/0752
Effective date: 20141014 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |