KR101330609B1 - Method For Scheduling of Mobile Multi-Core Virtualization System To Guarantee Real Time Process - Google Patents


Info

Publication number
KR101330609B1
KR101330609B1 (application number KR1020120034379A)
Authority
KR
South Korea
Prior art keywords
cpu
virtual cpu
physical
virtual
interrupt processing
Prior art date
Application number
KR1020120034379A
Other languages
Korean (ko)
Other versions
KR20130112180A (en)
Inventor
강용호
장창복
Original Assignee
주식회사 알투소프트
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 알투소프트
Priority to KR1020120034379A priority Critical patent/KR101330609B1/en
Publication of KR20130112180A publication Critical patent/KR20130112180A/en
Application granted granted Critical
Publication of KR101330609B1 publication Critical patent/KR101330609B1/en


Abstract

The present invention relates to a method of performing virtualization task scheduling in the scheduler of a mobile multicore virtualization system. The method comprises: identifying the location and type of an interrupt when the interrupt occurs; moving the virtual CPU that was running to a wait queue, or inserting a virtual CPU for interrupt processing into the run queue of domain 0; selecting a physical CPU to which the interrupt-processing virtual CPU is to be allocated based on preset policy information; and inserting the interrupt-processing virtual CPU into the run queue of the selected physical CPU so that it is processed first.
According to the present invention, when an interrupt is processed in a multicore system having two or more cores, the scheduler guarantees real-time processing and selects the optimal physical CPU among the assignable physical CPUs when allocating a virtual CPU, so the system can operate more efficiently than with existing methods.

Description

Scheme for Scheduling of Mobile Multi-Core Virtualization System To Guarantee Real Time Process

The present invention relates to a scheduling method for real-time processing in a mobile multicore virtualization system. More specifically, it relates to a technique in which the scheduler of a mobile multicore system assigns the highest priority to the virtual CPU that processes an interrupt generated on a specific physical CPU, thereby guaranteeing real-time execution, and selects the optimal physical CPU when allocating the interrupt-processing virtual CPU, enabling faster processing.

With the improvement of processor performance in embedded systems, interest in virtualization of embedded systems is increasing. Virtualization generally means running multiple independent operating systems on a single host computer. That is, by using a virtual machine monitor (VMM) on one piece of hardware, various services can be provided to multiple users, reducing costs and using system resources efficiently.

In order to operate multiple operating systems (OSs) on a multicore system having two or more cores, that is, multiple physical CPUs, a virtualization technology that allocates virtual CPUs and performs tasks on them is required. In other words, the physical CPUs are virtualized to create a plurality of virtual CPUs, and each operating system runs on virtual CPUs scheduled at the level of the hypervisor, the virtualization software. This requires a scheduler operating at the hypervisor level that allocates the virtual CPUs to the operating systems appropriately.

Traditional mobile virtualization scheduling research is mostly about how to allocate multiple virtual CPUs to a single physical CPU. Allocating multiple virtual CPUs to a single physical CPU requires a policy that keeps physical CPU resources fair and properly load balanced, and the hypervisor determines how many virtual CPUs to allocate to each domain. A physical CPU utilization rate is set in the scheduler, and the scheduler allocates virtual CPUs to each domain according to this rate.

Therefore, even for interrupt processing that requires fast execution, a calculation to maintain fairness and load balance is performed, and the virtual CPU with the highest weight according to that calculation is selected and allocated to the physical CPU. The selected virtual CPU may or may not be the one handling the interrupt, and the weight calculation introduces a delay, so real-time processing is not guaranteed. In a mobile virtualization system that uses a split I/O driver model, interrupts that occur in a guest domain must be handled in domain 0, which runs in privileged mode. Processing is therefore delayed, because interrupt-related work must be performed once again in domain 0 whenever an interrupt occurs. Moreover, on a single processor this method must send the virtual CPU running in domain 0 to the wait queue and schedule the interrupt-processing virtual CPU, requiring a second context switch, which makes real-time processing difficult.

In other words, during virtual CPU migration or interrupt processing for a specific physical CPU, the scheduler of an existing multicore virtualization system selects, from the virtual CPUs allocated to the physical CPUs, the one with the highest priority computed from various factors (per-domain weights, etc.), and replaces the virtual CPU currently running on the physical CPU where the change event occurred. This is an advantage for load balancing and fairness, but because those various factors are considered when allocating virtual CPUs to physical CPUs even when I/O interrupts and virtual CPU change events are given top priority, the interrupt-processing virtual CPU, which requires fast handling, may not be allocated to a physical CPU at all, and selecting the virtual CPU to allocate takes time. In addition, with a split I/O driver model, many I/O-related interrupts must be handled in domain 0, so the utilization of domain 0 rises momentarily and the virtual CPU for domain 0 can fall into an idle state. While it is idle, its physical CPU also remains idle, which is inappropriate for domain 0, which must handle all I/O interrupts.

Meanwhile, as mobile device performance improves, multicore processors are being installed in mobile and embedded devices. Research is therefore needed on multicore scheduling that minimizes context switches and handles interrupts quickly by exploiting the multiple cores.

However, the scheduling algorithms of virtualization systems that use the current split I/O driver model focus on CPU fairness and cannot process real-time I/O within a bounded time, so they are difficult to apply to embedded devices.

Therefore, the main purpose of the present invention is to give priority to the interrupt-processing virtual CPU so that it is scheduled immediately, without the delay of fairness and load-balancing calculations, thereby guaranteeing real-time processing; and, when allocating the virtual CPU among several available physical CPUs, to select the optimal physical CPU by considering each core's shared cache, the time slice, and idle-state physical CPU information.

According to an aspect of the present invention for achieving the above object, there is provided a scheduling method of a mobile multi-core virtualization system comprising: identifying the location and type of an interrupt when the interrupt occurs; moving the virtual CPU that was running to a wait queue, or inserting a virtual CPU for interrupt processing into the run queue of domain 0; selecting a physical CPU to which the interrupt-processing virtual CPU is to be allocated based on preset policy information; and inserting the interrupt-processing virtual CPU into the run queue of the selected physical CPU so that it is processed first.

Here, when a response interrupt for the virtual CPU that was running on the selected physical CPU occurs after the interrupt processing, it is preferable to insert that virtual CPU from the wait queue into the run queue so that it is executed first.

The policy information may include at least one of an interrupt occurrence history, cache sharing, a time slice, previously executed virtual CPU information, and an idle state.

In selecting the physical CPU to which the interrupt-processing virtual CPU is allocated based on the preset policy information, it is more preferable that, when there are two or more physical CPUs to which the virtual CPU can be allocated, the optimal physical CPU is selected based on the policy information, and that, when the remaining physical CPUs are available, the virtual CPUs not processed by the selected physical CPU are allocated to the remaining physical CPUs.

According to the present invention, when an interrupt is processed in a multicore system having two or more cores, the scheduler guarantees real-time processing and selects the optimal physical CPU among the assignable physical CPUs when allocating a virtual CPU, so the system can operate more efficiently than with existing methods.

FIG. 1 is a scheduling overview diagram of a mobile multicore virtualization system.
FIG. 2 is a block diagram of a scheduling apparatus of a mobile multicore virtualization system according to an exemplary embodiment.
FIG. 3 is a flowchart of a scheduling method of a mobile multicore virtualization system according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating an example of a scheduling result according to an interrupt occurrence.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present invention. As those skilled in the art would realize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. In order to clearly illustrate the present invention, parts not related to the description are omitted, and similar parts are denoted by like reference characters throughout the specification.

Throughout the specification, when a part is said to "include" a certain component, this means that it may further include other components, not that other components are excluded, unless specifically stated otherwise.

When an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, it should be understood that there are no intervening elements.

In addition, the suffixes "module" and "part" for the components used in the following description are given or used interchangeably only for ease of writing the specification, and do not by themselves have distinct meanings or roles.

FIG. 1 is a scheduling overview diagram of a mobile multicore virtualization system.

FIG. 1 illustrates the physical CPUs managed by the hypervisor's scheduler (P_CPU0 to P_CPU3), the caches shared by the physical CPUs (an L2 cache for P_CPU0 and P_CPU1, and an L2 cache for P_CPU2 and P_CPU3), the virtualized operating systems that are running (Domain 0 to Domain 2), the virtual CPUs (VCPU0 to VCPU17) on which each domain executes, and the run-queue list structure used for scheduling.

In the run queue used for scheduling, virtual CPUs are arranged in descending order of scheduling priority. Each virtual CPU has a scheduling priority of 'Interrupt', 'Normal', or 'Idle', with 'Interrupt' being the highest.
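The three-level priority ordering described above can be sketched in Python. This is a hypothetical illustration; the class and function names are not from the patent:

```python
from enum import IntEnum

class Priority(IntEnum):
    # Lower value means scheduled earlier: Interrupt > Normal > Idle.
    INTERRUPT = 0
    NORMAL = 1
    IDLE = 2

class VCPU:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority

def insert_vcpu(run_queue, vcpu):
    """Keep the run queue sorted by scheduling priority; among equal
    priorities, earlier arrivals stay ahead (FIFO)."""
    for i, queued in enumerate(run_queue):
        if vcpu.priority < queued.priority:
            run_queue.insert(i, vcpu)
            return
    run_queue.append(vcpu)
```

A vCPU inserted with 'Interrupt' priority always lands ahead of every 'Normal' and 'Idle' vCPU, which is what lets the scheduler run it without any fairness calculation.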

If a physical CPU itself is in an idle state, the scheduling priority of the virtual CPU assigned to that physical CPU is set to 'Idle'; when a virtual CPU that requested I/O work or a resource lock wakes up, its scheduling priority is set to 'Interrupt', the highest priority.

In FIG. 1, among the physical CPUs, P_CPU3 is a processor that executes a virtual CPU for domain 0; it shows the case where overhead for domain 0 has occurred and the CPU is currently idle.

Each physical CPU is occupied by the virtual CPUs arranged in its own scheduling queue, the run queue, and a virtual CPU sometimes occupies a physical CPU by migrating from the run queue of another physical CPU. In this case, a policy is required to guarantee fairness and load balance among the virtual CPUs. In the present invention, fairness and load balance are maintained through the number of virtual CPUs held in each run queue. However, interrupt processing, which requires real-time handling, is excluded from this count and thus from the fairness and load-balance policy.
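The fairness measure described here, counting the vCPUs in each run queue while excluding interrupt-processing vCPUs, can be sketched in Python. The record layout and names are illustrative, not from the patent:

```python
from collections import namedtuple

# Illustrative vCPU record: a name plus a flag marking
# interrupt-processing vCPUs, which the fairness count skips.
VCPU = namedtuple("VCPU", ["name", "is_interrupt"])

def run_queue_load(queue):
    """Fairness/load-balance measure: the number of vCPUs in the run
    queue, excluding interrupt-processing vCPUs as the text describes."""
    return sum(1 for v in queue if not v.is_interrupt)

def least_loaded_pcpu(run_queues):
    """Pick the physical CPU whose run queue holds the fewest counted vCPUs."""
    return min(run_queues, key=lambda cpu: run_queue_load(run_queues[cpu]))
```

Because interrupt vCPUs are invisible to this count, placing one on a physical CPU never skews later fairness decisions.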

Other policies may also be considered. However, whatever fairness and load-balance policy is used, the idea remains unchanged: during the interrupt processing proposed in the present invention, no fairness or load-balance calculation is performed, and the optimal physical CPU among the several physical CPUs is selected so that the interrupt virtual CPU is assigned immediately.

In other words, as described in the related art, in a virtualization system using a split I/O driver model, the front-end drivers reside in the guest domains (Domain 1 and Domain 2) and the back-end driver resides in domain 0. An I/O-related interrupt generated in a guest domain is forwarded through the front-end driver to the back-end driver in domain 0, so such interrupts are processed in domain 0. Therefore, if an I/O-related interrupt occurs in domain 1, the virtual CPU running on that physical CPU stops executing and is inserted into the wait queue until the interrupt is handled, and an I/O request is made to domain 0. In domain 0, a virtual CPU for processing the interrupt that occurred in domain 1 is inserted into domain 0's run queue. In the conventional scheduling method, the scheduler handles this by calculating the weight of the virtual CPUs in each run queue in consideration of fairness and physical CPU load balance, selecting the virtual CPU with the highest weight, and allocating it to a physical CPU. A delay therefore occurs for the duration of the weight calculation, and there is no guarantee that the interrupt-processing virtual CPU will be allocated to a physical CPU at all. This approach cannot be used where real-time processing is required, as in mobile and embedded devices.

Multicore processors are generally designed so that every two cores share one cache (an L2 cache). This shared cache is a storage space that holds data the CPU will need in advance, in order to overcome the speed difference with other devices. The more the two cores reuse data stored in the L2 cache, the better the performance.

FIG. 2 is a block diagram of a scheduling apparatus of a mobile multicore virtualization system according to an exemplary embodiment.

As shown in FIG. 2, the scheduler 100 of the mobile multi-core virtualization system according to an embodiment of the present invention comprises an interrupt processing module 110, a virtual CPU processing module 120, a processor selection module 130, and a processor allocation module 140.

The interrupt processing module 110 monitors and detects interrupts generated when each virtual CPU is executed, and determines types of interrupts.

The virtual CPU processing module 120 moves virtual CPUs between the wait queue and the run queue and sets their priorities in order to process the generated interrupt.

The processor selection module 130 determines to which physical CPU the selected virtual CPU should be allocated.

The processor allocation module 140 allocates the virtual CPU selected by the virtual CPU processing module 120 to the physical CPU selected by the processor selection module 130.

The scheduling of the mobile multicore virtualization system performed by each module is described in more detail with reference to FIG. 3.

FIG. 3 is a flowchart of a scheduling method of a mobile multicore virtualization system according to an embodiment of the present invention.

As shown in FIG. 3, when an interrupt occurs, the location and type of the interrupt are first identified.

In step S20, according to the type of interrupt (an I/O request, an I/O completion notification, etc.), the running virtual CPU is moved to the wait queue, or a virtual CPU located in the wait queue is moved to the run queue. In addition, in the case of an I/O request interrupt, an interrupt-processing virtual CPU with the highest priority is inserted into the run queue of the physical CPU on which domain 0 is running, for I/O interrupt processing.
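The queue movements of step S20 can be sketched as follows. This is a hypothetical illustration using plain lists for the queues; none of these names appear in the patent:

```python
def on_io_request_interrupt(pcpu, dom0_run_queue, make_interrupt_vcpu):
    """Step S20 (sketch): park the vCPU that was running on the
    interrupted physical CPU, then enqueue an interrupt-processing
    vCPU at the head of domain 0's run queue."""
    running = pcpu["running"]
    if running is not None:
        pcpu["wait_queue"].append(running)   # move to the standby queue
        pcpu["running"] = None
    # Highest priority: the interrupt vCPU goes to the queue head.
    dom0_run_queue.insert(0, make_interrupt_vcpu())
```

`make_interrupt_vcpu` stands in for whatever creates the domain 0 vCPU that will service the I/O request.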

Then, in step S30, a processor capable of handling the generated interrupt is selected; the physical CPU information shown in Table 1 below can be used for this decision.

[Table 1: physical CPU information (original image: Figure 112012026639968-pat00001)]

In the example of Table 1, the CPU field is the name of a physical CPU constituting the multicore, and the interrupt field records whether an interrupt has occurred. The recently used virtual CPU field contains information on the virtual CPUs recently executed by each physical CPU. The status field contains the current state of the physical CPU and takes one of three values: running, waiting, and idle. The shared cache field holds the same numeric value for physical CPUs that share a cache: in Table 1, P_CPU0 and P_CPU1, with the value 0, share a cache, and P_CPU2 and P_CPU3, with the value 1, share a cache. The domain field indicates the domain each physical CPU is running, showing that P_CPU0 and P_CPU3 execute virtual CPUs for domain 0. The run queue field represents the number of virtual CPUs waiting in each physical CPU's run queue.

Policies for processor selection can consider CPUs on which the interrupt occurred, CPUs sharing the L2 cache, CPU selection based on deadline (time slice, etc.), selection using the virtual CPU information previously executed on each CPU, CPUs in the idle state, and so on.
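One way to combine these policy items into a selection decision is a simple scoring function over the Table 1 fields. The sketch below is an assumption about how such a policy could be weighted; the patent does not specify concrete weights, and the field names only mirror Table 1:

```python
def select_pcpu(candidates, pcpu_info, interrupted_vcpu):
    """Score each candidate physical CPU against illustrative policy
    items: interrupt occurrence, cache-warm affinity with the vCPU
    that took the interrupt, and idle state. Weights are hypothetical."""
    def score(cpu):
        info = pcpu_info[cpu]
        s = 0
        if info["interrupt"]:                        # interrupt occurred here
            s += 4
        if info["recent_vcpu"] == interrupted_vcpu:  # recently ran this vCPU
            s += 2
        if info["status"] == "idle":                 # free to take work now
            s += 1
        return s
    return max(candidates, key=score)
```

With the FIG. 3 scenario (interrupt on P_CPU1, P_CPU3 idle), such a scoring prefers P_CPU1, matching the example that follows.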

According to the processor selected in step S30, the virtual CPU is allocated to a processor in steps S40 and S50. Step S40 covers the case where one processor is available and assigns the virtual CPU to that processor; step S50 covers the case where there are two or more assignable processors, finds the optimal processor to handle the interrupt, and, if other processors are also available, allocates the remaining virtual CPUs to them.

Looking at steps S40 and S50 in more detail, several cases can occur in the physical CPU selection decision.

The first is the case where there is one physical CPU that can be allocated. If there is a physical CPU with a history of executing the virtual CPU on which the recent interrupt occurred, that physical CPU has stopped executing and moved its running virtual CPU to the wait queue, so it exists in a paused state. The physical CPU is therefore waiting to be allocated a new virtual CPU, and this is the case of allocating the interrupt-processing virtual CPU to it.

The second is the case where there are two or more physical CPUs that can be allocated. For example, both a physical CPU with a history of executing the virtual CPU on which the recent interrupt occurred and a physical CPU in the idle state may execute a new virtual CPU. Although an idle physical CPU may have been stopped for a while depending on its domain's physical CPU utilization, a change in the corresponding domain allows the physical CPU to be used.

For example, according to the physical CPU information in FIG. 3, the physical CPU on which the interrupt occurred is P_CPU1, and P_CPU3 is in the idle state. The currently available physical CPUs are therefore P_CPU1, in the waiting state, and P_CPU3, in the idle state. Considering the shared cache and the recently used virtual CPU, P_CPU1 is the optimal choice, so the interrupt-processing virtual CPU is allocated to it. As for the idle P_CPU3, although it is idle, it is designated for domain 0 and can use the same shared cache as P_CPU0, which runs domain 0; therefore the waiting virtual CPU of P_CPU1 (VCPU6) is changed to be processed on P_CPU3, and the physical CPU information is changed so that the virtual CPU waiting on P_CPU3 is processed on P_CPU1. Table 2 below shows the information after these physical CPU selection decisions have been applied to Table 1.

[Table 2: physical CPU information after the selection decision (original image: Figure 112012026639968-pat00002)]

That is, P_CPU3 is set to process a virtual CPU for domain 0 and bears the overhead of that virtual CPU, but it is currently idle, so no overhead problem arises if it processes a virtual CPU for domain 1 or domain 2 rather than domain 0. In this regard, the interrupt is processed on P_CPU1, which has a recent interrupt-processing history and can therefore handle it more easily, and the other virtual CPUs that P_CPU1 cannot process (for example, VCPU6 and VCPU8 in FIG. 4) are processed on the idle P_CPU3, which improves real-time responsiveness to interrupts and increases the efficiency of virtual CPU work.
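The hand-off described in this example, keeping the interrupt vCPU on the selected CPU and migrating the displaced vCPUs (VCPU6, VCPU8) to the idle cache-sharing CPU, can be sketched as follows (a hypothetical helper that assumes the interrupt vCPU sits at the head of the selected CPU's run queue):

```python
def rebalance_after_interrupt(selected, idle_pcpu, run_queues):
    """After the interrupt vCPU takes the selected physical CPU, hand
    the vCPUs queued behind it to the idle CPU that shares the same
    L2 cache, instead of leaving them to wait behind the interrupt."""
    displaced = run_queues[selected][1:]        # everything behind the interrupt vCPU
    run_queues[selected] = run_queues[selected][:1]
    run_queues[idle_pcpu].extend(displaced)     # raise the idle CPU's utilization
```

This keeps the interrupt path short on the selected CPU while the otherwise-idle CPU absorbs the displaced work.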

FIG. 4 illustrates an example of a scheduling result after an interrupt occurs. For clarity, it describes the series of steps performed when an interrupt occurs while the virtual CPU VCPU5 is executing on the physical CPU P_CPU1, based on the physical CPU information.

First, when the interrupt is generated, the virtual CPU VCPU5 running on the physical CPU P_CPU1 is moved to the wait queue for the interrupt processing (1).

The scheduler 100 inserts the interrupt-processing virtual CPU VCPU18 into the run queue of the physical CPU P_CPU0, which is currently running domain 0 (2).

To allocate the virtual CPU VCPU18 to a physical CPU immediately, the scheduler retrieves the available physical CPUs using the physical CPU information of FIG. 3, selects a physical CPU in consideration of the factors described above, and, according to the selection result, allocates VCPU18 to P_CPU1 (3-1).

In addition, since P_CPU3 is a CPU occupied by domain 0 even in its current idle state, the scheduler 100 allocates VCPU6, which could not be allocated because of the interrupt processing of VCPU5, to P_CPU3 to raise utilization (3-2). At this time, the scheduler changes the physical CPU information so that the virtual CPUs in P_CPU1's run queue are allocated to P_CPU3.

If a response interrupt for VCPU5 occurs after the interrupt processing, VCPU5 in wait queue 1 is inserted into run queue 1 (4). At this point VCPU5 has the highest priority and is executed immediately by the scheduler; the processing here follows the same method described above (5).
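The response-interrupt step (4) amounts to moving the parked vCPU back to the head of a run queue. A minimal sketch, assuming list-based queues as in the earlier illustrations:

```python
def on_response_interrupt(wait_queue, run_queue, vcpu):
    """When the I/O completion (response) interrupt arrives, move the
    vCPU parked earlier from the wait queue back to the run queue,
    at the head, so the scheduler executes it immediately."""
    wait_queue.remove(vcpu)       # leave the standby queue
    run_queue.insert(0, vcpu)     # highest priority: head of run queue
```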

However, since the only usable physical CPU is P_CPU1, the allocated physical CPU is P_CPU1. Finally, when the idle state is released, the virtual CPU located in run queue 3 is allocated to P_CPU1 according to the physical CPU information (6).

FIG. 4 described one example of a scheduling result after an interrupt occurs, but this is merely an example; scheduling can arise in many different cases. Regardless of the number of such cases, any method that includes the ideas of the present invention, namely giving the highest priority to the virtual CPU for interrupt processing, performing no fairness or load-balancing calculation during interrupt processing regardless of the fairness and load-balancing policy, and immediately allocating the interrupt virtual CPU by selecting the optimal physical CPU among the multiple physical CPUs, is obviously included in the technical spirit of the present invention.

100: scheduler 110: interrupt processing module
120: virtual CPU processing module 130: processor selection module
140: processor allocation module

Claims (4)

  1. A method of performing virtualization task scheduling in a scheduler of a mobile multi-core virtualization system, the method comprising:
    identifying the location and type of an interrupt when the interrupt occurs;
    moving the virtual CPU that was running at the time of the generated interrupt to a wait queue, or inserting a virtual CPU for interrupt processing into a run queue of domain 0;
    selecting a physical CPU to which the virtual CPU for interrupt processing is to be allocated based on preset policy information; and
    inserting the virtual CPU for interrupt processing into a run queue of the selected physical CPU so that the virtual CPU for interrupt processing is processed first.
  2. The method of claim 1,
    wherein, when a response interrupt for the virtual CPU that was running on the selected physical CPU occurs after the interrupt processing, the corresponding virtual CPU in the wait queue is inserted into the run queue and executed with first priority.
  3. The method of claim 1,
    wherein the policy information includes at least one of interrupt occurrence history, cache sharing, time slice, previously executed virtual CPU information, and idle state.
  4. The method of claim 1,
    wherein the selecting of a physical CPU to which the virtual CPU for interrupt processing is allocated based on the preset policy information comprises:
    when there are two or more physical CPUs to which the virtual CPU for interrupt processing can be allocated, selecting the optimal physical CPU based on the policy information, and, when the remaining physical CPUs are available, allocating the virtual CPUs not processed by the selected physical CPU to the remaining physical CPUs.
KR1020120034379A 2012-04-03 2012-04-03 Method For Scheduling of Mobile Multi-Core Virtualization System To Guarantee Real Time Process KR101330609B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020120034379A KR101330609B1 (en) 2012-04-03 2012-04-03 Method For Scheduling of Mobile Multi-Core Virtualization System To Guarantee Real Time Process

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020120034379A KR101330609B1 (en) 2012-04-03 2012-04-03 Method For Scheduling of Mobile Multi-Core Virtualization System To Guarantee Real Time Process

Publications (2)

Publication Number Publication Date
KR20130112180A KR20130112180A (en) 2013-10-14
KR101330609B1 (en) 2013-11-18

Family

ID=49633225

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020120034379A KR101330609B1 (en) 2012-04-03 2012-04-03 Method For Scheduling of Mobile Multi-Core Virtualization System To Guarantee Real Time Process

Country Status (1)

Country Link
KR (1) KR101330609B1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102033434B1 (en) 2014-01-28 2019-10-17 한국전자통신연구원 Apparatus and method for multi core emulation based on dynamic context switching
KR20170081952A (en) 2016-01-05 2017-07-13 한국전자통신연구원 Multi-core simulation system and method based on shared translation block cache
CN107102966A (en) * 2016-02-22 2017-08-29 龙芯中科技术有限公司 multi-core processor chip, interrupt control method and controller

Citations (4)

Publication number Priority date Publication date Assignee Title
KR20120017294A (en) * 2010-08-18 2012-02-28 삼성전자주식회사 System and method of scheduling
KR20120019330A (en) * 2010-08-25 2012-03-06 삼성전자주식회사 Scheduling apparatus and method for a multicore system
KR20120068572A (en) * 2010-12-17 2012-06-27 삼성전자주식회사 Apparatus and method for compilation of program on multi core system
KR20120070303A (en) * 2010-12-21 2012-06-29 삼성전자주식회사 Apparatus for fair scheduling of synchronization in realtime multi-core systems and method of the same


Also Published As

Publication number Publication date
KR20130112180A (en) 2013-10-14

Similar Documents

Publication Publication Date Title
Zhang et al. Dynamic heterogeneity-aware resource provisioning in the cloud
Lee et al. Supporting soft real-time tasks in the xen hypervisor
US8667496B2 (en) Methods and systems of managing resources allocated to guest virtual machines
US8095929B1 (en) Method and system for determining a cost-benefit metric for potential virtual machine migrations
US8161491B2 (en) Soft real-time load balancer
Leverich et al. Reconciling high server utilization and sub-millisecond quality-of-service
EP2071458B1 (en) Power control method for virtual machine and virtual computer system
US9152467B2 (en) Method for simultaneous scheduling of processes and offloading computation on many-core coprocessors
EP2411915B1 (en) Virtual non-uniform memory architecture for virtual machines
Song et al. Multi-tiered on-demand resource scheduling for VM-based data center
Samal et al. Analysis of variants in Round Robin Algorithms for load balancing in Cloud Computing
US8838801B2 (en) Cloud optimization using workload analysis
CN101452406B (en) Cluster load balance method transparent for operating system
KR20120111734A (en) Hypervisor isolation of processor cores
Steinder et al. Server virtualization in autonomic management of heterogeneous workloads
US10417048B2 (en) Mechanism for scheduling execution of threads for fair resource allocation in a multi-threaded and/or multi-core processing system
US8910153B2 (en) Managing virtualized accelerators using admission control, load balancing and scheduling
Herman et al. RTOS support for multicore mixed-criticality systems
KR101658035B1 (en) Virtual machine monitor and scheduling method of virtual machine monitor
Sotomayor et al. Capacity leasing in cloud systems using the opennebula engine
AU2014311463B2 (en) Virtual machine monitor configured to support latency sensitive virtual machines
US9183016B2 (en) Adaptive task scheduling of Hadoop in a virtualized environment
US8312464B2 (en) Hardware based dynamic load balancing of message passing interface tasks by modifying tasks
Mei et al. Performance measurements and analysis of network i/o applications in virtualized cloud
Boutcher et al. Does virtualization make disk scheduling passé?

Legal Events

Code: Title/Description
A201: Request for examination
E701: Decision to grant or registration of patent right
GRNT: Written decision to grant
FPAY: Annual fee payment (payment date: 20161104; year of fee payment: 4)
FPAY: Annual fee payment (payment date: 20171110; year of fee payment: 5)
FPAY: Annual fee payment (payment date: 20181107; year of fee payment: 6)
Year of fee payment: 6