KR20130074953A - Apparatus and method for dynamic virtual machine placement - Google Patents
Apparatus and method for dynamic virtual machine placement
- Publication number
- KR20130074953A (application number KR1020110143097A)
- Authority
- KR
- South Korea
- Prior art keywords
- virtual machine
- cache miss
- miss rate
- virtual
- node
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/161—Computing infrastructure, e.g. computer clusters, blade chassis or hardware partitioning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/163—Interprocessor communication
- G06F15/167—Interprocessor communication using a common memory, e.g. mailbox
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45504—Abstract machines for programme code execution, e.g. Java virtual machine [JVM], interpreters, emulators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The present invention relates to an apparatus and method for efficiently using the resources of a cloud computing system.
In order to solve the above technical problem, the dynamic virtual machine placement apparatus according to an embodiment of the present invention, in a cloud computing system including a cluster, comprises a cache miss rate collector that collects the cache miss rates of a plurality of virtual machines included in a plurality of nodes of the cluster, and a virtual machine relocator that relocates the virtual machines according to the cache miss rates.
Description
The present invention relates to an apparatus and method for dynamically deploying virtual machines to efficiently use resources of a cloud computing system.
Conventional process placement techniques solve the shared resource contention problem within one physical machine. In a cloud environment, however, a plurality of physical machines exist, and each may host more than one virtual machine. In this case, even when shared resource contention is resolved within one virtual machine, contention between virtual machines still lowers the efficiency of resource usage. Examples of such shared resources include cache memory, memory bandwidth, network bandwidth, and disk performance. If two virtual machines use CPUs that share a cache memory (LLC), and the applications running in both virtual machines use the cache heavily, the cache hit ratio becomes lower than when each machine uses the cache exclusively, so the performance of the virtual machines degrades. The same problem occurs with the other shared resources.
FIG. 1 is a view showing a virtual machine arrangement result according to an embodiment of the prior art. A virtual machine competes with the other virtual machines on its system for that system's shared resources; when several virtual machines that use a large amount of shared resources are deployed on a single system, the performance of those virtual machines may degrade.
In this case, the system was idle at the time the virtual machines were scheduled and placed, but the virtual machines later come to use cache resources heavily, which degrades the performance of the entire system. When the two virtual machines indicated by dotted lines in FIG. 1 are deployed, no cache-intensive virtual machine is present yet; but if the cache usage of another virtual machine increases rapidly after deployment, the cache miss rate of the virtual machines operating on that system rises and their performance suffers.
As described above, in an existing cloud computing system, virtual machines suffer performance degradation due to contention for shared resources. Allocating shared resources within one physical machine may not provide enough for multiple virtual machines even while other physical machines sit idle, so shared resources cannot be managed efficiently across the cloud computing system as a whole. In addition, in the related art, virtual machine relocation is used only to balance CPU load and memory load, and does not consider the performance degradation of virtual machines caused by shared resources. A virtual machine relocation method that prevents performance degradation of each virtual machine was therefore required.
The present invention is intended to solve the above problems. The technical object of the present invention is to provide a dynamic virtual machine placement apparatus and method that enable efficient use of shared resources without the user experiencing the problems described above.
In order to solve the above technical problem, the dynamic virtual machine placement apparatus according to an embodiment of the present invention, in a cloud computing system including a cluster, includes a cache miss rate collector that collects the cache miss rates of a plurality of virtual machines included in a plurality of nodes of the cluster, and a virtual machine relocator that relocates the virtual machines according to the cache miss rates.
In addition, in the dynamic virtual machine placement apparatus according to an embodiment of the present invention, the virtual machine relocator includes a global phase virtual machine relocation unit. The global phase virtual machine relocation unit includes a node selector that receives the cache miss rate information and selects the nodes having the highest and lowest cache miss rates, a virtual machine selector that selects the virtual machine having the highest cache miss rate and the virtual machine having the lowest cache miss rate on those nodes, and a node virtual machine relocation unit that redeploys the selected virtual machines by interchanging them.
In addition, the virtual machine relocator includes a local phase virtual machine relocation unit. The local phase virtual machine relocation unit includes a virtual machine aligning unit that sorts the virtual machines included in each node according to their cache miss rates, a virtual machine grouping unit that groups the sorted virtual machines into groups containing the same number of virtual machines as the number of sockets in the node, and a socket virtual machine relocation unit that relocates the virtual machines of each group to the plurality of sockets.
In addition, the socket virtual machine relocation unit relocates the virtual machine having the highest cache miss rate and the virtual machine having the lowest cache miss rate so that they are included in the same socket.
In addition, the virtual machine relocation unit relocates virtual machines using live-migration.
In order to solve the above technical problem, the dynamic virtual machine placement method according to an embodiment of the present invention, in a cloud computing system including a cluster, comprises collecting the cache miss rates of a plurality of virtual machines included in a plurality of nodes of the cluster and relocating the virtual machines according to the cache miss rates.
In addition, in the dynamic virtual machine placement method according to an embodiment of the present invention, relocating the virtual machines includes receiving the cache miss rate information and selecting the nodes having the highest and lowest cache miss rates, selecting the virtual machine having the highest cache miss rate and the virtual machine having the lowest cache miss rate on those nodes, and redeploying the selected virtual machines by interchanging them.
In addition, relocating the virtual machines includes sorting the virtual machines included in each node according to their cache miss rates, grouping the sorted virtual machines into groups containing the same number of virtual machines as the number of sockets in the node, and relocating the virtual machines of each group to the plurality of sockets.
In addition, in the relocation to sockets, the virtual machine having the highest cache miss rate and the virtual machine having the lowest cache miss rate are relocated so that they are included in the same socket.
In addition, the virtual machines are relocated using live-migration.
According to one embodiment of the invention, it is possible to provide a user with a virtual machine of fair performance.
According to one embodiment of the present invention, it is possible to increase the resource efficiency of the entire cluster system.
According to an embodiment of the present invention, it is possible to mitigate the performance degradation of virtual machines caused by shared resource contention in a cloud computing system.
FIG. 1 is a view showing a virtual machine arrangement result according to an embodiment of the prior art.
FIG. 2 is a view showing the structure of OpenNebula according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating a process of transmitting virtual machine monitoring information according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating a live-migration process according to an embodiment of the present invention.
FIG. 5 illustrates a dual socket multicore CPU according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating virtual machine placement and relocation between physical machines according to an embodiment of the present invention.
FIG. 7 illustrates a dynamic virtual machine relocation apparatus according to an embodiment of the present invention.
FIG. 8 illustrates a local phase virtual machine relocation unit according to an embodiment of the present invention.
FIG. 9 illustrates a local phase virtual machine relocation according to an embodiment of the present invention.
FIG. 10 is a flowchart illustrating a local phase virtual machine relocation method according to an embodiment of the present invention.
FIG. 11 illustrates a global phase virtual machine relocation unit according to an embodiment of the present invention.
FIG. 12 illustrates a global phase virtual machine relocation according to an embodiment of the present invention.
FIG. 13 is a flowchart illustrating a global phase virtual machine relocation method according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings, but the present invention is not limited to or by these embodiments.
The terms used herein are, as far as possible, general terms widely used in view of the functions of the present invention, but their meanings may vary depending on the intention or custom of a person skilled in the art or the emergence of new technologies. In certain cases, a term may have been arbitrarily chosen by the applicant, in which case its meaning is explained in the corresponding part of the description. Therefore, the terminology used herein should be interpreted based on the meaning of each term and the overall contents of the specification, rather than on the name of the term alone.
In the present invention, cloud computing is a technology that integrates the resources of clusters existing in different physical locations using virtualization and provides them as a service. In other words, it refers to a service in which IT resources such as hardware and software are rented when necessary and paid for according to usage. Through virtual machines, a user can be provided with performance suited to the application to be run, regardless of the performance of any individual cluster resource.
Hereinafter, the terms cloud computing system, cluster, and virtual machine are not limited to their literal wording; they refer, respectively, to a system that provides computing using distributed resources, to a set of distributed computing resources, and to a computing resource that presents the distributed computing resources contained therein to the user as a single computing resource.
In a cloud computing system, each virtual machine uses some resources heavily and others only rarely. Therefore, even when virtual machines share a single physical machine, the degree of performance degradation varies depending on how the virtual machines are placed. For example, when a virtual machine that uses a large amount of cache memory is placed together with one that uses only a small amount, performance interference is relatively low, so the physical machine can be used more efficiently.
The method proposed by the present invention for minimizing performance degradation due to cache memory sharing is to form, from the given programs, program sets whose overall performance interference is minimized, and to place each set on a CPU.
To minimize performance interference, the cache memory requirement of each program is taken into account: interference is minimized when the sum of the cache memory requirements of the programs running on one CPU, that is, on cores sharing a cache memory, is close to the cache memory capacity of that CPU.
However, measuring the cache memory requirement of a program directly is not easy. Therefore, the cache miss rate (LLC miss rate) is used to estimate the required amount of cache memory: the higher the cache miss rate, the higher the cache memory requirement. The cache miss rate has the advantage that it can be obtained without any additional device.
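As an illustration of how such a rate could be derived from monitored counters (per-VM LLC miss counts and executed instruction counts, the quantities collected by the monitoring described later with FIG. 3), the sketch below normalizes misses by instructions executed. The patent does not fix a particular normalization, so this misses-per-thousand-instructions formula and the function name are only an assumption.

```python
def cache_miss_rate(llc_misses: int, instructions: int) -> float:
    """Estimated cache pressure of one VM as LLC misses per 1000 instructions.

    Both inputs are hardware counters sampled over the same interval; the
    per-instruction normalization is an assumed convention, not something
    the patent specifies.
    """
    if instructions == 0:
        return 0.0
    return 1000.0 * llc_misses / instructions
```

For example, a virtual machine that misses 2,000,000 times while executing 500,000,000 instructions would score 4.0 and would be treated as more cache-hungry than one scoring 0.5.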
As one embodiment, the present invention describes a method of reducing the performance interference caused by the cache memory, among the resources shared within a physical machine.
The cloud computing system of the present invention assumes an IaaS (Infrastructure as a Service) based cloud computing platform, in which the infrastructure is provided to customers in the form of virtual machines. One example of software providing such a function is OpenNebula, which is open-source software.
FIG. 2 is a view showing the structure of OpenNebula according to an embodiment of the present invention.
The lowest layer of the OpenNebula structure consists of the cluster nodes that supply the physical computing resources; a hypervisor runs on each node and creates the virtual machines.
In addition, there is a front-end in which the OpenNebula core and the scheduler operate, managing the creation, monitoring, and placement of virtual machines on the cluster nodes.
The private cloud system built on this structure provides users with virtual machines drawn from the combined resources of the cluster.
FIG. 3 is a diagram illustrating a process of transmitting virtual machine monitoring information according to an embodiment of the present invention.
The virtual machine monitoring of the present invention periodically observes not only the usage of the CPU, memory, and network, but also the CPU-affinity on which each virtual machine is currently operating, the number of misses of the last level cache (LLC), and the number of instructions executed by the virtual machine. Here, CPU-affinity indicates which CPU socket a virtual machine is better suited to, that is, the degree of affinity between the virtual machine and a CPU socket.
Each cluster node periodically transmits this monitoring information, including the LLC miss counts and CPU-affinity of its virtual machines (32), to the front-end (31), where it is used as the basis for deciding virtual machine relocation.
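Purely as an illustration of what such a report could contain, the sketch below serializes the per-VM counters named above into JSON. The field names, units, and the JSON format itself are assumptions; the patent only states that LLC miss and cpu-affinity information (32) is delivered to the front-end (31).

```python
import json
import time

def build_monitoring_report(node_name, vm_samples):
    """Build one node-to-front-end report from a list of per-VM samples.

    vm_samples: list of dicts with keys "name", "llc_misses", "instructions",
    and "cpu_affinity". The wire format is illustrative only.
    """
    return json.dumps({
        "node": node_name,
        "timestamp": time.time(),
        "vms": [
            {
                "name": s["name"],
                "llc_misses": s["llc_misses"],
                "instructions": s["instructions"],
                "cpu_affinity": s["cpu_affinity"],
            }
            for s in vm_samples
        ],
    })
```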
FIG. 4 is a diagram illustrating a live-migration process according to an embodiment of the present invention.
Live-migration means migrating a currently powered-on virtual machine to another physical machine without switching it off. This allows a virtual machine to be redeployed to another physical machine without interrupting the computing work it is performing, ensuring continuity of work.
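The patent does not name a specific hypervisor interface for performing live-migration. As a hedged illustration only, on KVM hosts managed through libvirt a running virtual machine could be moved as sketched below; the connection URIs and the virtual machine name are hypothetical.

```python
import libvirt

def live_migrate(vm_name, src_uri, dst_uri):
    """Illustrative live-migration via libvirt; names and URIs are hypothetical."""
    src = libvirt.open(src_uri)   # e.g. "qemu+ssh://node1/system"
    dst = libvirt.open(dst_uri)   # e.g. "qemu+ssh://node2/system"
    dom = src.lookupByName(vm_name)
    # VIR_MIGRATE_LIVE keeps the guest running while its memory is copied over.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
    src.close()
    dst.close()
```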
The scheduler operating in the front-end (31) gathers the monitoring information from the cluster nodes and decides which virtual machines should be relocated to which nodes.
The scheduler periodically monitors the information (35) of the cluster nodes, and when LLC misses are concentrated on a particular node, it selects the virtual machine that puts the most load on that node and live-migrates it to a node whose load is judged to be relatively low.
For example, when the virtual machine placing the greatest load on one relocation target node (33) is identified, it can be live-migrated to another relocation target node (34) whose load is relatively low.
Through the operation of the scheduler, the load can be balanced among the nodes of the cluster included in the cloud system, enabling efficient use of the cluster without degrading the performance of its virtual machines.
FIG. 5 illustrates a dual socket multicore CPU according to an embodiment of the present invention.
The dual socket multicore CPU includes two CPU sockets, each of which contains a plurality of cores.
The plurality of cores within one socket share a last level cache (LLC), so the virtual machines running on cores of that socket compete with one another for the cache.
Similarly, since the cores of the other socket share their own last level cache, a virtual machine is affected chiefly by the other virtual machines placed on the same socket.
FIG. 6 is a diagram illustrating virtual machine placement and relocation between physical machines according to an embodiment of the present invention.
In FIG. 6 (a), each physical machine includes a virtual machine monitor and a number of virtual machines running on top of it.
The virtual machine monitor on each physical machine observes the resource usage of its virtual machines.
FIG. 6 (b) shows the live-migration of a virtual machine between hardware. When, as a result of observation by the virtual machine monitor 1 (52) of hardware 1 (51), the resource demand on hardware 1 (51) has increased and the performance of its virtual machines has decreased, a virtual machine is live-migrated to hardware 2 (55), whose resource demand is relatively low.
In the figure, the live-migration of the virtual machine placing the heaviest demand on hardware 1 (51) to hardware 2 (55) is shown.
By relocating virtual machines in this way, the variation in resource demand between the hardware can be reduced. As a result, performance degradation of the virtual machines can be prevented and the hardware can be used efficiently.
FIG. 7 illustrates a dynamic virtual machine relocation apparatus according to an embodiment of the present invention.
The cache miss rate collector collects the cache miss rates of the plurality of virtual machines included in the plurality of nodes of the cluster.
The virtual machine relocator relocates the virtual machines according to the collected cache miss rates; it includes a global phase virtual machine relocation unit, which relocates virtual machines between nodes, and a local phase virtual machine relocation unit, which relocates virtual machines between the sockets within a node.
FIG. 8 illustrates a local phase virtual machine relocation unit according to an embodiment of the present invention.
Local phase virtual machine relocation according to the present invention operates in the following order.
First, the virtual machine aligning unit sorts the virtual machines operating on the node according to their cache miss rates, and the virtual machine grouping unit groups the sorted virtual machines into groups containing as many virtual machines as there are sockets in the node.
When the grouping is completed, the socket virtual machine relocation unit relocates the virtual machines of each group to the plurality of sockets, placing the virtual machine with the highest cache miss rate and the virtual machine with the lowest cache miss rate on the same socket.
In this way, the virtual machines in each group are relocated to different sockets. The virtual machines within a group have similar cache memory requirements, so splitting them across the sockets gives each socket an even overall memory requirement.
FIG. 9 illustrates a local phase virtual machine relocation according to an embodiment of the present invention.
Referring as an example to the situation in which eight virtual machines operate on a node with two sockets, the virtual machines are first sorted in descending order of cache miss rate and indexed in that order.
When the indexing of the virtual machines is completed, the sorted virtual machines are grouped two at a time, the same number as the number of sockets, and the groups are indexed alternately, taking the group with the highest cache miss rate and then the group with the lowest cache miss rate in turn.
After group indexing is completed, the virtual machines are relocated group by group in the indexed order, with the virtual machines of each group placed on different sockets, so that the virtual machine with the highest cache miss rate and the virtual machine with the lowest cache miss rate come to share a socket.
In the above embodiment, a case with two sockets and eight virtual machines was described as an example, but the present invention is not limited thereto.
FIG. 10 is a flowchart illustrating a local phase virtual machine relocation method according to an embodiment of the present invention.
The local phase virtual machine relocation method sorts the virtual machines operating on the plurality of sockets included in one system (S12). The virtual machines are sorted in descending order of cache miss rate, starting from the virtual machine with the highest cache miss rate, and are indexed in the sorted order.
Grouping is then performed on the indexed virtual machines (S14). The virtual machines are grouped in the indexed order, and the number of virtual machines in one group equals the number of sockets in the system. The groups themselves are indexed alternately, taking the group with the highest cache miss rate and the group with the lowest cache miss rate in turn. A detailed description is given with reference to FIG. 9.
When indexing for the group is completed, the relocation of the virtual machines belonging to each group is performed (S16). The relocation sequence is based on group indexing and relocates virtual machines belonging to the same group to different sockets. At this time, the virtual machine with the highest cache miss rate and the virtual machine with the lowest cache miss rate are relocated to be included in the same socket. This process is repeated until all the virtual machines in all groups are relocated.
Through the above process, the virtual machines can be relocated so that the cache miss rates of the sockets within a single system are equal. As the memory requirements become even, performance degradation of the virtual machines is prevented and the shared resources are used effectively.
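The following is a minimal sketch of the local phase (steps S12 to S16) under the reading given above, with each virtual machine represented as a (name, cache miss rate) pair. The alternating consumption of hot and cold groups and the reversal of the cold groups are one consistent way to realize the described pairing of the highest and lowest miss rates on a socket; the patent does not fully specify the ordering and tie-breaking, so treat those details as assumptions.

```python
def local_phase_relocate(vms, num_sockets):
    """Assign the (vm_name, miss_rate) pairs of one node to sockets so that the
    total cache pressure per socket is roughly even (sketch of S12-S16)."""
    # S12: sort in descending order of cache miss rate.
    ordered = sorted(vms, key=lambda vm: vm[1], reverse=True)
    # S14: group the sorted VMs, num_sockets VMs per group.
    groups = [ordered[i:i + num_sockets] for i in range(0, len(ordered), num_sockets)]
    placement = {s: [] for s in range(num_sockets)}
    lo, hi, take_high = 0, len(groups) - 1, True
    while lo <= hi:
        if take_high:
            group = groups[lo]                  # next hottest group, in sorted order
            lo += 1
        else:
            group = list(reversed(groups[hi]))  # next coldest group, reversed so its
            hi -= 1                             # coldest VM joins the hottest VM's socket
        take_high = not take_high
        # S16: spread the group's VMs across the sockets, one VM per socket.
        for socket_id, vm in enumerate(group):
            placement[socket_id].append(vm)
    return placement

# Example: eight VMs with miss rates 8 down to 1 on a two-socket node.
vms = [("vm1", 8), ("vm2", 7), ("vm3", 6), ("vm4", 5),
       ("vm5", 4), ("vm6", 3), ("vm7", 2), ("vm8", 1)]
print(local_phase_relocate(vms, num_sockets=2))
# {0: [('vm1', 8), ('vm8', 1), ('vm3', 6), ('vm6', 3)],
#  1: [('vm2', 7), ('vm7', 2), ('vm4', 5), ('vm5', 4)]}
```

In this example each socket receives a total miss rate of 18, and the virtual machine with the highest cache miss rate shares a socket with the one having the lowest, matching the pairing described above.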
FIG. 11 illustrates a global phase virtual machine relocation unit according to an embodiment of the present invention.
The node selector receives the cache miss rate information and selects the node with the highest cache miss rate and the node with the lowest cache miss rate.
When the node selection is completed, the virtual machine selector selects the virtual machine with the highest cache miss rate on the selected highest-miss-rate node and the virtual machine with the lowest cache miss rate on the selected lowest-miss-rate node.
When the selection of the virtual machines is completed, the node virtual machine relocation unit interchanges the two selected virtual machines, relocating each to the other's node.
FIG. 12 illustrates a global phase virtual machine relocation according to an embodiment of the present invention.
Two nodes are described as an example, with nine virtual machines operating on one node. In FIG. 12, the hatched rectangles represent virtual machines with high cache miss rates and the unhatched rectangles represent virtual machines with low cache miss rates.
In a system in which virtual machines operate, the load may become biased toward a specific node 91, as shown in FIG. 12 (a). In this case, in the global phase, the virtual machine with the highest cache miss rate on the overloaded node and the virtual machine with the lowest cache miss rate on the least loaded node are exchanged, so that the load is spread evenly across the nodes, as shown in FIG. 12 (b).
FIG. 13 is a flowchart illustrating a global phase virtual machine relocation method according to an embodiment of the present invention.
The global phase virtual machine relocation method selects the node containing the virtual machine to be relocated. To this end, the node with the highest cache miss rate and the node with the lowest cache miss rate are selected (S21). The cache miss rate of each node may be calculated as the sum of the cache miss rates of the virtual machines included in each node.
This process selects the node with the highest load and the node with the lowest load among the nodes in the system. Both nodes become the targets of global phase virtual machine relocation.
The next step is to select the virtual machine to relocate. The virtual machine having the highest cache miss rate is selected at the node having the highest cache miss rate selected above, and the virtual machine having the lowest cache miss rate is selected at the node having the lowest cache miss rate (S23).
When the selection of the virtual machines is completed, the virtual machine with the highest miss rate is relocated to the node with the lowest miss rate, and the virtual machine with the lowest miss rate is relocated to the node with the highest miss rate (S25). This relocation can be performed using live-migration.
The process may be repeated until the cache miss rate difference between the node with the highest cache miss rate and the node with the lowest cache miss rate is less than or equal to a predetermined threshold value.
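Under the same representation as the local phase sketch, the global phase of steps S21 to S25 can be written as below. The iteration cap and the in-place list swap are illustrative simplifications, and in a real deployment each exchange would be carried out as a pair of live-migrations.

```python
def node_miss_rate(vms):
    """Cache miss rate of a node = sum of the miss rates of its VMs (S21)."""
    return sum(rate for _, rate in vms)

def global_phase_relocate(nodes, threshold, max_rounds=10):
    """nodes: dict mapping node name -> list of (vm_name, miss_rate) pairs.
    Repeats the S21-S25 cycle until the hottest/coldest gap is within threshold."""
    for _ in range(max_rounds):
        # S21: pick the node with the highest and the node with the lowest miss rate.
        hottest = max(nodes, key=lambda n: node_miss_rate(nodes[n]))
        coldest = min(nodes, key=lambda n: node_miss_rate(nodes[n]))
        gap = node_miss_rate(nodes[hottest]) - node_miss_rate(nodes[coldest])
        if hottest == coldest or gap <= threshold:
            break
        # S23: hottest VM on the hottest node, coldest VM on the coldest node.
        vm_hot = max(nodes[hottest], key=lambda vm: vm[1])
        vm_cold = min(nodes[coldest], key=lambda vm: vm[1])
        # S25: exchange them; each move would be a live-migration in practice.
        nodes[hottest].remove(vm_hot)
        nodes[coldest].remove(vm_cold)
        nodes[hottest].append(vm_cold)
        nodes[coldest].append(vm_hot)
    return nodes
```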
Through the above method, the global phase virtual machine relocation method equally adjusts the resource requirements of each node to alleviate the overload on a specific node, thereby preventing performance degradation of the virtual machine and enabling efficient use of shared resources.
While the invention has been shown and described with reference to certain preferred embodiments, it will be understood by those of ordinary skill in the art that various changes in form and detail may be made without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the scope of the present invention should not be limited to the described embodiments, but should be determined by the appended claims and their equivalents.
31: Front-End 32: LLC miss and cpu-affinity information
33: Virtual machine relocation target node 34: Virtual machine relocation target node
35: information of cluster nodes
Claims (10)
A cache miss rate collector configured to collect cache miss rates of a plurality of virtual machines included in a plurality of nodes included in the cluster;
And a virtual machine relocator configured to relocate the virtual machine according to the cache miss rate.
The global phase virtual machine relocation unit,
A node selector configured to receive the cache miss rate information and select nodes having the highest and lowest cache miss rates, respectively;
A virtual machine selector configured to select a virtual machine having a highest and lowest cache miss rate at each node having the selected highest and lowest cache miss rate; And
And a node virtual machine relocator configured to interchange and relocate the selected virtual machine.
The local phase virtual machine relocator,
A virtual machine aligning unit that sorts the virtual machines included in the sockets in each node according to a cache miss rate;
A virtual machine grouping unit for grouping the sorted virtual machines into groups including the same number of virtual machines as the number of sockets included in each node; And
And a socket virtual machine relocator configured to relocate the virtual machines included in the group to a plurality of sockets for each group.
And relocating the virtual machine having the highest cache miss rate and the virtual machine having the lowest cache miss rate among the virtual machines to be included in the same socket.
Collecting cache miss rates of a plurality of virtual machines included in a plurality of nodes included in the cluster;
And relocating a virtual machine according to the cache miss rate.
Receiving the cache miss rate information and selecting nodes having the highest and lowest cache miss rates, respectively;
Selecting a virtual machine having a highest and lowest cache miss rate at each node having the selected highest and lowest cache miss rate; And
Interchanging and relocating the selected virtual machine.
Sorting the virtual machines included in the sockets in each node according to a cache miss rate;
Grouping the sorted virtual machines into groups including the same number of virtual machines as the number of sockets included in each node; And
Relocating a virtual machine included in the group for each group to a plurality of sockets.
And relocating the virtual machine having the highest cache miss rate and the virtual machine having the lowest cache miss rate among the virtual machines to be included in the same socket.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020110143097A KR20130074953A (en) | 2011-12-27 | 2011-12-27 | Apparatus and method for dynamic virtual machine placement |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020110143097A KR20130074953A (en) | 2011-12-27 | 2011-12-27 | Apparatus and method for dynamic virtual machine placement |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20130074953A true KR20130074953A (en) | 2013-07-05 |
Family
ID=48988922
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020110143097A KR20130074953A (en) | 2011-12-27 | 2011-12-27 | Apparatus and method for dynamic virtual machine placement |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20130074953A (en) |
- 2011-12-27: KR KR1020110143097A patent/KR20130074953A/en not_active Application Discontinuation
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101502225B1 (en) * | 2013-07-31 | 2015-03-12 | 서울대학교산학협력단 | Virtual machine allocation method to minimize performance interference between virtual machines |
KR20150016820A (en) * | 2013-08-05 | 2015-02-13 | 한국전자통신연구원 | System and method for virtual machine placement and management on cluster system |
WO2015099701A1 (en) | 2013-12-24 | 2015-07-02 | Intel Corporation | Cloud compute scheduling using a heuristic contention model |
EP3087503A4 (en) * | 2013-12-24 | 2018-01-17 | Intel Corporation | Cloud compute scheduling using a heuristic contention model |
US11212235B2 (en) | 2013-12-24 | 2021-12-28 | Intel Corporation | Cloud compute scheduling using a heuristic contention model |
US11689471B2 (en) | 2013-12-24 | 2023-06-27 | Intel Corporation | Cloud compute scheduling using a heuristic contention model |
WO2022039292A1 (en) * | 2020-08-19 | 2022-02-24 | 서울대학교산학협력단 | Edge computing method, electronic device, and system for providing cache update and bandwidth allocation for wireless virtual reality |
KR102571782B1 (en) * | 2022-12-16 | 2023-08-29 | 스트라토 주식회사 | Apparatus and method for virtual machine relocation using resource management pool |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10776151B2 (en) | Adaptive CPU NUMA scheduling | |
US10334034B2 (en) | Virtual machine live migration method, virtual machine deployment method, server, and cluster system | |
CN108701059B (en) | Multi-tenant resource allocation method and system | |
Park et al. | Locality-aware dynamic VM reconfiguration on MapReduce clouds | |
US8352942B2 (en) | Virtual-machine control apparatus and virtual-machine moving method | |
US8510747B2 (en) | Method and device for implementing load balance of data center resources | |
US8874744B2 (en) | System and method for automatically optimizing capacity between server clusters | |
US20170097845A1 (en) | System and Method for Optimizing Placements of Virtual Machines on Hypervisor Hosts | |
US20130167152A1 (en) | Multi-core-based computing apparatus having hierarchical scheduler and hierarchical scheduling method | |
US20140173620A1 (en) | Resource allocation method and resource management platform | |
Chatzistergiou et al. | Fast heuristics for near-optimal task allocation in data stream processing over clusters | |
JP2008191949A (en) | Multi-core system, and method for distributing load of the same | |
JP2007272263A (en) | Method for managing computer, computer system, and management program | |
KR101356033B1 (en) | Hybrid Main Memory System and Task Scheduling Method therefor | |
WO2012036960A1 (en) | Dynamic creation and destruction of io resources based on actual load and resource availability | |
KR20130074953A (en) | Apparatus and method for dynamic virtual machine placement | |
CN110221920A (en) | Dispositions method, device, storage medium and system | |
US20090183166A1 (en) | Algorithm to share physical processors to maximize processor cache usage and topologies | |
US8458719B2 (en) | Storage management in a data processing system | |
CN104714845B (en) | Resource dynamic regulation method, device and more kernel operating systems | |
JP2013210833A (en) | Job management device, job management method and program | |
Panda et al. | Novel service broker and load balancing policies for cloudsim-based visual modeller | |
US20120042322A1 (en) | Hybrid Program Balancing | |
US20140126481A1 (en) | Block Scheduling Method For Scalable And Flexible Scheduling In A HSUPA System | |
US20210373972A1 (en) | Vgpu scheduling policy-aware migration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E601 | Decision to refuse application |