KR20130074953A - Apparatus and method for dynamic virtual machine placement - Google Patents

Apparatus and method for dynamic virtual machine placement Download PDF

Info

Publication number
KR20130074953A
Authority
KR
South Korea
Prior art keywords
virtual machine
cache miss
miss rate
virtual
node
Prior art date
Application number
KR1020110143097A
Other languages
Korean (ko)
Inventor
최영리
최동훈
박동인
허재혁
안정섭
김창대
Original Assignee
한국과학기술정보연구원
한국과학기술원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국과학기술정보연구원, 한국과학기술원 filed Critical 한국과학기술정보연구원
Priority to KR1020110143097A priority Critical patent/KR20130074953A/en
Publication of KR20130074953A publication Critical patent/KR20130074953A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/161 Computing infrastructure, e.g. computer clusters, blade chassis or hardware partitioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163 Interprocessor communication
    • G06F15/167 Interprocessor communication using a common memory, e.g. mailbox
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45504 Abstract machines for programme code execution, e.g. Java virtual machine [JVM], interpreters, emulators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present invention relates to an apparatus and method for efficiently using the resources of a cloud computing system.
To solve the above technical problem, a dynamic virtual machine placement apparatus according to an embodiment of the present invention comprises, in a cloud computing system including a cluster, a cache miss rate collector that collects the cache miss rates of a plurality of virtual machines included in the plurality of nodes of the cluster, and a virtual machine relocator that relocates the virtual machines according to the cache miss rates.

Description

Apparatus and method for dynamic virtual machine placement

The present invention relates to an apparatus and method for dynamically deploying virtual machines to efficiently use resources of a cloud computing system.

Conventional process placement techniques address the shared-resource contention problem within a single physical machine. In a cloud environment, however, there are multiple physical machines, and each may host more than one virtual machine. In this case, even if shared-resource contention is resolved within one virtual machine, contention between virtual machines still lowers the efficiency of resource usage. Examples of such shared resources include cache memory, memory bandwidth, network bandwidth, and disk performance. If two virtual machines use CPUs that share a cache memory (LLC), and applications running in both virtual machines use the cache heavily, the cache hit ratio is lower than when each uses the cache exclusively, degrading the performance of the virtual machines. The same problem occurs with other shared resources.

FIG. 1 is a view showing a virtual machine placement result according to an embodiment of the prior art. When a virtual machine competes with other virtual machines for the shared resources of the system on which it is deployed, virtual machines that use shared resources heavily may end up placed on a single system, causing performance degradation of the virtual machines.

In this case, the system is idle at the time the virtual machines are scheduled and placed, but the virtual machines later use cache resources heavily, degrading the performance of the entire system. When the two virtual machines indicated by dotted lines in FIG. 1 are deployed, no cache-intensive virtual machine is present; but if the cache usage of another virtual machine rises sharply after deployment, the cache miss rate of the virtual machines operating on that system increases and their performance suffers.

As described above, existing cloud computing systems suffer from virtual machine performance degradation caused by contention for shared resources. Allocating shared resources within one physical machine may not provide enough for multiple virtual machines even while other physical machines sit idle, so shared resources are not managed efficiently across the cloud computing system as a whole. Moreover, in the related art, virtual machine relocation is used only to balance CPU and memory load and does not consider performance degradation due to shared resources. A virtual machine relocation method that prevents performance degradation of each virtual machine was therefore required.

The present invention is directed to solving the above problems; its technical objective is a dynamic virtual machine placement apparatus and method that enable users to use shared resources efficiently without experiencing the problems described above.

To solve the above technical problem, a dynamic virtual machine placement apparatus according to an embodiment of the present invention comprises, in a cloud computing system including a cluster, a cache miss rate collector that collects the cache miss rates of a plurality of virtual machines included in the plurality of nodes of the cluster, and a virtual machine relocator that relocates the virtual machines according to the cache miss rates.

Further, in the dynamic virtual machine placement apparatus according to an embodiment of the present invention, the virtual machine relocator includes a global phase virtual machine relocator, which comprises a node selector that receives cache miss rate information and selects the nodes with the highest and lowest cache miss rates, a virtual machine selector that selects the virtual machines with the highest and lowest cache miss rates on those nodes, and a node virtual machine relocator that redeploys the selected virtual machines by exchanging them.

Further, in the dynamic virtual machine placement apparatus according to an embodiment of the present invention, the virtual machine relocator includes a local phase virtual machine relocator, which comprises a virtual machine sorter that sorts the virtual machines on each node according to cache miss rate, a virtual machine grouper that groups the sorted virtual machines into groups each containing as many virtual machines as the node has sockets, and a socket virtual machine relocator that relocates the virtual machines of each group across the plurality of sockets.

Further, in the dynamic virtual machine placement apparatus according to an embodiment of the present invention, the socket virtual machine relocator relocates the virtual machine with the highest cache miss rate and the virtual machine with the lowest cache miss rate so that they are included in the same socket.

Further, in the dynamic virtual machine placement apparatus according to an embodiment of the present invention, the virtual machine relocator relocates virtual machines using live-migration.

To solve the above technical problem, a dynamic virtual machine placement method according to an embodiment of the present invention comprises, in a cloud computing system including a cluster, collecting the cache miss rates of a plurality of virtual machines included in the plurality of nodes of the cluster, and relocating the virtual machines according to the cache miss rates.

Further, in the dynamic virtual machine placement method according to an embodiment of the present invention, relocating the virtual machines comprises receiving cache miss rate information and selecting the nodes with the highest and lowest cache miss rates, selecting the virtual machines with the highest and lowest cache miss rates on those nodes, and redeploying the selected virtual machines by exchanging them.

Further, in the dynamic virtual machine placement method according to an embodiment of the present invention, relocating the virtual machines comprises sorting the virtual machines on each node according to cache miss rate, grouping the sorted virtual machines into groups each containing as many virtual machines as the node has sockets, and relocating the virtual machines of each group across the plurality of sockets.

Further, in the dynamic virtual machine placement method according to an embodiment of the present invention, relocating to the sockets comprises relocating the virtual machine with the highest cache miss rate and the virtual machine with the lowest cache miss rate so that they are included in the same socket.

Further, in the dynamic virtual machine placement method according to an embodiment of the present invention, relocating the virtual machines comprises relocating them using live-migration.

According to one embodiment of the invention, it is possible to provide users with virtual machines of fair performance.

According to one embodiment of the present invention, it is possible to increase the resource efficiency of the entire cluster system.

According to an embodiment of the present invention, it is possible to mitigate the performance degradation of virtual machines caused by shared resource contention in a cloud computing system.

FIG. 1 is a view showing a virtual machine placement result according to an embodiment of the prior art.
FIG. 2 is a view showing the structure of OpenNebula according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating a process of transmitting virtual machine monitoring information according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating a live-migration process according to an embodiment of the present invention.
FIG. 5 illustrates a dual-socket multicore CPU according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating virtual machine placement and relocation between physical machines according to an embodiment of the present invention.
FIG. 7 illustrates a dynamic virtual machine relocation apparatus according to an embodiment of the present invention.
FIG. 8 illustrates a local phase virtual machine relocation unit according to an embodiment of the present invention.
FIG. 9 illustrates a local phase virtual machine relocation according to an embodiment of the present invention.
FIG. 10 is a flowchart illustrating a local phase virtual machine relocation method according to an embodiment of the present invention.
FIG. 11 illustrates a global phase virtual machine relocator according to an embodiment of the present invention.
FIG. 12 illustrates a global phase virtual machine relocation according to an embodiment of the present invention.
FIG. 13 is a flowchart illustrating a global phase virtual machine relocation method according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings, but the present invention is not limited to or by these embodiments.

The terms used herein are selected from general terms widely used in the field, in consideration of the functions of the present invention, but their meanings may vary depending on the intention or custom of those skilled in the art or the emergence of new technologies. In certain cases a term may be arbitrarily chosen by the applicant, in which case its meaning is given in the corresponding description. The terminology used herein should therefore be interpreted based on the meaning of each term and the contents of the entire specification, rather than on the name of the term alone.

In the present invention, cloud computing is a technology that integrates and provides the resources of clusters existing in different physical locations using virtualization technology. In other words, it refers to a service in which IT resources such as hardware and software are borrowed when needed and paid for according to use. The user can be provided, via a virtual machine, with performance suitable for the application to be computed, regardless of the performance of each cluster resource.

Hereinafter, the terms cloud computing system, cluster, and virtual machine are not limiting: they mean, respectively, a system that provides computing using distributed resources, the distributed computing resources included in that system, and distributed computing resources presented to the user as a single computing resource.

In a cloud computing system, each virtual machine differs in which resources it uses frequently and which rarely. Therefore, even when virtual machines share a single physical machine, the degree of performance degradation varies depending on how they are placed. For example, performance degradation can be avoided by co-locating a virtual machine that uses cache memory heavily with one that uses it lightly: the relatively low performance interference allows the physical machine to be used more efficiently.

The method proposed by the present invention for minimizing performance degradation due to cache memory sharing is to create, from the given programs, program sets whose overall performance interference is minimized, and to place each set on a CPU.

To minimize performance interference, the cache memory capacity required by each program is taken into account: interference is minimized when the sum of the cache memory requirements of the programs running on one CPU, that is, on cores sharing a cache memory, is close to the cache memory capacity of that CPU.

However, measuring the cache memory requirement of a program directly is not easy. The cache miss rate (LLC miss rate) is therefore used as a proxy for the required amount of cache memory: the higher the cache miss rate, the higher the cache memory requirement. The cache miss rate has the advantage of being measurable without any additional device.
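As an illustrative sketch only (not part of the patent text), the proxy described above amounts to a misses-per-instruction ratio; the function name and inputs below are assumptions:

```python
def llc_miss_rate(llc_misses: int, instructions: int) -> float:
    """Approximate a VM's cache-memory demand as LLC misses per instruction.

    Both counts would come from hardware performance counters sampled
    over the same interval; the higher the ratio, the larger the
    assumed cache requirement. No extra device is needed to obtain them.
    """
    if instructions == 0:
        return 0.0  # no work observed in this interval
    return llc_misses / instructions

# A VM with more misses over the same instruction window is assumed
# to need more cache memory.
assert llc_miss_rate(30000, 1_000_000) > llc_miss_rate(5000, 1_000_000)
```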

The present invention describes, in one embodiment, a method of reducing the performance interference effect on cache memory among the resources shared within a physical machine.

The cloud computing system of the present invention assumes an IaaS (Infrastructure as a Service) cloud computing platform, in which the infrastructure is provided as virtual machines that customers can use. One example of software providing such functionality is OpenNebula, which is open-source software.

FIG. 2 is a view showing the structure of OpenNebula according to an embodiment of the present invention.

The lowest layer of the private cloud 10 contains several physical machines 11, all equipped with virtualization software such as Xen, KVM, or VMware.

Above them is a Virtual Infrastructure Manager 12 that manages the resources of these physical machines 11. It maintains basic information about each physical machine 11 and can allocate and deploy new virtual machines according to users' requests.

Internal users 13 of the private cloud system can use the physical machines 11 via the internal interface 14, while external users 15 access them via the public interface 16. The private cloud 10 may also be combined with an external cloud 17 to provide a hybrid cloud system 18.

FIG. 3 is a diagram illustrating a process of transmitting virtual machine monitoring information according to an embodiment of the present invention.

The virtual machine monitoring of the present invention periodically observes not only CPU, memory, and network usage, but also the CPU-affinity with which each virtual machine is currently operating, the number of last-level cache (LLC) misses, and the number of instructions executed by the virtual machine. Here, CPU-affinity indicates which CPU socket a virtual machine is bound to, expressing the degree of association between the virtual machine and the CPU socket.

Each node 21 periodically monitors the number of LLC misses of each virtual machine and its CPU-affinity information 23. Periodic observation is performed by the Xenonmon.py 20 monitoring tool running on each node 21. When a request comes from the front-end 24, the node delivers the most recent observation data. Finally, the front-end 24 records the information in the database 25. Based on this information, a virtual machine administrator can relocate virtual machines.
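The collection path above could be sketched as follows; the class and function names are illustrative stand-ins for the per-node monitoring tool, the front-end, and its database, not code from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class VmSample:
    vm_id: str
    llc_misses: int    # LLC miss count observed in the last interval
    cpu_affinity: int  # CPU socket the VM is currently bound to

@dataclass
class FrontEnd:
    """Stands in for the front-end (24) and its database (25)."""
    db: dict = field(default_factory=dict)

    def record(self, node: str, samples: list) -> None:
        # Only the most recent observation per node is kept.
        self.db[node] = samples

def monitor_tick(node: str, read_counters, front_end) -> None:
    """One periodic tick of a node monitor: read each VM's counters
    and deliver the most recent observation to the front-end."""
    samples = [VmSample(vm, misses, aff)
               for vm, (misses, aff) in read_counters().items()]
    front_end.record(node, samples)

# Usage with fabricated counter readings for one node:
fe = FrontEnd()
monitor_tick("node21", lambda: {"VM1": (30000, 0), "VM2": (4000, 1)}, fe)
```

A scheduler could then query `fe.db` to decide on relocations, much as the administrator does with the database in FIG. 3.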

FIG. 4 is a diagram illustrating a live-migration process according to an embodiment of the present invention.

Live-migration means migrating a currently powered-on virtual machine to another physical machine without switching it off. This allows redeployment to another physical machine without interrupting the computing work the virtual machine is performing, ensuring continuity of work.

The scheduler running on the front-end 31 periodically observes the monitoring information of the virtual machines. It can determine on which node of the system each virtual machine is currently operating, along with its current LLC miss and CPU-affinity information 32, and can therefore apply its scheduling policy adaptively. Virtual machines are moved as the situation requires using the XML-RPC interface provided by OpenNebula.

The scheduler periodically monitors the information 35 of the cluster nodes; when a particular node shows more LLC misses, the scheduler selects the virtual machine placing the most load on that node and live-migrates it to a node judged to have a relatively low load.

For example, when the virtual machine relocation target node 33 shows a higher number of LLC misses than the other nodes, as in FIG. 4, the virtual machine with the highest LLC miss count among those operating on node 33 can be moved to the virtual machine relocation partner node 34, which has relatively few LLC misses. In the drawing, node 33's LLC miss monitoring result of 30000 exceeds that of the other nodes, so VM1, which generates the most load on the node, is selected and then live-migrated to the relocation partner node 34, which has a relatively low load among the jointly monitored nodes.
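A minimal sketch of this selection step, with fabricated per-VM readings echoing the 30000-miss example (node names and figures are illustrative assumptions):

```python
def pick_migration(nodes: dict):
    """Given {node: {vm: llc_misses}}, pick the VM generating the most
    load on the node with the most LLC misses, plus a relatively idle
    partner node to receive it via live-migration."""
    load = {n: sum(vms.values()) for n, vms in nodes.items()}
    target = max(load, key=load.get)    # relocation target node (33)
    partner = min(load, key=load.get)   # relocation partner node (34)
    vm = max(nodes[target], key=nodes[target].get)
    return vm, target, partner

# Hypothetical monitoring results for two nodes:
nodes = {
    "node33": {"VM1": 30000, "VM2": 4000},
    "node34": {"VM3": 2000, "VM4": 1000},
}
vm, src, dst = pick_migration(nodes)  # VM1 moves from node33 to node34
```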

Through the operation of the scheduler, the load can be balanced between the nodes of the cluster included in the cloud system, enabling efficient use of the cluster without degrading the performance of its virtual machines.

FIG. 5 illustrates a dual-socket multicore CPU according to an embodiment of the present invention.

The dual-socket multicore CPU includes two sockets 41 and 42; each socket may include a plurality of processors 43 and a last-level cache (LLC) 44 shared by those processors. Outside the sockets, the system may also include external memories 45 and 46 shared by the two sockets.

The plurality of processors 43 included in one socket 41 share the LLC 44, a cache memory. Therefore, if the programs running on the processors 43 of a particular socket all have high cache memory usage, the high shared-resource demand may degrade the socket's performance. In this case, some of the virtual machines in which those programs run can be relocated from one socket 41 to another socket 42. In the present invention, this method of controlling load between sockets is defined as local phase virtual machine relocation; it equalizes the LLC misses occurring in the virtual machines on each socket 41, 42, preventing performance degradation of each virtual machine and thereby increasing resource efficiency. Local phase relocation is described in detail with reference to FIG. 9 below.

Similarly, since the sockets 41 and 42 share the external memories 45 and 46, performance degradation of the virtual machines may occur when the memory demand on a specific physical machine increases. To address this, the cache miss rates of the computing nodes in the cluster can be equalized to prevent performance degradation of each virtual machine. Virtual machine relocation between physical machines is defined as global phase virtual machine relocation, described in detail with reference to FIG. 12 below.

FIG. 6 is a diagram illustrating virtual machine placement and relocation between physical machines according to an embodiment of the present invention.

In FIG. 6 (a), each physical machine includes hardware 51, a virtual machine monitor 52, and virtual machines 53, 54, 55. Each of the virtual machines 53, 54, 55 may run a separate operating system (OS), for example Windows, Linux, or Solaris.

Hardware 1 (51) is shared by the plurality of virtual machines 53, 54, 55, and virtual machine monitor 1 (52) may monitor the resource requirements, LLC misses, and CPU-affinity of each virtual machine. Here, hardware 1 (51) may represent a resource at the level of a physical machine or of a socket within the CPU.

FIG. 6 (b) shows live-migration of a virtual machine between hardware. When observation by virtual machine monitor 1 (52) shows that the resource demand on hardware 1 (51) has increased and virtual machine performance is falling, a virtual machine is live-migrated to hardware 2 (56), which has a relatively low resource demand.

The figure shows the live-migration of virtual machine VM3 (55). Through this, VM3 (55) can be relocated to hardware 2 (56) without being switched from its current powered-on state to off. The relocated VM3 (55) can perform computing using the resources of hardware 2, and after relocation it is monitored by virtual machine monitor 2 (57). This relocation between physical machines is accomplished by global phase virtual machine relocation.

By relocating virtual machines in this way, the variation in resource demand across the hardware can be reduced. As a result, performance degradation of the virtual machines is prevented and the hardware is used efficiently.

FIG. 7 illustrates a dynamic virtual machine relocation apparatus according to an embodiment of the present invention.

The cache miss rate collector 71 collects the cache miss rate of each virtual machine on each node of the cluster. This provides information about which nodes are overloaded and will therefore cause performance degradation of their virtual machines.

The virtual machine relocator 73 relocates virtual machines based on the collected cache miss rates. It includes a global phase virtual machine relocator and a local phase virtual machine relocator: the global phase relocator relocates virtual machines between nodes, and the local phase relocator relocates virtual machines between sockets. Each relocation method is described in detail below.

FIG. 8 illustrates a local phase virtual machine relocation unit according to an embodiment of the present invention.

Local phase virtual machine relocation according to the present invention operates in the following order.

First, the virtual machine sorter 75 sorts the virtual machines by their cache miss rates, ordering them from highest to lowest cache memory requirement. When sorting is complete, the virtual machine grouper 76 groups the virtual machines into groups each containing as many virtual machines as the system has sockets.

When grouping is complete, the socket virtual machine relocator 77 selects the group with the highest cache miss rate and places its virtual machines on different sockets. It then selects the group with the lowest cache miss rate and likewise places its virtual machines on different sockets, assigning the virtual machine with the lowest cache miss rate to the socket that received the virtual machine with the highest cache miss rate. By repeating this process, the cache miss rate of each socket can be kept as even as possible. This relocation may be performed at regular intervals or at the user's command.

In this way the virtual machines of each group are relocated to different sockets. Since the virtual machines in a group have similar cache memory requirements, they can be split across multiple sockets so that the sockets have even memory requirements.
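The sort-group-deal procedure above can be sketched as follows (names and numeric miss rates are illustrative, not from the patent); alternating the deal direction from group to group is what pairs the hottest and coolest virtual machines on one socket:

```python
def local_phase(vms: dict, n_sockets: int) -> dict:
    """vms: {name: cache miss rate}. Returns {socket index: [vm names]}.

    1. Sort VMs by miss rate, descending.
    2. Cut into groups of n_sockets VMs each.
    3. Reorder groups by alternately taking the hottest and the
       coolest remaining group.
    4. Deal each group across the sockets, flipping direction every
       other group so high- and low-miss VMs share a socket.
    """
    order = sorted(vms, key=vms.get, reverse=True)
    groups = [order[i:i + n_sockets] for i in range(0, len(order), n_sockets)]
    reordered, lo, hi, take_hot = [], 0, len(groups) - 1, True
    while lo <= hi:
        if take_hot:
            reordered.append(groups[lo]); lo += 1
        else:
            reordered.append(groups[hi]); hi -= 1
        take_hot = not take_hot
    placement = {s: [] for s in range(n_sockets)}
    for g, group in enumerate(reordered):
        members = group if g % 2 == 0 else list(reversed(group))
        for s, vm in enumerate(members):
            placement[s].append(vm)
    return placement

# Eight VMs on a two-socket system; VM1 has the highest miss rate.
miss_rates = {f"VM{i}": 90 - 10 * i for i in range(1, 9)}
placement = local_phase(miss_rates, 2)
```

With these inputs the sketch reproduces the FIG. 9 outcome: socket 0 receives VM1, VM8, VM3, VM6 and socket 1 receives VM2, VM7, VM4, VM5.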

FIG. 9 illustrates a local phase virtual machine relocation according to an embodiment of the present invention.

Consider a situation in which eight virtual machines 83 operate on a system 80 with two sockets 81 and 82. The virtual machine sorter 75 indexes the virtual machines 83 in descending order of cache miss rate; as a result, each virtual machine 83 is indexed from VM1, with the highest cache miss rate, to VM8, with the lowest.

When indexing is complete, the virtual machines 83 are grouped in twos, i.e., by the number of sockets 81 and 82. Group indexing then alternately selects the group with the highest cache miss rate and the group with the lowest: in FIG. 9, VM1 and VM2 are indexed as group 1, VM7 and VM8 as group 2, VM3 and VM4 as group 3, and VM5 and VM6 as group 4.

After group indexing is complete, the virtual machines are relocated group by group, from group 1 through group 4. As a result, VM1 of group 1 goes to socket 0 (81) and VM2 of group 1 to socket 1 (82); VM8 of group 2 to socket 0 (81) and VM7 of group 2 to socket 1 (82); VM3 of group 3 to socket 0 (81) and VM4 of group 3 to socket 1 (82); VM6 of group 4 to socket 0 (81) and VM5 of group 4 to socket 1 (82). By placing VM1, with the highest cache miss rate, and VM8, with the lowest, on the same socket 0 (81), the cache miss rates of the virtual machines offset one another and the demand for shared resources is evened out across the sockets. Repeating this process keeps the cache miss rates of the virtual machines running on each socket as equal as possible.

The above embodiment describes two sockets and eight virtual machines as an example, but the present invention is not limited thereto.

FIG. 10 is a flowchart illustrating a local phase virtual machine relocation method according to an embodiment of the present invention.

The local phase virtual machine relocation method first sorts the virtual machines operating on the plurality of sockets of one system (S12). The virtual machines are sorted in descending order of cache miss rate and indexed in that order once sorting is complete.

Grouping is then performed on the indexed virtual machines (S14). The virtual machines are grouped in the order indexed above, with the number of virtual machines per group equal to the number of sockets in the system. Groups are indexed by alternately taking the group with the highest cache miss rate and the group with the lowest. A detailed description is given with reference to FIG. 9.

When group indexing is complete, the virtual machines of each group are relocated (S16). Relocation proceeds in group-index order, placing the virtual machines of the same group on different sockets, with the virtual machine having the highest cache miss rate and the one having the lowest relocated onto the same socket. This is repeated until the virtual machines of all groups have been relocated.

Through the above process, the virtual machines can be relocated so that the cache miss rates of the sockets in a single system are equal. Memory requirements are thereby equalized, preventing performance degradation of the virtual machines and making effective use of the shared resources.

FIG. 11 illustrates a global phase virtual machine relocator according to an embodiment of the present invention.

The node selector 101 receives the cache miss rates and selects, from the plurality of nodes, the node with the highest cache miss rate and the node with the lowest. The cache miss rate of each node may be calculated as the sum of the cache miss rates of the virtual machines on that node.

When the node selection is completed, the virtual machine selecting unit 102 selects the virtual machine having the highest cache miss rate from among the virtual machines included in the selected node having the highest cache miss rate. Likewise, the virtual machine having the lowest cache miss rate is selected from among the virtual machines included in the selected node having the lowest cache miss rate.

When the selection of the virtual machines is completed, the node virtual machine relocation unit 103 relocates the selected virtual machine having the highest cache miss rate to the node having the lowest cache miss rate, and the selected virtual machine having the lowest cache miss rate to the node having the highest cache miss rate. This relocation balances the cache miss rates handled by the nodes; as a result, the resource requirements of the nodes are equalized, preventing performance degradation of the virtual machines and enabling efficient use of resources.

FIG. 12 illustrates a global phase virtual machine relocation according to an embodiment of the present invention.

An example of two nodes, each running nine virtual machines, will be described. In FIG. 12, the hatched rectangles represent virtual machines with high cache miss rates and the unhatched rectangles represent virtual machines with low cache miss rates.

In a system in which virtual machines operate, the load of the system may be biased toward a specific node 91, as shown in FIG. 12 (a). In this case, in the global phase, the virtual machine having the highest cache miss rate is selected from the node 111 having the highest cache miss rate. Conversely, the virtual machine having the lowest cache miss rate is selected from the node 112 having the lowest cache miss rate. The two selected virtual machines are exchanged using the live-migration technique, as shown in FIG. 12 (b). This process is repeated until the difference between the node with the highest cache miss rate and the node with the lowest cache miss rate falls within the preset threshold. FIG. 12 (c) shows this process in progress, and FIG. 12 (d) shows the result after it has been repeated. As a result, the virtual machines with high cache miss rates that were concentrated on a specific node in FIG. 12 (a) are rebalanced as shown in FIG. 12 (d) through the global phase virtual machine relocation, keeping the load on each node even. This prevents performance degradation of the virtual machines due to load concentration and enables efficient resource utilization. Although the embodiment above describes two nodes with nine virtual machines each, the present invention is not limited thereto.

FIG. 13 is a flowchart illustrating a global phase virtual machine relocation method according to an embodiment of the present invention.

The global phase virtual machine relocation method first selects the nodes containing the virtual machines to be relocated: the node with the highest cache miss rate and the node with the lowest cache miss rate (S21). The cache miss rate of each node may be calculated as the sum of the cache miss rates of the virtual machines included in that node.

This step selects the node with the highest load and the node with the lowest load among the nodes in the system. Both nodes become targets of global phase virtual machine relocation.

The next step is to select the virtual machines to relocate. The virtual machine having the highest cache miss rate is selected from the node having the highest cache miss rate, and the virtual machine having the lowest cache miss rate is selected from the node having the lowest cache miss rate (S23).

When the selection of the virtual machines is completed, the virtual machine with the highest cache miss rate is relocated to the node with the lowest cache miss rate, and the virtual machine with the lowest cache miss rate is relocated to the node with the highest cache miss rate (S25). This relocation may be performed using the live-migration technique.

The process may be repeated until the cache miss rate difference between the node with the highest cache miss rate and the node with the lowest cache miss rate is less than or equal to a predetermined threshold value.
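The loop over steps S21 to S25, repeated until the node-level gap falls within the threshold, can be sketched as follows. The function name and data shapes are illustrative assumptions; in a real system the exchange would be carried out by live-migration rather than a dictionary update, and an extra guard stops the loop when a swap would no longer narrow the gap.

```python
def global_phase_relocate(nodes, threshold):
    """Sketch of global-phase relocation (S21-S25), all names hypothetical.
    nodes maps a node id to {vm_id: cache_miss_rate}; mutated in place."""
    def node_rate(n):
        # A node's cache miss rate is the sum over its virtual machines.
        return sum(nodes[n].values())

    while True:
        # S21: select the nodes with the highest and lowest cache miss rates.
        hi = max(nodes, key=node_rate)
        lo = min(nodes, key=node_rate)
        gap = node_rate(hi) - node_rate(lo)
        if gap <= threshold:
            break  # cluster balanced within the preset threshold

        # S23: highest-miss VM on the hot node, lowest-miss VM on the cold one.
        vm_hi = max(nodes[hi], key=nodes[hi].get)
        vm_lo = min(nodes[lo], key=nodes[lo].get)

        # Guard (assumption): stop if exchanging them would not narrow the gap.
        delta = nodes[hi][vm_hi] - nodes[lo][vm_lo]
        if abs(gap - 2 * delta) >= gap:
            break

        # S25: exchange the two VMs (live-migration in a real deployment).
        nodes[lo][vm_hi] = nodes[hi].pop(vm_hi)
        nodes[hi][vm_lo] = nodes[lo].pop(vm_lo)
```

For example, with node rates 18 and 6 and a threshold of 2, one exchange of the hottest VM (rate 8) for the coldest (rate 1) leaves the nodes at 11 and 13, within the threshold.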

Through the above method, the global phase virtual machine relocation method equally adjusts the resource requirements of each node to alleviate the overload on a specific node, thereby preventing performance degradation of the virtual machine and enabling efficient use of shared resources.

While the invention has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention. Therefore, the scope of the present invention should not be limited to the described embodiments, but should be determined by the appended claims and their equivalents.

31: Front-End 32: LLC miss and cpu-affinity information
33: Virtual machine relocation target node 34: Virtual machine relocation target node
35: information of cluster nodes

Claims (10)

In a cloud computing system with a cluster,
A cache miss rate collector configured to collect cache miss rates of a plurality of virtual machines included in a plurality of nodes included in the cluster;
And a virtual machine relocator configured to relocate the virtual machine according to the cache miss rate.
The method of claim 1, wherein the virtual machine repositioning unit comprises a global phase virtual machine repositioning unit,
The global phase virtual machine relocation unit,
A node selector configured to receive the cache miss rate information and select nodes having the highest and lowest cache miss rates, respectively;
A virtual machine selector configured to select a virtual machine having a highest and lowest cache miss rate at each node having the selected highest and lowest cache miss rate; And
And a node virtual machine relocator configured to interchange and relocate the selected virtual machine.
The virtual machine relocator of claim 1, wherein the virtual machine relocator includes a local phase virtual machine relocator.
The local phase virtual machine relocator,
A virtual machine aligning unit that sorts the virtual machines included in the sockets in each node according to a cache miss rate;
A virtual machine grouping unit for grouping the sorted virtual machines into groups including the same number of virtual machines as the number of sockets included in each node; And
And a socket virtual machine relocator configured to relocate the virtual machines included in the group to a plurality of sockets for each group.
The method of claim 3, wherein the socket virtual machine repositioning unit
And relocating the virtual machine having the highest cache miss rate and the virtual machine having the lowest cache miss rate among the virtual machines to be included in the same socket.
The dynamic virtual machine relocation apparatus of claim 1, wherein the virtual machine relocation unit relocates the virtual machine using live-migration.
In a cloud computing system with a cluster,
Collecting cache miss rates of a plurality of virtual machines included in a plurality of nodes included in the cluster;
And relocating a virtual machine according to the cache miss rate.
7. The method of claim 6, wherein relocating the virtual machine
Receiving the cache miss rate information and selecting nodes having the highest and lowest cache miss rates, respectively;
Selecting a virtual machine having a highest and lowest cache miss rate at each node having the selected highest and lowest cache miss rate; And
Interchanging and relocating the selected virtual machine.
7. The method of claim 6, wherein relocating the virtual machine
Sorting the virtual machines included in the sockets in each node according to a cache miss rate;
Grouping the sorted virtual machines into groups including the same number of virtual machines as the number of sockets included in each node; And
Relocating a virtual machine included in the group for each group to a plurality of sockets.
9. The method of claim 8, wherein relocating to the socket
And relocating the virtual machine having the highest cache miss rate and the virtual machine having the lowest cache miss rate among the virtual machines to be included in the same socket.
7. The method of claim 6, wherein relocating the virtual machine comprises relocating the virtual machine using live-migration.
KR1020110143097A 2011-12-27 2011-12-27 Apparatus and method for dynamic virtual machine placement KR20130074953A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020110143097A KR20130074953A (en) 2011-12-27 2011-12-27 Apparatus and method for dynamic virtual machine placement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020110143097A KR20130074953A (en) 2011-12-27 2011-12-27 Apparatus and method for dynamic virtual machine placement

Publications (1)

Publication Number Publication Date
KR20130074953A true KR20130074953A (en) 2013-07-05

Family

ID=48988922

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110143097A KR20130074953A (en) 2011-12-27 2011-12-27 Apparatus and method for dynamic virtual machine placement

Country Status (1)

Country Link
KR (1) KR20130074953A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101502225B1 (en) * 2013-07-31 2015-03-12 서울대학교산학협력단 Virtual machine allocation method to minimize performance interference between virtual machines
KR20150016820A (en) * 2013-08-05 2015-02-13 한국전자통신연구원 System and method for virtual machine placement and management on cluster system
WO2015099701A1 (en) 2013-12-24 2015-07-02 Intel Corporation Cloud compute scheduling using a heuristic contention model
EP3087503A4 (en) * 2013-12-24 2018-01-17 Intel Corporation Cloud compute scheduling using a heuristic contention model
US11212235B2 (en) 2013-12-24 2021-12-28 Intel Corporation Cloud compute scheduling using a heuristic contention model
US11689471B2 (en) 2013-12-24 2023-06-27 Intel Corporation Cloud compute scheduling using a heuristic contention model
WO2022039292A1 (en) * 2020-08-19 2022-02-24 서울대학교산학협력단 Edge computing method, electronic device, and system for providing cache update and bandwidth allocation for wireless virtual reality
KR102571782B1 (en) * 2022-12-16 2023-08-29 스트라토 주식회사 Apparatus and method for virtual machine relocation using resource management pool


Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application