CN103729230A - Method and computer system for memory management on virtual machine system


Info

Publication number
CN103729230A
Authority
CN
China
Prior art keywords
virtual machine
working set
set size
stored memory
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310456258.8A
Other languages
Chinese (zh)
Other versions
CN103729230B (en)
Inventor
李翰林
阙志克
姜瑞豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial Technology Research Institute ITRI
Original Assignee
Industrial Technology Research Institute ITRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/951,475 external-priority patent/US9128843B2/en
Application filed by Industrial Technology Research Institute ITRI filed Critical Industrial Technology Research Institute ITRI
Publication of CN103729230A publication Critical patent/CN103729230A/en
Application granted granted Critical
Publication of CN103729230B publication Critical patent/CN103729230B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A method and a computer system for memory management on a virtual machine system are provided. The memory management method includes the following steps. First, a working set size of each of a plurality of virtual machines on the virtual machine system is obtained by at least one processor, wherein the working set size is an amount of memory required to run applications on each of the virtual machines. Then, an amount of storage memory is allocated to each of the virtual machines by the at least one processor according to the working set size of each of the virtual machines and at least one swapin or refault event, wherein the storage memory is a part of memory available from the computer system.

Description

Memory management method for a virtual machine system, and computer system
Technical Field
The present invention relates to a technique for performing memory management on a virtual machine system.
Background
Computer virtualization is a technique involving the creation of virtual machines, each of which behaves like a physical computer with its own operating system, and a computer virtualization architecture is generally defined by its ability to concurrently support multiple operating systems on a single physical computer platform. For example, a computer running Microsoft Windows may host a virtual machine with a Linux operating system. The host is the actual physical machine on which the virtualization takes place, and the virtual machines are regarded as guest machines. A hypervisor, also referred to as a virtual machine monitor (VMM), is a software layer that virtualizes hardware resources and presents a virtual hardware interface to at least one virtual machine. The hypervisor manages the underlying hardware resources in a manner similar to the way a traditional operating system does, and performs certain management functions with respect to the executing virtual machines. A virtual machine may be called a "guest", and the operating system running inside it may be called a "guest operating system".
Virtualized environments are currently memory-bound, which means that the physical memory of the host is the bottleneck of resource utilization in a data center. Memory virtualization decouples physical memory resources from the data center and aggregates these resources into a virtualized memory pool that is accessible to the guest operating systems or to the applications running on top of them. With regard to memory virtualization, memory sharing is one of the important topics in the management and utilization of memory resources.
When multiple virtual machines run on a host with low memory, the memory allocation among the virtual machines becomes critical to application performance. The physical memory should be distributed among the virtual machines in a fair manner, and this operation is referred to as "memory balancing".
The simplest form of memory balancing is to divide the free physical memory by the number of virtual machines and give an equal amount of memory to each of them. However, such a mechanism does not take into account the working set size of each of the virtual machines, where the working set size is the amount of memory required to run the applications on each virtual machine. In other words, this approach implicitly assumes that the virtual machines are identical, including the applications running on them and their input workloads.
Another approach is to allocate to each of the virtual machines a percentage of the free physical memory proportional to its working set size. The intuition is to give relatively more memory to the virtual machines with larger demands on memory resources. Under this allocation, the difference between a virtual machine's working set size and its allocated memory is also proportional to its working set size. This means that, when the memory allocation of a virtual machine is reduced from its working set size to a fixed percentage of it, the penalty of any extra events (for example, refaults or swap-in events) may be higher for the virtual machines with larger working set sizes.
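For illustration only (not part of the patent text), the two baseline policies described above can be sketched in Python; the function names and the example working set sizes below are assumptions chosen for demonstration.

```python
# Two baseline memory-balancing policies described above (illustrative only).

def equal_split(free_memory, working_sets):
    """Give every virtual machine the same share of the free physical memory."""
    share = free_memory / len(working_sets)
    return [share] * len(working_sets)

def proportional_split(free_memory, working_sets):
    """Give each virtual machine a share proportional to its working set size."""
    total = sum(working_sets)
    return [free_memory * wss / total for wss in working_sets]

# Hypothetical example: two VMs with working sets of 600 MB and 300 MB,
# and 600 MB of free physical memory.
print(equal_split(600, [600, 300]))         # [300.0, 300.0]
print(proportional_split(600, [600, 300]))  # [400.0, 200.0]
```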
In order to prevent virtual machines from suffering severe performance degradation due to insufficient memory, the performance overhead of each of the virtual machines running on the same host may need to be equalized by means of a more suitable memory balancing policy.
Summary of the Invention
One embodiment of the disclosure relates to a memory management method for a virtual machine system hosted by a computer system. The memory management method includes the following steps. First, the working set size of each of a plurality of virtual machines on the virtual machine system is obtained by at least one processor, where the working set size is the amount of memory required to run the applications on each of the virtual machines. Then, an amount of storage memory is allocated to each of the virtual machines by the at least one processor according to the working set size of each of the virtual machines and at least one swap-in or refault event, where the storage memory is a part of the memory available in the computer system.
Another embodiment of the disclosure relates to a computer system including a system memory and at least one processor. The at least one processor is coupled to the system memory and performs the following operations for memory management on a virtual machine system. The at least one processor obtains the working set size of each of a plurality of virtual machines on the virtual machine system, where the working set size is the amount of memory required to run the applications on each of the virtual machines. According to the working set size of each of the virtual machines and at least one swap-in or refault event, the at least one processor further allocates an amount of storage memory to each of the virtual machines, where the storage memory is a part of the memory available in the system memory.
In order to make the aforementioned and other features and advantages of the present invention comprehensible, several embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention. They are not, however, intended to limit the scope of the present invention, which is defined by the appended claims.
Figure 1A is a block diagram of a computer system according to an embodiment of the invention.
Figure 1B is a block diagram of a virtual machine system according to an embodiment of the invention.
Fig. 2 illustrates a memory management method on a virtual machine system according to an embodiment of the invention.
Description of the Embodiments
Reference will now be made in detail to the embodiments of the present invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the description to refer to the same or similar parts.
For illustrative purposes, one processor and one system memory are used in the following embodiments, but the present invention is not limited thereto. In other embodiments, more than one processor may be used.
Figure 1A is a block diagram of a computer system according to an embodiment of the invention. Referring to Figure 1A, the computer system 100 includes a processor 110, a system memory 120, and other standard peripheral components (not shown), where the system memory 120 is coupled to the processor 110.
The processor 110 may be a dedicated or specialized processor configured to perform particular tasks by executing machine-readable software code that defines functions related to its operations, thereby carrying out functional operations through communication with the other components of the computer system 100.
The system memory 120 stores software such as an operating system and temporarily stores data or application programs that are currently active or frequently used. Hence, the system memory 120, also referred to as physical memory, may be a faster memory with a shorter access time, such as random access memory (RAM), static random access memory (SRAM), or dynamic random access memory (DRAM).
Virtual memory is a technique for managing the resources of the system memory 120. It provides the illusion of a large amount of memory. Both the virtual memory and the system memory 120 are divided into blocks of contiguous memory addresses, which are also referred to as memory pages.
A hypervisor is installed on the computer system 100 and supports a virtual machine execution space within which multiple virtual machines may be concurrently instantiated and executed. Figure 1B is a block diagram of a virtual machine system according to an embodiment of the invention.
Referring to Figure 1B together with Figure 1A, the virtual machine system 100' includes a plurality of virtual machines 1501-150N, a hypervisor 160, and virtual hardware 170. It should be noted that the embodiments of the present invention cover a computer system 100 that hosts the virtual machines 1501-150N simultaneously; for simplicity and ease of explanation, only two virtual machines 1501 and 150N are shown in the following embodiments unless otherwise specified. Each of the virtual machines 1501 and 150N includes a guest operating system, such as guest operating system 1551 or 155N, and various guest software applications (not shown). Each of the guest operating systems includes a guest kernel, such as guest kernel 1561 or 156N. The virtual hardware 170, including a processor, memory, and I/O devices, is abstracted and allocated as virtual processors, virtual memory, and virtual I/O devices to the virtual machines 1501 and 150N running on top of it. The hypervisor 160 manages the virtual machines 1501 and 150N and provides emulated hardware and firmware resources. In one of the embodiments, a Linux distribution may be installed as the guest operating systems 1551 and 155N in the virtual machines to execute any supported applications, and the open-source software Xen, which supports most Linux distributions, may be provided as the hypervisor 160. Each of the guest kernels 1561 and 156N may be a dom0 kernel, and each of the guest operating systems 1551 and 155N includes a balloon driver (not shown). In cooperation with the hypervisor 160, the balloon driver may allocate or de-allocate the virtual memory of the virtual machines by invoking memory management algorithms. To this end, swap-in and refault events may be intercepted at the guest kernels 1561 and 156N so as to quantify the performance overhead of the guest virtual machines 1501 and 150N, and the amount of memory allocated to the virtual machines 1501 and 150N may be adjusted so that, by leveraging the page reclaiming mechanism of the guest operating systems 1551 and 155N, the overhead of each of the virtual machines 1501 and 150N is equalized.
For page reclaiming, the processor 110 uses the least recently used (LRU) criteria to determine which pages to evict and maintains an LRU list 157, which orders the memory pages that have been accessed by the virtual machines 1501 and 150N according to their last access time for two major types of memory: anonymous memory and page cache. The memory pages of anonymous memory are used by the heaps and stacks of user processes, and the memory pages of the page cache are backed by disk data, where the contents are cached in memory after the first access to the disk data so as to reduce future disk I/O.
On the virtual machine system 100', if a memory page on the LRU list is anonymous memory, the guest kernel 1561 or 156N may swap its content out to a swap disk (swap space, not shown), mark the corresponding page table entry of the process as not present, and then free the corresponding memory page. Later, if the memory page is accessed again, a copy-on-access (COA) mechanism is performed: the page content is brought from the swap disk into a newly allocated memory page, that is, it is swapped in. Alternatively, if the memory page on the LRU list belongs to the page cache, the guest kernel 1561 or 156N may flush the page content out to disk when the content is dirty, and the page is freed afterwards. Upon the next file access, the guest kernel 1561 or 156N must perform a disk access again (referred to as a refault) to bring the content back into a newly allocated page in the page cache.
Fig. 2 illustrates a memory management method on a virtual machine system according to an embodiment of the invention.
Before referring to Fig. 2, it should be noted that a portion of the system memory 120 may be used by the virtual machines 1501-150N, and this portion of the system memory is defined as "storage memory". Referring now to Fig. 2 together with the components of Figure 1A and Figure 1B, the processor 110 obtains the working set size of each of the virtual machines 1501-150N on the virtual machine system 100' (step S201). Then, according to the working set size of each of the virtual machines 1501-150N and at least one swap-in or refault event, the processor 110 allocates an amount of storage memory to each of the virtual machines 1501-150N (step S203). It should be noted that the sum of the swap-in count and the refault count may be defined as the overhead count, which is also the number of pages read back into a virtual machine.
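As a rough illustration of the overhead-count bookkeeping mentioned above, the following Python sketch counts intercepted swap-in and refault events per virtual machine; the event names, data structure, and interfaces are assumptions, not the patent's actual implementation.

```python
# Per-VM overhead accounting sketch: overhead count = swap-in count + refault
# count, i.e. the number of pages read back into the virtual machine.
from dataclasses import dataclass

@dataclass
class VMStats:
    swapin_count: int = 0    # anonymous pages brought back from the swap disk
    refault_count: int = 0   # page-cache pages read back from disk

    @property
    def overhead_count(self) -> int:
        return self.swapin_count + self.refault_count

def record_event(stats: VMStats, event: str) -> None:
    """Record one intercepted guest-kernel event ('swapin' or 'refault')."""
    if event == "swapin":
        stats.swapin_count += 1
    elif event == "refault":
        stats.refault_count += 1

stats = VMStats()
record_event(stats, "swapin")
record_event(stats, "refault")
print(stats.overhead_count)  # 2
```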
More specifically, in one of the embodiments, it is first assumed that the overhead of each swap-in and refault event is identical across the different virtual machines 1501-150N, which have different workloads and different memory allocation amounts. Given an amount of storage memory M_avail and N virtual machines, where virtual machine 150i has a working set size WSS_i, i = 1, 2, ..., N, the processor 110 subtracts the storage memory M_avail from the sum of the working set sizes of the virtual machines and then divides the difference by the number of virtual machines N. The quotient is defined as the first reduction term. Then, the processor 110 subtracts the first reduction term from the working set size WSS_i, and finally generates the memory allocation, or balloon target, for virtual machine 150i. That is, the amount of storage memory allocated to each of the virtual machines 1501-150N satisfies Equation (1):

$$BT_i = WSS_i - \frac{\left(\sum_{i=1}^{N} WSS_i\right) - M_{avail}}{N} \qquad \text{Equation (1)}$$

where BT_i represents the balloon target of virtual machine 150i (i.e., the amount of storage memory allocated to virtual machine 150i), WSS_i represents the working set size of virtual machine 150i, N represents the number of virtual machines, and M_avail represents the amount of storage memory.
As a numerical example of this embodiment, suppose that the two virtual machines 1501 and 150N are given working set sizes of 600 MB and 300 MB, respectively, and suppose that the storage memory is 600 MB. First, the shortfall in storage memory is divided by 2 (i.e., 150 MB each), and the processor 110 assigns the same shortfall to the two virtual machines 1501 and 150N. The virtual machines 1501 and 150N are then allocated 450 MB (short by 150 MB) and 150 MB (short by 150 MB), respectively. After this memory allocation, the virtual machines 1501 and 150N are expected to have the same overhead count.
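A minimal Python sketch of Equation (1), reproducing the numerical example above, is given below; the function name and MB units are illustrative assumptions.

```python
# Sketch of the balloon-target computation of Equation (1).

def balloon_targets_equal_overhead(working_sets, m_avail):
    """Allocate storage memory assuming every swap-in/refault costs the same."""
    n = len(working_sets)
    first_reduction = (sum(working_sets) - m_avail) / n
    return [wss - first_reduction for wss in working_sets]

# Example from the text: working set sizes of 600 MB and 300 MB, 600 MB of
# storage memory; each VM falls 150 MB short of its working set.
print(balloon_targets_equal_overhead([600, 300], 600))  # [450.0, 150.0]
```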
The assumption that each swap-in and refault event has the same overhead on the different virtual machines 1501-150N is not always correct, because the time cost of each swap-in or refault event may differ within a virtual machine or between virtual machines. For example, when the amount of memory assigned to a virtual machine is well below its working set (for instance, when the metadata of the swap subsystem has to be modified more often), each swap-in operation may be slowed down. To handle this situation, in another embodiment the goal is to balance the swap-in and refault time among all the virtual machines 1501-150N. The sum of the swap-in and refault times of each virtual machine 150i is called the overhead time overhead_time_i, where i = 1, 2, ..., N. The reduction of the memory allocation of virtual machine 150i is defined as the second reduction term S_i, which is proportional to the reciprocal of the overhead time overhead_time_i spent on swap-in and refault events, because the larger the overhead time overhead_time_i is, the smaller the reduction of the memory allocation of virtual machine 150i should be. Then, the processor 110 subtracts the second reduction term from the working set size WSS_i, and finally generates the memory allocation, or balloon target, for virtual machine 150i. That is, the amount of memory allocated to each of the virtual machines 1501-150N satisfies Equation (2):

$$BT_i = WSS_i - \left[\left(\sum_{i=1}^{N} WSS_i\right) - M_{avail}\right] \times \frac{1/\text{overhead\_time}_i}{\sum_{i=1}^{N} 1/\text{overhead\_time}_i} \qquad \text{Equation (2)}$$

where BT_i represents the amount of storage memory allocated to virtual machine 150i, WSS_i represents the working set size of virtual machine 150i, N represents the number of virtual machines, M_avail represents the amount of storage memory, and overhead_time_i represents the overhead time of virtual machine 150i.
As a numerical example of this embodiment, suppose that the two virtual machines 1501 and 150N are again given working set sizes of 600 MB and 300 MB, respectively, and that the storage memory is 600 MB. Suppose the ratio of the overhead times overhead_time_i of the virtual machines 1501 and 150N is 2:1, with i = 1, N. In this embodiment, the shortfall is assigned to the virtual machines 1501 and 150N in inverse proportion to their overhead times overhead_time_i (that is, virtual machine 1501 is short by 100 MB and virtual machine 150N is short by 200 MB). Hence, the final memory allocation for virtual machine 1501 is 500 MB, and the final memory allocation for virtual machine 150N is 100 MB. The overhead times overhead_time_i of the virtual machines 1501 and 150N are expected to be balanced, and the performance overhead can be equalized.
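Similarly, the following Python sketch of Equation (2) reproduces the 2:1 overhead-time example above; again, the function name and units are illustrative assumptions.

```python
# Sketch of the overhead-time-weighted allocation of Equation (2).

def balloon_targets_weighted(working_sets, m_avail, overhead_times):
    """Split the memory shortfall in inverse proportion to each VM's overhead time."""
    shortfall = sum(working_sets) - m_avail
    inverse_sum = sum(1.0 / t for t in overhead_times)
    return [
        wss - shortfall * (1.0 / t) / inverse_sum
        for wss, t in zip(working_sets, overhead_times)
    ]

# Example from the text: overhead times in a 2:1 ratio, so VM 1501 gives up
# 100 MB and VM 150N gives up 200 MB of its working set.
print(balloon_targets_weighted([600, 300], 600, [2.0, 1.0]))  # [500.0, 100.0]
```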
It should be noted that, with regard to the emergency memory pool of the hypervisor 160, the processor 110 starts the memory balancing mechanism when it detects that the free memory of the hypervisor 160 is below a configured lower bound, for example, below 1%. The processor 110 may receive swap-in and refault information from the balloon driver of each of the virtual machines 1501-150N, determine the working set of each of the virtual machines 1501-150N, and compute new balloon targets accordingly. Through the aforementioned memory management method, all of the virtual machines 1501-150N can degrade gracefully without being excessively starved of memory resources.
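The triggering logic described in this paragraph might be sketched as follows, assuming a 1% threshold and hypothetical per-VM report fields; none of these names come from the patent itself.

```python
# Sketch of the balancing trigger: rebalance only when the hypervisor's free
# memory falls below a configured lower bound (1% here). The report fields
# and the 'allocate' callback are hypothetical placeholders.

FREE_MEMORY_LOWER_BOUND = 0.01

def maybe_rebalance(free_fraction, vm_reports, m_avail, allocate):
    """vm_reports: per-VM dicts with 'wss', 'swapin_time' and 'refault_time' keys."""
    if free_fraction >= FREE_MEMORY_LOWER_BOUND:
        return None  # enough free memory; keep the current balloon targets
    working_sets = [r["wss"] for r in vm_reports]
    overhead_times = [r["swapin_time"] + r["refault_time"] for r in vm_reports]
    return allocate(working_sets, m_avail, overhead_times)
```

Here, the allocate argument could be the balloon_targets_weighted helper from the previous sketch.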
In one embodiment, the memory management method described above may be implemented by executing a prepared program on a computer such as a personal computer or a workstation. The program is stored on a computer-readable recording medium (for example, a hard disk, a floppy disk, a CD-ROM, an MO, or a DVD), read from the medium, and executed by the computer. The program may be distributed through a network such as the Internet.
In summary, by leveraging the existing page reclaiming mechanism of the guest operating systems, the memory management method of the present invention is designed to allocate an amount of storage memory to each of the virtual machines on the host system according to the working set size of each of the virtual machines and at least one swap-in or refault event. In this way, the performance overhead of each of the virtual machines running on the same host system can be equalized, thereby preventing severe performance degradation of the virtual machines caused by insufficient memory.
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the disclosed embodiments without departing from the scope or spirit of the invention. In view of the foregoing, it is intended that the present invention covers such modifications and variations provided they fall within the scope of the appended claims and their equivalents.

Claims (14)

1. A memory management method for a virtual machine system, the memory management method comprising:
obtaining, by at least one processor, a working set size of each of a plurality of virtual machines on the virtual machine system, the working set size being the amount of memory required to run applications on each of the virtual machines; and
allocating, by the at least one processor, an amount of storage memory to each of the virtual machines according to the working set size of each of the virtual machines and at least one swap-in or refault event, the storage memory being a part of the memory available in the computer system.
2. The memory management method according to claim 1, wherein the step of allocating, by the at least one processor, the amount of storage memory to each of the virtual machines according to the working set size of each of the virtual machines and the at least one swap-in or refault event comprises:
allocating, by the at least one processor, the amount of storage memory to each of the virtual machines according to the working set size of each of the virtual machines and a first reduction term or a second reduction term, wherein the first reduction term is associated with the amount of storage memory, the number of virtual machines on the virtual machine system, and the sum of the working set sizes of the virtual machines, and wherein the second reduction term is associated with the amount of storage memory, the sum of the working set sizes of the virtual machines, and an overhead time according to the at least one swap-in or refault event.
3. The memory management method according to claim 2, wherein the step of allocating, by the at least one processor, the amount of storage memory to each of the virtual machines according to the working set size of each of the virtual machines and the first reduction term comprises:
calculating, by the at least one processor, the first reduction term according to the working set size of each of the virtual machines, the sum of the working set sizes of the virtual machines, and the number of virtual machines on the virtual machine system; and
subtracting, by the at least one processor, the first reduction term from the working set size so as to allocate the amount of storage memory to each of the virtual machines.
4. The memory management method according to claim 3, wherein the amount of memory allocated to each of the virtual machines satisfies Equation (1):

$$BT_i = WSS_i - \frac{\left(\sum_{i=1}^{N} WSS_i\right) - M_{avail}}{N} \qquad \text{Equation (1)}$$

wherein BT_i represents the amount of storage memory allocated to virtual machine i, WSS_i represents the working set size of virtual machine i, N represents the number of virtual machines, and M_avail represents the amount of storage memory.
5. The memory management method according to claim 2, wherein the step of allocating, by the at least one processor, the amount of storage memory to each of the virtual machines according to the working set size of each of the virtual machines and the second reduction term comprises:
calculating, by the at least one processor, the second reduction term according to the working set size of each of the virtual machines, the sum of the working set sizes of the virtual machines, the amount of storage memory, the number of virtual machines on the virtual machine system, and the overhead time according to the at least one swap-in or refault event; and
subtracting, by the at least one processor, the second reduction term from the working set size so as to allocate the amount of storage memory to each of the virtual machines.
6. The memory management method according to claim 5, wherein the second reduction term is inversely proportional to the overhead time.
7. The memory management method according to claim 6, wherein the amount of storage memory allocated to each of the virtual machines satisfies Equation (2):

$$BT_i = WSS_i - \left[\left(\sum_{i=1}^{N} WSS_i\right) - M_{avail}\right] \times \frac{1/\text{overhead\_time}_i}{\sum_{i=1}^{N} 1/\text{overhead\_time}_i} \qquad \text{Equation (2)}$$

wherein BT_i represents the amount of storage memory allocated to virtual machine i, WSS_i represents the working set size of virtual machine i, N represents the number of virtual machines, M_avail represents the amount of storage memory, and overhead_time_i represents the overhead time of virtual machine i.
8. A computer system, comprising:
a system memory; and
at least one processor, coupled to the system memory, the at least one processor performing operations for memory management on a virtual machine system, the operations comprising:
obtaining a working set size of each of a plurality of virtual machines on the virtual machine system, the working set size being the amount of memory required to run applications on each of the virtual machines; and
allocating an amount of storage memory to each of the virtual machines according to the working set size of each of the virtual machines and at least one swap-in or refault event, the storage memory being a part of the memory available in the system memory.
9. The computer system according to claim 8, wherein the at least one processor allocates the amount of storage memory to each of the virtual machines according to the working set size of each of the virtual machines and a first reduction term or a second reduction term, wherein the first reduction term is associated with the amount of storage memory, the number of virtual machines on the virtual machine system, and the sum of the working set sizes of the virtual machines, and wherein the second reduction term is associated with the amount of storage memory, the sum of the working set sizes of the virtual machines, and an overhead time according to the at least one swap-in or refault event.
10. The computer system according to claim 9, wherein the at least one processor calculates the first reduction term according to the working set size of each of the virtual machines, the sum of the working set sizes of the virtual machines, and the number of virtual machines on the virtual machine system, and subtracts the first reduction term from the working set size so as to allocate the amount of storage memory to each of the virtual machines.
11. The computer system according to claim 10, wherein the amount of memory allocated to each of the virtual machines satisfies Equation (1):

$$BT_i = WSS_i - \frac{\left(\sum_{i=1}^{N} WSS_i\right) - M_{avail}}{N} \qquad \text{Equation (1)}$$

wherein BT_i represents the amount of storage memory allocated to virtual machine i, WSS_i represents the working set size of virtual machine i, N represents the number of virtual machines, and M_avail represents the amount of storage memory.
12. The computer system according to claim 9, wherein the at least one processor calculates the second reduction term according to the working set size of each of the virtual machines, the sum of the working set sizes of the virtual machines, the amount of storage memory, the number of virtual machines on the virtual machine system, and the overhead time according to the at least one swap-in or refault event, and subtracts the second reduction term from the working set size so as to allocate the amount of storage memory to each of the virtual machines.
13. The computer system according to claim 12, wherein the second reduction term is inversely proportional to the overhead time.
14. The computer system according to claim 13, wherein the amount of storage memory allocated to each of the virtual machines satisfies Equation (2):

$$BT_i = WSS_i - \left[\left(\sum_{i=1}^{N} WSS_i\right) - M_{avail}\right] \times \frac{1/\text{overhead\_time}_i}{\sum_{i=1}^{N} 1/\text{overhead\_time}_i} \qquad \text{Equation (2)}$$

wherein BT_i represents the amount of storage memory allocated to virtual machine i, WSS_i represents the working set size of virtual machine i, N represents the number of virtual machines, M_avail represents the amount of storage memory, and overhead_time_i represents the overhead time of virtual machine i.
CN201310456258.8A 2012-10-11 2013-09-29 Method and computer system for memory management on virtual machine system Active CN103729230B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201261712279P 2012-10-11 2012-10-11
US61/712,279 2012-10-11
US13/951,475 2013-07-26
US13/951,475 US9128843B2 (en) 2012-10-11 2013-07-26 Method and computer system for memory management on virtual machine system

Publications (2)

Publication Number Publication Date
CN103729230A true CN103729230A (en) 2014-04-16
CN103729230B CN103729230B (en) 2017-04-12

Family

ID=50453313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310456258.8A Active CN103729230B (en) 2012-10-11 2013-09-29 Method and computer system for memory management on virtual machine system

Country Status (1)

Country Link
CN (1) CN103729230B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020183311A1 (en) * 2019-03-08 2020-09-17 International Business Machines Corporation Secure storage query and donation
US11176054B2 (en) 2019-03-08 2021-11-16 International Business Machines Corporation Host virtual address space for secure interface control storage
US11182192B2 (en) 2019-03-08 2021-11-23 International Business Machines Corporation Controlling access to secure storage of a virtual machine
US11283800B2 (en) 2019-03-08 2022-03-22 International Business Machines Corporation Secure interface control secure storage hardware tagging
US11455398B2 (en) 2019-03-08 2022-09-27 International Business Machines Corporation Testing storage protection hardware in a secure virtual machine environment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050235123A1 (en) * 2004-04-19 2005-10-20 Zimmer Vincent J Method to manage memory in a platform with virtual machines
CN1696902A (en) * 2004-05-11 2005-11-16 国际商业机器公司 System, method and program to migrate a virtual machine
CN101681268A (en) * 2007-06-27 2010-03-24 国际商业机器公司 System, method and program to manage memory of a virtual machine
CN101924693A (en) * 2009-04-01 2010-12-22 威睿公司 Method and system for migrating processes between virtual machines

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050235123A1 (en) * 2004-04-19 2005-10-20 Zimmer Vincent J Method to manage memory in a platform with virtual machines
CN1696902A (en) * 2004-05-11 2005-11-16 国际商业机器公司 System, method and program to migrate a virtual machine
CN101681268A (en) * 2007-06-27 2010-03-24 国际商业机器公司 System, method and program to manage memory of a virtual machine
CN101924693A (en) * 2009-04-01 2010-12-22 威睿公司 Method and system for migrating processes between virtual machines

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020183311A1 (en) * 2019-03-08 2020-09-17 International Business Machines Corporation Secure storage query and donation
US11068310B2 (en) 2019-03-08 2021-07-20 International Business Machines Corporation Secure storage query and donation
CN113544642A (en) * 2019-03-08 2021-10-22 国际商业机器公司 Secure storage query and donation
US11176054B2 (en) 2019-03-08 2021-11-16 International Business Machines Corporation Host virtual address space for secure interface control storage
US11182192B2 (en) 2019-03-08 2021-11-23 International Business Machines Corporation Controlling access to secure storage of a virtual machine
GB2596024A (en) * 2019-03-08 2021-12-15 Ibm Secure storage query and donation
US11283800B2 (en) 2019-03-08 2022-03-22 International Business Machines Corporation Secure interface control secure storage hardware tagging
GB2596024B (en) * 2019-03-08 2022-04-27 Ibm Secure storage query and donation
US11455398B2 (en) 2019-03-08 2022-09-27 International Business Machines Corporation Testing storage protection hardware in a secure virtual machine environment
US11635991B2 (en) 2019-03-08 2023-04-25 International Business Machines Corporation Secure storage query and donation
US11669462B2 (en) 2019-03-08 2023-06-06 International Business Machines Corporation Host virtual address space for secure interface control storage

Also Published As

Publication number Publication date
CN103729230B (en) 2017-04-12

Similar Documents

Publication Publication Date Title
US9128843B2 (en) Method and computer system for memory management on virtual machine system
KR101137172B1 (en) System, method and program to manage memory of a virtual machine
US8307187B2 (en) VDI Storage overcommit and rebalancing
US9026630B2 (en) Managing resources in a distributed system using dynamic clusters
CN101377745B (en) Virtual computer system and method for implementing data sharing between each field
US8601201B2 (en) Managing memory across a network of cloned virtual machines
EP2581828B1 (en) Method for creating virtual machine, virtual machine monitor and virtual machine system
Ahn et al. Improving I/O Resource Sharing of Linux Cgroup for NVMe SSDs on Multi-core Systems
US10169088B2 (en) Lockless free memory ballooning for virtual machines
CN103729230A (en) Method and computer system for memory management on virtual machine system
US9460009B1 (en) Logical unit creation in data storage system
US11093403B2 (en) System and methods of a self-tuning cache sizing system in a cache partitioning system
US9971785B1 (en) System and methods for performing distributed data replication in a networked virtualization environment
CN109766179B (en) Video memory allocation method and device
US20200201691A1 (en) Enhanced message control banks
Chang et al. Assessment of hypervisor and shared storage for cloud computing server
CN102981962A (en) Method for fast scanning dirty page bitmap of full-virtualization virtual machine
US10592297B2 (en) Use minimal variance to distribute disk slices to avoid over-commitment
Ge et al. Memory sharing for handling memory overload on physical machines in cloud data centers
US11334249B2 (en) Management of unmap processing rates in distributed and shared data storage volumes
Shaikh et al. Dynamic memory allocation technique for virtual machines
Qazi et al. Remote memory swapping for virtual machines in commercial infrastructure-as-a-service
CN114860439A (en) Memory allocation method, host machine, distributed system and program product
Wu et al. Green Master Based on MapReduce Cluster
CN108932205A (en) A kind of method and apparatus of defence RowHammer attack

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant