CN104461928B - Method and device for partitioning a cache - Google Patents
Method and device for partitioning a cache
- Publication number
- CN104461928B CN104461928B CN201310422795.0A CN201310422795A CN104461928B CN 104461928 B CN104461928 B CN 104461928B CN 201310422795 A CN201310422795 A CN 201310422795A CN 104461928 B CN104461928 B CN 104461928B
- Authority
- CN
- China
- Prior art keywords
- virtual machine
- VMS
- cache
- cached data
- cache set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0844—Multiple simultaneous or quasi-simultaneous cache accessing
- G06F12/0846—Cache with multiple tag or data arrays being simultaneously accessible
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
- G06F12/121—Replacement control using replacement algorithms
- G06F12/123—Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1016—Performance improvement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/15—Use in a specific computing environment
- G06F2212/152—Virtualized environment, e.g. logically partitioned system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/60—Details of cache memory
- G06F2212/601—Reconfiguration of cache memory
Abstract
The embodiments of the invention disclose a method and a device for partitioning a cache, relate to the field of computer technology, and can improve the performance of a virtual machine while the virtual machine is being started or replicated. The method of the invention includes: when none of the cached data in the cache set accessed by a first virtual machine is hit, determining the virtual machine management operation state (VMS) of the first virtual machine; and, when the VMS of the first virtual machine is in a first state, replacing the least recently used cached data in the cache set that belongs to a second virtual machine. The invention is applicable to virtualized environments.
Description
Technical field
The present invention relates to the field of computer technology, and more particularly to a method and a device for partitioning a cache.
Background art
Virtualization is one of the hot technologies in today's enterprises and has been widely applied in research fields such as server consolidation, system migration, and load isolation. Current virtualization technology can strictly partition physical resources such as multi-core processors, memory, and input/output (I/O) devices, eliminating contention for these physical resources among virtual machines and guaranteeing the functional and performance isolation of the virtual machines. However, shared hardware resources such as the on-chip multilevel cache are not partitioned by current virtualization technology, so implicit contention for these resources arises among the virtual machines and degrades the performance of some of them. For example, some virtualized applications are very sensitive to the capacity of the LLC (Last-Level Cache, the shared last-level cache); when the LLC capacity available to a virtual machine shrinks because of contention, the performance of that virtual machine drops.
To remedy this situation and improve virtual machine performance, researchers have proposed shared-cache partitioning mechanisms. In the prior art, the cache is partitioned by a modified LRU (Least Recently Used) algorithm together with page coloring, which limits the shared cache capacity that each virtual machine may use and thereby reduces contention in the shared cache.
The prior art has at least the following problem: in a multi-tenant data center, the effects of the different management operations that users perform on virtual machines can accumulate. For example, when a large number of virtual machines are started simultaneously within a short period, or when the disk image of a virtual machine being replicated is large and the replication takes a long time, the accumulated effects of these operations degrade virtual machine performance. The cache-partitioning methods of the prior art cannot adequately improve virtual machine performance during startup and replication, so virtual machine performance remains low.
Summary of the invention
The embodiments of the present invention provide a method and a device for partitioning a cache, which can solve the problem of low virtual machine performance while a virtual machine performs startup or replication.
To achieve the above objectives, the embodiments of the present invention adopt the following technical solutions.
In a first aspect, an embodiment of the present invention provides a method for partitioning a cache, including:
when none of the cached data in the cache set accessed by a first virtual machine is hit, determining the virtual machine management operation state VMS of the first virtual machine; and
when the VMS of the first virtual machine is in a first state, replacing the least recently used cached data in the cache set that belongs to a second virtual machine.
With reference to the first aspect, in a first possible implementation, before the determining of the virtual machine management operation state VMS of the first virtual machine, the method further includes:
adding a virtual machine (VM) register, the data structure of which includes the VMS and a virtual machine identifier VMID.
Further, before the determining of the virtual machine management operation state VMS of the first virtual machine, the method further includes:
when the first virtual machine performs a startup operation or a replication operation, setting the VMS of the first virtual machine to the first state; and when the first virtual machine is not performing a startup operation or a replication operation, setting the VMS of the first virtual machine to a second state.
Optionally, when the VMS of the first virtual machine is in the first state, before the replacing of the least recently used cached data in the cache set that belongs to the second virtual machine, the method further includes:
determining whether the cache miss rate of the second virtual machine exceeds a miss-rate threshold;
if the cache miss rate of the second virtual machine exceeds the miss-rate threshold, replacing the least recently used cached data in the cache set; and
if the cache miss rate of the second virtual machine does not exceed the miss-rate threshold, determining the number of cache blocks in the cache set that belong to the second virtual machine.
With reference to the first possible implementation, in a second possible implementation, after the adding of the virtual machine VM register, the method further includes:
extending the tag bits of the cache address and adding the VMID to the tag bits, where the cached data at the cache address belongs to the virtual machine corresponding to the VMID.
Optionally, after the determining of the number of cache blocks in the cache set that belong to the second virtual machine, the method further includes:
determining whether the number of cache blocks of the second virtual machine is less than a quantity threshold;
if the number of cache blocks of the second virtual machine is less than the quantity threshold, replacing the least recently used cached data in the cache set; and
if the number of cache blocks of the second virtual machine is not less than the quantity threshold, replacing the least recently used cached data in the cache set that belongs to the second virtual machine.
With reference to the first aspect or the first possible implementation of the first aspect, in a third possible implementation, when the VMS of the first virtual machine is in the second state, the least recently used cached data in the cache set is replaced.
With reference to the first aspect or any of the possible implementations, in a fourth possible implementation, the second virtual machine is configured to control the physical input/output (I/O) resources of the virtualized environment, to interact with the first virtual machine, and to start the first virtual machine.
In a second aspect, an embodiment of the present invention provides a device for partitioning a cache, including:
a judging unit, configured to determine, when none of the cached data in the cache set accessed by a first virtual machine is hit, the virtual machine management operation state VMS of the first virtual machine; and
a replacement unit, configured to replace, when the VMS of the first virtual machine is in a first state, the least recently used cached data in the cache set that belongs to a second virtual machine.
With reference to the second aspect, in a first possible implementation, the device further includes:
an adding unit, configured to add a virtual machine (VM) register, the data structure of which includes the VMS and a virtual machine identifier VMID.
Further, the device includes a setting unit, configured to set the VMS of the first virtual machine to the first state when the first virtual machine performs a startup operation or a replication operation, and to set the VMS of the first virtual machine to a second state when the first virtual machine is not performing a startup operation or a replication operation.
Optionally, the judging unit is further configured to determine whether the cache miss rate of the second virtual machine exceeds a miss-rate threshold, and the replacement unit is further configured to replace the least recently used cached data in the cache set when the cache miss rate of the second virtual machine exceeds the miss-rate threshold. The device further includes:
a determination unit, configured to determine, when the cache miss rate of the second virtual machine does not exceed the miss-rate threshold, the number of cache blocks in the cache set that belong to the second virtual machine.
With reference to the first possible implementation, in a second possible implementation, the device further includes:
an extension unit, configured to extend the tag bits of the cache address, where the VMID added by the adding unit is placed in the tag bits and the cached data at the cache address belongs to the virtual machine corresponding to the VMID.
Optionally, the judging unit is further configured to determine whether the number of cache blocks of the second virtual machine is less than the quantity threshold determined by the determination unit; the replacement unit is further configured to replace the least recently used cached data in the cache set when the number of cache blocks of the second virtual machine is less than that quantity threshold, and to replace the least recently used cached data in the cache set that belongs to the second virtual machine when the number of cache blocks of the second virtual machine is not less than that quantity threshold.
With reference to the second aspect or the first possible implementation of the second aspect, in a third possible implementation, the replacement unit is further configured to replace the least recently used cached data in the cache set when the VMS of the first virtual machine is in the second state.
With reference to the second aspect or any of the possible implementations, in a fourth possible implementation, the second virtual machine is configured to control the physical input/output (I/O) resources of the virtualized environment, to interact with the first virtual machine, and to start the first virtual machine.
Compared with the prior art, in which virtual machine performance cannot be adequately improved during startup and replication and therefore remains low, the method and device for partitioning the shared last-level cache provided by the embodiments of the present invention partition the shared last-level cache between the first virtual machine and the second virtual machine while a virtual machine is being started or replicated, reducing the cache capacity occupied by the second virtual machine. Because the second virtual machine is insensitive to cache capacity, even a very high cache miss rate has little effect on its performance. Thus, by reducing the cache capacity occupied by the second virtual machine, the cache capacity available to the first virtual machine is increased, contention for the cache among the virtual machines is reduced, and the performance of a virtual machine during startup or replication is improved.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for the embodiments are briefly introduced below. Apparently, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is the method flow diagram that one embodiment of the invention provides;
Fig. 2 is the method flow diagram that further embodiment of this invention provides;
Fig. 3 is the memory access address structure figure that further embodiment of this invention provides;
Fig. 4 is the buffer structure figure that further embodiment of this invention provides;
Fig. 5 is a cache structure diagram, provided by another embodiment of the present invention, after data replacement using the conventional LRU;
Fig. 6 is a cache structure diagram, provided by another embodiment of the present invention, after data replacement using the improved LRU;
Fig. 7, Fig. 8 are the apparatus structure schematic diagram that further embodiment of this invention provides;
Fig. 9 is the apparatus structure schematic diagram that further embodiment of this invention provides.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
An embodiment of the present invention provides a method for partitioning a cache, for use in a virtualized environment. A first virtual machine and a second virtual machine run on the same virtualization platform. The second virtual machine is a privileged virtual machine running on the virtual machine monitor: it controls the physical I/O resources and interacts with the first virtual machine. The first virtual machine is a guest virtual machine, and multiple first virtual machines can be started via the second virtual machine. As shown in Fig. 1, the method includes the following steps.
101. When none of the cached data in the cache set accessed by the first virtual machine is hit, the cache controller determines the VMS (Virtual Management State, the virtual machine management operation state) of the first virtual machine.
It should be noted that, in the processor, a virtual machine (VM) register is added for each processor core. The structure of the VM register includes the VMS and a VMID (Virtual Machine Identifier). The tag bits of the cache address are then extended, and the VMID is added to the tag bits, where the cached data at the cache address belongs to the virtual machine corresponding to that VMID.
The virtual machine monitor responds to management operations on virtual machines and sets the state of the VMS according to the operation being performed. When the first virtual machine performs a startup operation or a replication operation, the virtual machine monitor sets the VMS to the first state; when the first virtual machine is not performing a startup or replication operation, the virtual machine monitor sets the VMS to the second state.
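The register-and-state bookkeeping described above can be sketched as follows. This is an illustrative assumption in Python, not the patent's hardware: the class name `VMRegister`, the method name, and the operation strings are all hypothetical.

```python
STARTUP, REPLICATION = "startup", "replication"

class VMRegister:
    """Per-core register holding the VMS state and the VMID (hypothetical sketch)."""
    def __init__(self, vmid: int):
        self.vmid = vmid  # identifies the virtual machine on this core
        self.vms = 0      # 0 = second state, 1 = first state

    def on_management_op(self, operation: str) -> None:
        # The monitor sets VMS to the first state only for startup/replication.
        self.vms = 1 if operation in (STARTUP, REPLICATION) else 0

reg = VMRegister(vmid=0b101)
reg.on_management_op(STARTUP)
assert reg.vms == 1   # first state during startup
reg.on_management_op("migrate")
assert reg.vms == 0   # second state otherwise
```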
Further, when the first virtual machine accesses the cache and none of the cached data in the cache set is hit, the memory must be accessed to fetch the corresponding data, which then replaces data in the cache; at that point the cache controller determines the state of the VMS.
It should be noted that the cache set mentioned in the embodiments of the present invention is the cache set determined from the memory-access address when a virtual machine performs an access operation.
102. When the VMS of the first virtual machine is in the first state, the cache controller replaces the least recently used cached data in the cache set that belongs to the second virtual machine.
It should be noted that the cache controller determines the state of the VMS, and when the VMS is in the second state, it replaces the least recently used cached data in the cache set.
When the VMS of the first virtual machine is in the first state, the cache controller obtains the cache miss rate of the second virtual machine and determines whether it exceeds the miss-rate threshold. If the cache miss rate of the second virtual machine exceeds the miss-rate threshold, the least recently used cached data in the cache set is replaced; if it does not, the cache controller determines the number of cache blocks in the cache set that belong to the second virtual machine.
Further, after determining the number of cache blocks in the cache set that belong to the second virtual machine, the cache controller determines whether that number is less than the quantity threshold. If the number of cache blocks of the second virtual machine is less than the quantity threshold, the least recently used cached data in the cache set is replaced; if it is not less than the quantity threshold, the least recently used cached data in the cache set that belongs to the second virtual machine is replaced.
It should be noted that the miss-rate threshold of the second virtual machine and the quantity threshold for the cache blocks in a cache set that belong to the second virtual machine are set in advance; their values can be chosen by the user according to the actual operating conditions of the virtual machines. The embodiments of the present invention do not limit how the cache controller obtains the cache miss rate of the second virtual machine; any implementation known to those skilled in the art may be used, for example, computing it from the MPKI (Misses Per 1K Instructions, the number of misses per 1000 instructions) of the second virtual machine.
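As one possible way to realize the measurement left open above, the two quantities mentioned in the embodiments can be computed directly; the helper names below are hypothetical.

```python
def miss_rate(misses: int, accesses: int) -> float:
    # cache miss rate = cache-access misses / cache accesses (step 207's definition)
    return misses / accesses

def mpki(misses: int, instructions: int) -> float:
    # Misses Per 1K Instructions, mentioned as one way to estimate the miss rate
    return misses * 1000 / instructions

assert miss_rate(5, 100) == 0.05
assert mpki(50, 10_000) == 5.0
```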
Compared with the prior art, in which virtual machine performance cannot be adequately improved during startup and replication and therefore remains low, in the embodiments of the present invention the shared last-level cache between the second virtual machine and the first virtual machine is partitioned while a virtual machine is being started or replicated, reducing the cache capacity occupied by the second virtual machine. Because the second virtual machine is insensitive to cache capacity, even a very high cache miss rate has little effect on its performance. Reducing the cache capacity occupied by the second virtual machine increases the cache capacity available to the first virtual machine, which reduces the contention for the cache among the virtual machines and solves the prior-art problem of virtual machine performance degradation caused by cache contention. By solving this technical problem, the performance of a virtual machine during startup or replication is improved.
Another embodiment of the present invention provides a method for partitioning the shared last-level cache, for use in a virtualized environment. On the virtualization platform of this embodiment, the maximum number of virtual machines that can run is N = 8. Domain0 is the privileged virtual machine running on the hypervisor (the virtual machine monitor): it controls the physical I/O resources and interacts with the guest virtual machines, and a guest virtual machine must be started via Domain0. In the shared cache of the virtual machines, the cache block size is 8 bytes, the cache is 4-way set-associative, and there are 16 sets in total. As shown in Fig. 2, the method includes the following steps.
201. Add a virtual machine VM register.
The processor contains multiple processor cores, and a VM register is added for each core. The data structure of the VM register includes the virtual machine management operation state VMS and the virtual machine identifier VMID.
It should be noted that the VMS has two states, 0 and 1, which distinguish the type of operation the guest virtual machine is performing: VMS = 0 indicates that the guest virtual machine is not performing a startup or replication operation, and VMS = 1 indicates that it is. The VMID identifies each virtual machine; the number of bits of the VMID is determined by the maximum number of virtual machines the platform can run.
For example, if the maximum number of virtual machines that can run on the virtualization platform is N, the VMID has log2(N) bits. In this embodiment of the present invention, the maximum number of runnable virtual machines is N = 8, so the VMID has log2(N) = 3 bits, and the VMID of Domain0 is 000. This embodiment does not restrict the value of N; the method of this embodiment is equally applicable when N takes other values.
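The VMID-width rule can be checked with a one-liner; `vmid_bits` is a hypothetical helper, and the ceiling is an added assumption covering values of N that are not powers of two (the text only states log2(N)).

```python
import math

def vmid_bits(max_vms: int) -> int:
    # Bits needed to name max_vms distinct virtual machines.
    # The patent states log2(N); ceil covers non-power-of-two N.
    return math.ceil(math.log2(max_vms))

assert vmid_bits(8) == 3    # this embodiment: N = 8, Domain0's VMID is 000
assert vmid_bits(16) == 4
```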
202. Extend the tag bits of the cache address.
The VMID is added to the tag bits of the cache address. Here the VMID marks which virtual machine the cache block at that cache address belongs to; when the data of a cache block is replaced, the current VMID value in the VM register is written into the VMID field of the cache address.
203. The virtual machine monitor sets the miss-rate threshold of Domain0 and the quantity threshold for the cache blocks in a cache set that belong to Domain0.
The miss-rate threshold of Domain0 is obtained from experimental results. While a guest virtual machine performs a startup or replication operation, Domain0 can maintain good performance even at a very high miss rate, but once the miss rate exceeds a certain value, the performance of Domain0 may decline. To ensure that Domain0 maintains good performance, the user can set the miss-rate threshold according to experimental results. The quantity threshold for the cache blocks in a cache set that belong to Domain0 is also set by the user; its maximum value cannot exceed the number of cache blocks in a cache set.
For example, in this embodiment of the present invention each set contains 4 cache blocks, so the maximum value of the cache-block quantity threshold is 4.
It should be noted that the cache set mentioned in this step is the cache set determined from the memory-access address when a virtual machine performs an access operation.
204. The virtual machine monitor sets the state of the virtual machine management operation state VMS.
The virtual machine monitor sets the state of the VMS according to the operation being performed by the virtual machine. When the operation the virtual machine monitor responds to is a guest virtual machine performing a startup or replication operation, it sets VMS = 1; otherwise, it sets VMS = 0.
205. The cache controller determines whether none of the cached data in the cache set accessed by the guest virtual machine is hit. If none is hit, step 206 is performed; otherwise, the procedure ends.
After the virtual machine monitor has set the state of the VMS, the virtual machine accesses the cache according to the memory-access address. If none of the cached data in the cache set is hit, the memory must be accessed to fetch the corresponding data, which then replaces data in the cache, and the cache controller determines the state of the VMS. If some cached data in the cache set is hit when the virtual machine accesses the cache, no data replacement is needed and this procedure ends.
It should be noted that, in this embodiment of the present invention, the memory-access address is 16 bits wide; its structure is shown in Fig. 3. The tag (Tag) occupies the first nine bits of the memory-access address; when the tag bits of a cache block match the tag bits of the memory-access address and the valid bit of the cache block is 1, a cache hit is indicated. The set field (Set) occupies the tenth to the thirteenth bits of the memory-access address and determines the cache set to be accessed. The offset (Offset) occupies the last three bits of the memory-access address and determines the byte to be accessed within the cache block.
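The Fig. 3 address layout can be verified mechanically. The bit widths below follow this embodiment's 16-bit address (9-bit tag, 4-bit set, 3-bit offset); the function name is a hypothetical helper.

```python
def split_address(addr: int):
    """Split a 16-bit memory-access address per the Fig. 3 layout."""
    offset = addr & 0b111            # last 3 bits: byte within the 8-byte block
    set_idx = (addr >> 3) & 0b1111   # next 4 bits: one of the 16 cache sets
    tag = addr >> 7                  # first 9 bits: the Tag
    return tag, set_idx, offset

tag, set_idx, offset = split_address(0b0001000111010011)
assert format(tag, "09b") == "000100011"  # the Tag used in this embodiment's example
assert format(set_idx, "04b") == "1010"   # cache set 1010, as in Fig. 4
assert offset == 0b011
```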
For example, Xen responds to a command of the virtual machine with VMID = 101, and the virtual machine monitor sets the state of the VMS. The virtual machine with VMID = 101 then accesses the cache. If the memory-access address of this access is 0001000111010011, the number of the cache set to be accessed, Set = 1010, can be extracted from this address, as shown in Fig. 4. The extended tags of the four cache blocks in cache set 1010 are, in order: 000000001111, 010000000011, 000000000001, and 011000100001, where the first three bits of each tag are the extended VMID, indicating which virtual machine the cache block currently belongs to, and the last nine bits are the address Tag of the cache block. The Tag of this virtual machine's memory-access address is 000100011, so none of the four cache blocks is hit; the memory must be accessed to fetch the corresponding data, which then replaces data in the cache, and step 206 is performed.
206. The cache controller determines whether the VMS state is 1. If the VMS state is 1, step 207 is performed; otherwise, the conventional LRU replacement policy is used.
It should be noted that the state of the VMS was set in step 204 and is now examined to decide which replacement policy to use for data replacement.
The replacement policies include the conventional LRU replacement policy and the improved LRU replacement policy of this embodiment of the present invention. When the VMS state is 1, the improved LRU replacement policy of this embodiment is used, i.e., steps 207 to 211 are performed; when the VMS state is 0, the conventional LRU replacement policy is used, i.e., the least recently used cached data in the cache set is replaced.
For example, in this embodiment of the present invention, when the virtual machine with VMID = 101 accesses the cache, none of the cache blocks in the accessed cache set is hit. If the VMS state at this time is 1, step 207 is performed; if the VMS state is 0, the conventional LRU replacement policy is executed. Suppose that, in cache set 1010, the 4th cache block is the least recently used cache block; from its VMID = 011 it is known that this cache block belongs to the virtual machine with VMID = 011. Since the VMS state was determined to be 0, the 4th cache block is selected, and the data in this cache block is replaced with the data that the virtual machine with VMID = 101 fetched from memory. The result is shown in Fig. 5.
207. The cache controller obtains the current cache miss rate of Domain0.
The cache miss rate equals the number of cache-access misses divided by the number of cache accesses. The embodiments of the present invention do not limit how the cache controller obtains the cache miss rate of Domain0; any implementation known to those skilled in the art may be used, for example, via the MPKI of Domain0.
208. The cache controller determines whether the cache miss rate exceeds the miss-rate threshold. If it does not exceed the threshold, step 209 is performed; otherwise, the conventional LRU replacement policy is used.
If the cache miss rate of Domain0 exceeds the miss-rate threshold at this time, the conventional LRU replacement policy is used and the least recently used cached data in the cache set is replaced; if the cache miss rate of Domain0 does not exceed the miss-rate threshold, step 209 is performed.
209. The cache controller determines the number of cache blocks in the cache set that belong to Domain0.
From the VMID extended into the cache tag bits, the cache controller can determine which virtual machine each cache block in the cache set currently belongs to, and thus the number of cache blocks that belong to Domain0.
For example, in this embodiment of the present invention, as shown in Fig. 4, the VMIDs of the cache blocks in cache set 1010 show that the 1st and 3rd cache blocks belong to Domain0, the 2nd cache block belongs to the virtual machine with VMID = 010, and the 4th cache block belongs to the virtual machine with VMID = 011; it can therefore be determined that the number of cache blocks belonging to Domain0 is 2.
210. The cache controller determines whether the number of cache blocks is less than a quantity threshold. If it is not less than the quantity threshold, step 211 is performed; otherwise the conventional LRU replacement policy is used.
That is, if the number of cache blocks belonging to Domain0 in the cache set is less than the quantity threshold, the conventional LRU replacement policy is used, i.e., the least recently used cached data in the cache set is replaced; if the number of cache blocks belonging to Domain0 in the cache set is not less than the quantity threshold, step 211 is performed.
211. The cache controller selects the least recently used cache block belonging to Domain0 in the cache set and replaces it.
For example, in the embodiment of the present invention, as shown in Figure 4, in set 1010 the 1st and 3rd cache blocks belong to Domain0. Assuming the 3rd cache block is the least recently used of the blocks belonging to Domain0, it is selected, and the data in that cache block is replaced with the data that the virtual machine with VMID=101 accessed from memory; the result is shown in Figure 6.
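Steps 207 through 211 combine into the following victim-selection sketch. The block indexing, the LRU bookkeeping, and Domain0's VMID of 000 are illustrative assumptions, not the patent's literal implementation:

```python
def choose_victim(set_vmids, lru_order, dom0_vmid,
                  dom0_miss_rate, miss_rate_threshold, count_threshold):
    """Select the block to replace on a miss when the accessing VM is in
    the first state.  lru_order lists block indices from least recently
    used to most recently used."""
    # Step 208: a high Domain0 miss rate falls back to plain LRU.
    if dom0_miss_rate > miss_rate_threshold:
        return lru_order[0]
    # Step 209: find the blocks in this set that belong to Domain0,
    # ordered least recently used first.
    dom0_blocks = [i for i in lru_order if set_vmids[i] == dom0_vmid]
    # Step 210: too few Domain0 blocks also falls back to plain LRU.
    if len(dom0_blocks) < count_threshold:
        return lru_order[0]
    # Step 211: evict Domain0's least recently used block.
    return dom0_blocks[0]

# Fig. 4 example: blocks 1 and 3 (indices 0 and 2) belong to Domain0.
set_vmids = ["000", "010", "000", "011"]
print(choose_victim(set_vmids, lru_order=[2, 1, 0, 3], dom0_vmid="000",
                    dom0_miss_rate=0.05, miss_rate_threshold=0.2,
                    count_threshold=2))  # -> 2 (the 3rd block)
```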
Compared with the prior art, in which the performance of a virtual machine cannot be well improved while it boots or is copied, leaving virtual machine performance low, in the embodiment of the present invention the last-level cache shared between Domain0 and the guest virtual machines is partitioned while a virtual machine boots or is copied, reducing the cache capacity occupied by Domain0. Because Domain0 is insensitive to cache capacity, even a very high cache miss rate has little effect on its performance. Reducing the cache capacity occupied by Domain0 therefore increases the cache capacity available to the guest virtual machines, which reduces contention for the cache among the virtual machines and solves the prior-art problem that such contention degrades virtual machine performance. Solving this technical problem improves performance while a virtual machine boots or is copied.
A further embodiment of the present invention provides a device 30 for partitioning a cache. As shown in Figure 7, the device 30 includes:
a judging unit 31, configured to judge the virtual machine management working state VMS of a first virtual machine when none of the cached data in the cache set accessed by the first virtual machine is hit; and
a replacement unit 32, configured to replace the least recently used cached data belonging to a second virtual machine in the cache set when the VMS of the first virtual machine is a first state.
Further, as shown in Figure 8, the device 30 may also include:
an adding unit 33, configured to add a virtual machine (VM) register, the data structure of the VM register including the VMS and a virtual machine identifier VMID.
Further, as shown in Figure 8, the device 30 may also include:
an expanding unit 34, configured to extend the tag bits of a cache address and add the VMID added by the adding unit 33 into the tag bits, the cached data at the cache address belonging to the virtual machine corresponding to the VMID.
Further, as shown in Figure 8, the device 30 may also include:
a setting unit 35, configured to set the VMS of the first virtual machine to the first state when the first virtual machine performs a boot operation or a copy operation, and to set the VMS of the first virtual machine to a second state when the first virtual machine performs neither a boot operation nor a copy operation.
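The VM register's data structure and the setting unit's VMS rule can be sketched as follows; the field names and the 0/1 state encoding are illustrative assumptions, not specified by the patent:

```python
from dataclasses import dataclass

@dataclass
class VMRegister:
    """Data structure of the VM register: the virtual machine
    identifier VMID plus the management working state VMS."""
    vmid: str
    vms: int  # 1 = first state, 0 = second state (encoding assumed)

def update_vms(reg: VMRegister, booting_or_copying: bool) -> None:
    """Setting unit 35: VMS is the first state while the VM performs a
    boot or copy operation, and the second state otherwise."""
    reg.vms = 1 if booting_or_copying else 0

reg = VMRegister(vmid="101", vms=0)
update_vms(reg, booting_or_copying=True)
print(reg.vms)  # -> 1
```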
Further, the replacement unit 32 is also configured to:
replace the least recently used cached data in the cache set when the VMS of the first virtual machine is the second state.
Further, the judging unit 31 is also configured to judge whether the cache miss rate of the second virtual machine is greater than a miss rate threshold;
the replacement unit 32 is also configured to replace the least recently used cached data in the cache set when the cache miss rate of the second virtual machine is greater than the miss rate threshold; and, as shown in Figure 8, the device 30 may also include:
a determination unit 36, configured to determine the number of cache blocks in the cache set that belong to the second virtual machine when the cache miss rate of the second virtual machine is not greater than the miss rate threshold.
Further, the judging unit 31 is also configured to judge whether the number of cache blocks of the second virtual machine is less than the quantity threshold determined by the determination unit 36;
the replacement unit 32 is also configured to replace the least recently used cached data in the cache set when the number of cache blocks of the second virtual machine is less than the quantity threshold determined by the determination unit 36; and
the replacement unit 32 is also configured to replace the least recently used cached data belonging to the second virtual machine in the cache set when the number of cache blocks of the second virtual machine is not less than the quantity threshold determined by the determination unit 36.
Compared with the prior art, in which the performance of a virtual machine cannot be well improved while it boots or is copied, leaving virtual machine performance low, in the embodiment of the present invention the last-level cache shared between the second virtual machine and the first virtual machine is partitioned while a virtual machine boots or is copied, reducing the cache capacity occupied by the second virtual machine. Because the second virtual machine is insensitive to cache capacity, even a very high cache miss rate has little effect on its performance. Reducing the cache capacity occupied by the second virtual machine therefore increases the cache capacity available to the first virtual machine, which reduces contention for the cache among the virtual machines and solves the prior-art problem that such contention degrades virtual machine performance. Solving this technical problem improves performance while a virtual machine boots or is copied.
A further embodiment of the present invention provides a device 40 for partitioning a cache. As shown in Figure 9, the device 40 includes:
a processor 41, configured to judge the virtual machine management working state VMS of a first virtual machine when none of the cached data in the cache set accessed by the first virtual machine is hit, and to replace the least recently used cached data belonging to a second virtual machine in the cache set when the VMS of the first virtual machine is a first state.
Further, the processor 41 is also configured to add a virtual machine (VM) register, the data structure of the VM register including the VMS and a virtual machine identifier VMID.
Further, the processor 41 is also configured to extend the tag bits of a cache address and add the VMID into the tag bits, the cached data at the cache address belonging to the virtual machine corresponding to the VMID.
Further, the processor 41 is also configured to set the VMS of the first virtual machine to the first state when the first virtual machine performs a boot operation or a copy operation, and to set the VMS of the first virtual machine to a second state when the first virtual machine performs neither a boot operation nor a copy operation.
Further, the processor 41 is also configured to replace the least recently used cached data in the cache set when the VMS of the first virtual machine is the second state.
Optionally, the processor 41 is also configured to judge whether the cache miss rate of the second virtual machine is greater than a miss rate threshold; to replace the least recently used cached data in the cache set when the cache miss rate of the second virtual machine is greater than the miss rate threshold; and to determine the number of cache blocks in the cache set that belong to the second virtual machine when the cache miss rate of the second virtual machine is not greater than the miss rate threshold.
Optionally, the processor 41 is also configured to judge whether the number of cache blocks of the second virtual machine is less than a quantity threshold; to replace the least recently used cached data in the cache set if the number of cache blocks of the second virtual machine is less than the quantity threshold; and to replace the least recently used cached data belonging to the second virtual machine in the cache set if the number of cache blocks of the second virtual machine is not less than the quantity threshold.
Compared with the prior art, in which the performance of a virtual machine cannot be well improved while it boots or is copied, leaving virtual machine performance low, in the embodiment of the present invention the last-level cache shared between the second virtual machine and the first virtual machine is partitioned while a virtual machine boots or is copied, reducing the cache capacity occupied by the second virtual machine. Because the second virtual machine is insensitive to cache capacity, even a very high cache miss rate has little effect on its performance. Reducing the cache capacity occupied by the second virtual machine therefore increases the cache capacity available to the first virtual machine, which reduces contention for the cache among the virtual machines and solves the prior-art problem that such contention degrades virtual machine performance. Solving this technical problem improves performance while a virtual machine boots or is copied.
The device for partitioning a cache provided in the embodiments of the present invention can implement the method embodiments described above; for the specific functions, refer to the descriptions in the method embodiments, which are not repeated here. The method and device for partitioning a cache provided in the embodiments of the present invention are applicable to, but not limited to, virtualized environments.
It should be noted that the cache mentioned in the embodiments of the present invention is applicable to, but not limited to, a last-level shared cache.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, the device embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the descriptions in the method embodiments.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The above are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution readily conceivable by those familiar with the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (14)
1. A method for partitioning a cache, characterized by comprising:
when none of the cached data in a cache set accessed by a first virtual machine is hit, judging a virtual machine management working state VMS of the first virtual machine; and
when the VMS of the first virtual machine is a first state, replacing the least recently used cached data belonging to a second virtual machine in the cache set, wherein the first state indicates the state in which the first virtual machine performs a boot operation or a copy operation, and the second virtual machine is configured to control physical input/output (I/O) resources of the virtual environment, to interact with the first virtual machine, and to boot the first virtual machine.
2. The method according to claim 1, characterized in that before the judging of the virtual machine management working state VMS of the first virtual machine, the method further comprises:
adding a virtual machine (VM) register, the data structure of the VM register comprising the VMS and a virtual machine identifier VMID.
3. The method according to claim 2, characterized in that after the adding of the virtual machine VM register, the method further comprises:
extending the tag bits of a cache address and adding the VMID into the tag bits, the cached data at the cache address belonging to the virtual machine corresponding to the VMID.
4. The method according to claim 1, characterized in that before the judging of the virtual machine management working state VMS of the first virtual machine, the method further comprises:
setting the VMS of the first virtual machine to the first state when the first virtual machine performs a boot operation or a copy operation; and setting the VMS of the first virtual machine to a second state when the first virtual machine performs neither a boot operation nor a copy operation.
5. The method according to claim 1 or 4, characterized in that the method further comprises:
when the VMS of the first virtual machine is the second state, replacing the least recently used cached data in the cache set.
6. The method according to claim 1, characterized in that before the replacing, when the VMS of the first virtual machine is the first state, of the least recently used cached data belonging to the second virtual machine in the cache set, the method further comprises:
judging whether the cache miss rate of the second virtual machine is greater than a miss rate threshold;
if the cache miss rate of the second virtual machine is greater than the miss rate threshold, replacing the least recently used cached data in the cache set; and
if the cache miss rate of the second virtual machine is not greater than the miss rate threshold, determining the number of cache blocks in the cache set that belong to the second virtual machine.
7. The method according to claim 6, characterized in that after the determining of the number of cache blocks in the cache set that belong to the second virtual machine, the method further comprises:
judging whether the number of cache blocks of the second virtual machine is less than a quantity threshold;
if the number of cache blocks of the second virtual machine is less than the quantity threshold, replacing the least recently used cached data in the cache set; and
if the number of cache blocks of the second virtual machine is not less than the quantity threshold, replacing the least recently used cached data belonging to the second virtual machine in the cache set.
8. A device for partitioning a cache, characterized by comprising:
a judging unit, configured to judge a virtual machine management working state VMS of a first virtual machine when none of the cached data in a cache set accessed by the first virtual machine is hit; and
a replacement unit, configured to replace the least recently used cached data belonging to a second virtual machine in the cache set when the VMS of the first virtual machine is a first state, wherein the first state indicates the state in which the first virtual machine performs a boot operation or a copy operation, and the second virtual machine is configured to control physical input/output (I/O) resources of the virtual environment, to interact with the first virtual machine, and to boot the first virtual machine.
9. The device according to claim 8, characterized in that the device further comprises:
an adding unit, configured to add a virtual machine (VM) register, the data structure of the VM register comprising the VMS and a virtual machine identifier VMID.
10. The device according to claim 9, characterized in that the device further comprises:
an expanding unit, configured to extend the tag bits of a cache address and add the VMID added by the adding unit into the tag bits, the cached data at the cache address belonging to the virtual machine corresponding to the VMID.
11. The device according to claim 8, characterized in that the device further comprises:
a setting unit, configured to set the VMS of the first virtual machine to the first state when the first virtual machine performs a boot operation or a copy operation, and to set the VMS of the first virtual machine to a second state when the first virtual machine performs neither a boot operation nor a copy operation.
12. The device according to claim 8 or 11, characterized in that the replacement unit is also configured to:
replace the least recently used cached data in the cache set when the VMS of the first virtual machine is the second state.
13. The device according to claim 8, characterized in that the judging unit is also configured to judge whether the cache miss rate of the second virtual machine is greater than a miss rate threshold;
the replacement unit is also configured to replace the least recently used cached data in the cache set when the cache miss rate of the second virtual machine is greater than the miss rate threshold; and the device further comprises:
a determination unit, configured to determine the number of cache blocks in the cache set that belong to the second virtual machine when the cache miss rate of the second virtual machine is not greater than the miss rate threshold.
14. The device according to claim 13, characterized in that the judging unit is also configured to judge whether the number of cache blocks of the second virtual machine is less than the quantity threshold determined by the determination unit;
the replacement unit is also configured to replace the least recently used cached data in the cache set when the number of cache blocks of the second virtual machine is less than the quantity threshold determined by the determination unit; and
the replacement unit is also configured to replace the least recently used cached data belonging to the second virtual machine in the cache set when the number of cache blocks of the second virtual machine is not less than the quantity threshold determined by the determination unit.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310422795.0A CN104461928B (en) | 2013-09-16 | 2013-09-16 | Divide the method and device of cache |
PCT/CN2014/086341 WO2015035928A1 (en) | 2013-09-16 | 2014-09-12 | Method and apparatus for dividing cache |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310422795.0A CN104461928B (en) | 2013-09-16 | 2013-09-16 | Divide the method and device of cache |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104461928A CN104461928A (en) | 2015-03-25 |
CN104461928B true CN104461928B (en) | 2018-11-16 |
Family
ID=52665083
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310422795.0A Active CN104461928B (en) | 2013-09-16 | 2013-09-16 | Divide the method and device of cache |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN104461928B (en) |
WO (1) | WO2015035928A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106484539A (en) * | 2016-10-13 | 2017-03-08 | 东北大学 | A kind of determination method of processor cache characteristic |
CN108228351B (en) * | 2017-12-28 | 2021-07-27 | 上海交通大学 | GPU performance balance scheduling method, storage medium and electronic terminal |
CN111880726B (en) * | 2020-06-19 | 2022-05-10 | 浙江工商大学 | Method for improving CNFET cache performance |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1495618A (en) * | 2002-09-20 | 2004-05-12 | Intel Corporation | Cache sharing for a chip multiprocessor or multiprocessing system |
US7856633B1 (en) * | 2000-03-24 | 2010-12-21 | Intel Corporation | LRU cache replacement for a partitioned set associative cache |
CN102999444A (en) * | 2012-11-13 | 2013-03-27 | 华为技术有限公司 | Method and device for replacing data in caching module |
CN103218316A (en) * | 2012-02-21 | 2013-07-24 | Microsoft Corporation | Cache employing multiple page replacement algorithms |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6587937B1 (en) * | 2000-03-31 | 2003-07-01 | Rockwell Collins, Inc. | Multiple virtual machine system with efficient cache memory design |
CN101571836A (en) * | 2008-04-29 | 2009-11-04 | 国际商业机器公司 | Method and system for replacing cache blocks |
US8745618B2 (en) * | 2009-08-25 | 2014-06-03 | International Business Machines Corporation | Cache partitioning with a partition table to effect allocation of ways and rows of the cache to virtual machine in virtualized environments |
US8990582B2 (en) * | 2010-05-27 | 2015-03-24 | Cisco Technology, Inc. | Virtual machine memory compartmentalization in multi-core architectures |
-
2013
- 2013-09-16 CN CN201310422795.0A patent/CN104461928B/en active Active
-
2014
- 2014-09-12 WO PCT/CN2014/086341 patent/WO2015035928A1/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7856633B1 (en) * | 2000-03-24 | 2010-12-21 | Intel Corporation | LRU cache replacement for a partitioned set associative cache |
CN1495618A (en) * | 2002-09-20 | 2004-05-12 | Intel Corporation | Cache sharing for a chip multiprocessor or multiprocessing system |
CN103218316A (en) * | 2012-02-21 | 2013-07-24 | Microsoft Corporation | Cache employing multiple page replacement algorithms |
CN102999444A (en) * | 2012-11-13 | 2013-03-27 | 华为技术有限公司 | Method and device for replacing data in caching module |
Non-Patent Citations (1)
Title |
---|
A Novel Dynamic Partitioning Mechanism for Shared Cache; Ni Yalu, Zhou Xiaofang; Computer Engineering; 2011-11-30; Vol. 37, No. 22; Section 2.4 *
Also Published As
Publication number | Publication date |
---|---|
WO2015035928A1 (en) | 2015-03-19 |
CN104461928A (en) | 2015-03-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9824011B2 (en) | Method and apparatus for processing data and computer system | |
US9811465B2 (en) | Computer system and cache control method | |
US8407704B2 (en) | Multi-level memory architecture using data structures for storing access rights and performing address translation | |
US20170075818A1 (en) | Memory management method and device | |
US10223026B2 (en) | Consistent and efficient mirroring of nonvolatile memory state in virtualized environments where dirty bit of page table entries in non-volatile memory are not cleared until pages in non-volatile memory are remotely mirrored | |
US11340945B2 (en) | Memory congestion aware NUMA management | |
CN108369507A (en) | For using the method and apparatus for handling process instruction in memory | |
Magenheimer et al. | Transcendent memory and linux | |
CN103052945B (en) | The method of managing computer memory and data storage device | |
US20150095576A1 (en) | Consistent and efficient mirroring of nonvolatile memory state in virtualized environments | |
US9740627B2 (en) | Placement engine for a block device | |
US9727465B2 (en) | Self-disabling working set cache | |
WO2013101104A1 (en) | Sharing tlb mappings between contexts | |
CN104346284A (en) | Memory management method and memory management equipment | |
CN112799977B (en) | Real-time protection method and device for cache partition and cache access of computer | |
TW201633145A (en) | Managing reuse information for memory pages | |
CN104461928B (en) | Divide the method and device of cache | |
WO2023216450A1 (en) | Method and apparatus for managing tlb cache in virtualization platform | |
CN109947666A (en) | Credible performing environment caching partition method and device, electronic equipment and storage medium | |
US9292452B2 (en) | Identification of page sharing opportunities within large pages | |
TWI648625B (en) | Managing address-independent page attributes | |
JP2006155272A (en) | Control method and program for virtual computer | |
CN107301021A (en) | It is a kind of that the method and apparatus accelerated to LUN are cached using SSD | |
Đorđević et al. | Performance issues in cloud computing: KVM hypervisor’s cache modes evaluation | |
KR101303079B1 (en) | Apparatus and method for controlling cache coherence in virtualized environment based on multi-core |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||