CN102999444A - Method and device for replacing data in caching module - Google Patents
- Publication number
- CN102999444A CN102999444A CN2012104534598A CN201210453459A CN102999444A CN 102999444 A CN102999444 A CN 102999444A CN 2012104534598 A CN2012104534598 A CN 2012104534598A CN 201210453459 A CN201210453459 A CN 201210453459A CN 102999444 A CN102999444 A CN 102999444A
- Authority
- CN
- China
- Prior art keywords
- quota
- cache
- mirror image
- virtual machine
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/15—Use in a specific computing environment
- G06F2212/151—Emulated environment, e.g. virtual machine
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The invention provides a method and a device for replacing data in a caching module. The method comprises the following steps: receiving a new data block, or information indicating that a new data block is to be written into the caching module; when the entire space of the caching module is occupied, selecting a cache block from the caching module according to a quota as a candidate replacement block, wherein a cache block is a data block already stored in the caching module; and replacing the candidate replacement block with the new data block. Because the candidate replacement block is selected from the caching module according to a quota, the number of cache blocks that any one image or virtual machine may hold in the caching module is limited, which prevents a highly active image or virtual machine from occupying a disproportionately large share of the caching module's space and thereby solves the problem of unbalanced cache space among images or virtual machines with different activity levels.
Description
Technical field
The present invention relates to cache management technology in cloud computing, and in particular to a method and a device for replacing data in a cache module.
Background technology
Desktop cloud separates the personal computer desktop environment from the physical machine by means of cloud computing, turning the desktop into a service that can be provided externally. In a desktop cloud system virtualized with the Xen virtual infrastructure, the privileged domain in Xen (Domain 0) virtualizes resources such as the CPU, input and output (IO) bus and storage for the virtual machine domains (Domain U) on the server. At present, many optimization software packages implement a cache (Cache) module for Domain U inside Domain 0. This Cache module caches the hot data of the image used by each Domain U, in order to reduce the pressure, measured in input and output operations per second (IOPS), that the IO generated by each Domain U accessing image data on remote storage places on that remote storage. However, because the capacity of the Cache module is limited — often only a few GB, compared with images that easily reach tens or even hundreds of GB of data — newly arriving data frequently evicts old hot data already buffered in the Cache module. This lowers the hit rate of each Domain U in the Cache module, severely degrades the performance of the images or virtual machines, and thus severely degrades the user experience.
To address the reduced hit rate of each Domain U in the Cache module, a cache replacement algorithm is currently used to maintain a replacement ordering over the cache blocks in the Cache module. When the management module of the Cache module receives a new data block, or information that a new data block is to be written, the cache unit occupied by the cache block that is first in line to be replaced is used to store the new data block, and that cache block is evicted. For example, the Least Recently Used (LRU) cache replacement algorithm can be used to manage the data blocks in the Cache module. The LRU algorithm keeps the most recently accessed data in the Cache module as far as possible and evicts the data that has gone longest without access, thereby improving the cache hit rate of hot data in the Cache module, reducing IO response time, and improving the user experience.
However, existing cache replacement algorithms treat all IO accesses to the Cache identically, managing every IO access event with the same policy. When the processes of multiple images run concurrently, each process competes for Cache space to store its own data. Yet the memory-access behavior of each process differs: some processes tend to use only their own share of the Cache, while others grab the Cache used by other processes and interfere with their execution. When one or a few images are comparatively active, the data blocks they read through IO accesses may occupy a disproportionately large share of the Cache space; under the LRU replacement algorithm, for instance, this causes the data of images that are temporarily less active to be evicted from the Cache module in large quantities. After a number of replacements, the data blocks an image has cached in the Cache module are mostly its hot data. If that image is inactive for some period, and the frequent IO of other, active images causes its hot data to be evicted from the Cache, the image's cache hit rate drops sharply and its IO response delay increases, reducing its IO handling performance and harming the user experience.
Summary of the invention
The embodiments of the invention provide a method and a device for replacing data in a cache module, used to solve the problem of unbalanced cache space among images or virtual machines with different activity levels.
A first aspect of the embodiments of the invention provides a method for replacing data in a cache module, comprising:
receiving a new data block, or information that a new data block is to be written into the cache module;
when the space of the cache module is fully occupied, selecting a cache block from the cache module according to a quota as a candidate replacement block, wherein a cache block is a data block stored in the cache module; and
replacing the candidate replacement block with the new data block.
Another aspect of the embodiments of the invention provides a device for replacing data in a cache module, comprising:
a data receiving module, configured to receive a new data block or information that a new data block is to be written into the cache module;
a replacement selection module, configured to select, when the space of the cache module is fully occupied, a cache block from the cache module according to a quota as a candidate replacement block, wherein a cache block is a data block stored in the cache module; and
a replacement module, configured to replace the candidate replacement block with the new data block.
In the method and device for replacing data in a cache module provided by the embodiments of the invention, a cache block is selected from the cache module according to a quota as the candidate replacement block. This limits the number of cache blocks that an image or virtual machine may hold in the cache module and prevents an active image or virtual machine from occupying a disproportionately large share of the Cache module's space, thereby solving the problem of unbalanced cache space among images or virtual machines with different activity levels.
Description of drawings
Fig. 1 is a flowchart of a method for replacing data in a cache module according to an embodiment of the invention;
Fig. 2 is a schematic diagram of the cache queue in a method for replacing data in a cache module according to an embodiment of the invention;
Fig. 3 is a schematic structural diagram of a device for replacing data in a cache module according to an embodiment of the invention;
Fig. 4 is a schematic diagram of the structure and application of another device for replacing data in a cache module according to an embodiment of the invention;
Fig. 5 is a schematic diagram of an application scenario of a device for replacing data in a cache module according to an embodiment of the invention;
Fig. 6 is a processing flowchart of a device for replacing data in a cache module according to an embodiment of the invention.
Embodiment
Fig. 1 is a flowchart of a method for replacing data in a cache module according to an embodiment of the invention. As shown in Fig. 1, the method comprises:
Step 11: receive a new data block, or information that a new data block is to be written into the cache module.
For example, the management module of the cache module receives a new data block belonging to a certain image or virtual machine, or receives information that a new data block is to be written into the cache module, and thereby learns that a new data block is about to be written. From this information it can also determine which image or virtual machine the new data block about to be written comes from.
Step 12: when the space of the cache module is fully occupied, select a cache block from the cache module according to a quota as the candidate replacement block. Here a cache block is a data block stored in the cache module; that is, every data block stored in the cache module is called a cache block.
For example, when the number of cache blocks belonging to the image or virtual machine of the new data block has reached the high quota H, the cache block of that image or virtual machine that is first in line to be replaced according to the cache replacement algorithm is taken as the candidate replacement block. Here, the high quota is H = N × b / I, where N is the number of cache units in the cache module, a cache unit being the space occupied by one cache block; b is the quota amplification coefficient, b > 1; and I is the number of images or virtual machines served by the cache module. On top of the cache replacement algorithm, this limits the number of cache blocks an image or virtual machine may hold in the cache module — in other words, the number of cache units it may occupy — and prevents an active image or virtual machine from occupying a disproportionately large share of the Cache module's space, evicting the hot data cached by other images or virtual machines and degrading their performance.
When the number of cache blocks belonging to the image or virtual machine of the new data block has not reached the high quota H, the cache block that is first in line to be replaced according to the cache replacement algorithm, among the cache blocks of images or virtual machines within the quota range, is taken as the candidate replacement block. An image or virtual machine within the quota range is one whose number of occupied cache units in the cache module is greater than the minimum quota L and less than the high quota H. Here H = N × b / I as above, and the minimum quota is L = N × s / I, where N is the number of cache units in the cache module as above, a cache unit is the space occupied by one cache block, b is the quota amplification coefficient, b > 1, s is the quota reduction coefficient, 0 < s < 1, and I is the number of images or virtual machines served by the cache module. On top of the cache replacement algorithm, this prevents the cache blocks of any image or virtual machine occupying fewer cache units than the minimum quota from being evicted from the Cache module, ensuring that the hot data cached by inactive images or virtual machines is not easily evicted, effectively balancing the cache space occupied by each image or virtual machine, and safeguarding the performance of each.
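The selection rule above can be sketched as follows. This is a minimal illustration built on an LRU queue, not the patented implementation; the class and attribute names are assumptions, and the eviction scan assumes at least one block within the quota range exists.

```python
from collections import OrderedDict

class QuotaLRUCache:
    """Sketch of quota-managed LRU replacement (illustrative, not the patent's code)."""

    def __init__(self, capacity, images, b=1.2, s=0.12):
        self.capacity = capacity           # N: number of cache units
        self.high = capacity * b / images  # H = N * b / I
        self.low = capacity * s / images   # L = N * s / I
        self.queue = OrderedDict()         # block key -> owning image; front = LRU end
        self.counts = {}                   # cache units held per image

    def _pick_victim(self, owner):
        # Writer already at its high quota H: evict its own LRU-end block.
        if self.counts.get(owner, 0) >= self.high:
            return next(k for k, o in self.queue.items() if o == owner)
        # Otherwise scan from the LRU end, skipping images at or below L.
        return next(k for k, o in self.queue.items() if self.counts[o] > self.low)

    def put(self, key, owner):
        if key in self.queue:
            self.queue.move_to_end(key)    # hit: refresh to the MRU end
            return
        if len(self.queue) >= self.capacity:
            victim = self._pick_victim(owner)
            self.counts[self.queue.pop(victim)] -= 1
        self.queue[key] = owner            # insert at the MRU end
        self.counts[owner] = self.counts.get(owner, 0) + 1
```

With a full cache, an image at its high quota can only displace its own least-recent block, while an image at or below the minimum quota never loses blocks to others.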
The cache replacement algorithm may be the LRU cache replacement algorithm, the Most Recently Used (MRU) cache replacement algorithm, or another efficient cache replacement algorithm.
For the LRU cache replacement algorithm, the cache block first in line to be replaced is the one pointed to by the LRU end of the LRU replacement queue; for the MRU cache replacement algorithm, it is the one pointed to by the MRU end of the MRU replacement queue. The cache block first in line to be replaced under other efficient cache replacement algorithms is determined similarly and is not repeated here.
Step 13: replace the candidate replacement block with the new data block.
Optionally, when the image or virtual machine of the new data block differs from the images or virtual machines of the cache blocks already in the cache module, the method further comprises, before selecting a cache block from the cache module according to the quota as the candidate replacement block:
calculating the high quota H and the minimum quota L of each image or virtual machine, including the image or virtual machine of the new data block, where H and L are as defined above.
By recalculating the quotas of each image or virtual machine served by the cache module after a new image or virtual machine is added, the cache quota is adjusted dynamically, so that after the number of images or virtual machines served by the cache module increases, each of them still performs cache replacement according to the new quotas, the cache space of each image or virtual machine remains balanced, and performance remains protected.
Optionally, when the image or virtual machine of the new data block differs from the images or virtual machines of the cache blocks already in the cache module, the method further comprises, before selecting a cache block from the cache module according to the quota as the candidate replacement block:
receiving an order indicating that an image or virtual machine is to be shut down; and
clearing, according to the order, the cache blocks of the image or virtual machine to be shut down, and calculating the high quota H and the minimum quota L of the remaining images or virtual machines, where H and L are as defined above. By recalculating the quotas of each image or virtual machine served by the cache module after one or more images or virtual machines are shut down, the cache quota is adjusted dynamically, so that the remaining images or virtual machines still perform cache replacement according to the new quotas, the cache space of each image or virtual machine remains balanced, and performance remains protected.
The embodiments above address the problem that, when one or a few images or virtual machines are comparatively active, they may occupy a disproportionately large share of the Cache module's space and cause the hot data cached by other images or virtual machines to be evicted. By applying cache quota management — that is, selecting the candidate replacement block according to a quota — the performance imbalance among images or virtual machines that arises in this situation is alleviated. Specifically, a quota range is set for each image or virtual machine, limiting the number of cache blocks each may hold in the Cache module, so that the hot data cached by inactive images or virtual machines is not easily evicted; this effectively balances the cache space occupied by each image or virtual machine and safeguards the performance of each.
The method for replacing data in a cache module is described below using quota-managed cache replacement across multiple images as an example.
Suppose the total space of the Cache module is 9 cache units, 3 images are being served, the cache replacement algorithm is the LRU cache replacement algorithm, a fixed quota scheme is used, and each image has a maximum quota of 4 and a minimum quota of 2.
When a data block is written into the Cache module and free cache units are available, a cache unit for storing the new data block is obtained directly from the free units and the new data block is written into it. The newly written data block becomes a new cache block, and its identification information is inserted at the MRU end of the LRU replacement queue (hereinafter the LRU queue), as shown in Fig. 2. The MRU end points to the newest or most recently accessed cache block in the Cache module. When the Cache module's space is full — that is, no cache unit is free — and quota management is not applied, the LRU cache replacement algorithm, upon a new data block arriving, removes the cache block information at the LRU end of the LRU queue and correspondingly deletes the cache block in the Cache module that the LRU end points to. The LRU end points to the cache block that has gone longest without access. The space freed by deleting that cache block is used for writing the new data block; this process constitutes a cache replacement.
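The plain LRU behavior just described — insert at the MRU end, evict from the LRU end regardless of which image owns the block — can be sketched in a few lines. This is a minimal illustration under the assumption that the queue is an `OrderedDict` whose front is the LRU end; the function name is illustrative.

```python
from collections import OrderedDict

def lru_put(queue, capacity, key):
    """Plain LRU replacement without quota management (illustrative sketch).

    Returns the evicted key, or None if nothing was evicted.
    """
    if key in queue:
        queue.move_to_end(key)                   # hit: refresh to the MRU end
        return None
    evicted = None
    if len(queue) >= capacity:
        evicted, _ = queue.popitem(last=False)   # evict the LRU-end block
    queue[key] = True                            # insert at the MRU end
    return evicted
```

Note that the eviction choice here ignores block ownership entirely; the quota-managed scheme of the embodiments differs precisely in that scan.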
While the Cache module's space is not fully occupied — that is, free cache units remain — new data blocks are written directly as described above; once it is full, cache replacement proceeds under the scheme described above. Suppose that after a period of operation, the Cache module holds 4 cache blocks of image one, 2 cache blocks of image two and 3 cache blocks of image three, each occupying one cache unit, so that all 9 cache units — the entire space of the Cache module — are occupied. The order of these cache blocks in the LRU queue is as shown in Fig. 2: the LRU end points to a cache block of image two, and the MRU end to a cache block of image one.
When a new data block is again written into the Cache module:
If the new data block comes from image one: since the number of image one's cache blocks in the Cache module has reached the high quota of 4, the cache block of image two pointed to by the LRU end of the LRU queue is not evicted, even though it sits at the LRU end. Instead, starting from the LRU end of the LRU queue, the first cache block belonging to image one is found and replaced with the new data block — that is, the cache block of image one in Fig. 2 pointed to by the first image-one entry counting from the LRU end.
If the new data block comes from image three: since the number of cache units image three occupies in the Cache module has not reached the high quota of 4, the candidate replacement block is selected in order from the LRU end towards the MRU end. The cache block pointed to by the LRU end of the LRU queue belongs to image two, but image two occupies only 2 cache units in the Cache module, which is not above the minimum quota of 2, so the next cache block is examined. The next cache block belongs to image one, which occupies more cache units than the minimum quota of 2; therefore that cache block of image one is replaced with the new data block.
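Both cases can be checked with a direct scan of the queue from the LRU end, using H = 4 and L = 2 as in this example. The helper below is an illustrative sketch, and the concrete queue ordering used in the usage note is an assumption consistent with the text (LRU end holds an image-two block, followed by an image-one block).

```python
def pick_victim(queue, counts, writer, high=4, low=2):
    """queue: (block, owner) pairs ordered from the LRU end to the MRU end.

    Illustrative sketch of the quota-based victim scan; assumes a block
    within the quota range exists somewhere in the queue.
    """
    if counts[writer] >= high:
        # Writer at its high quota H: evict its own least-recent block.
        return next(b for b, o in queue if o == writer)
    # Otherwise skip images at or below the minimum quota L.
    return next(b for b, o in queue if counts[o] > low)
```

For a queue beginning `[("b1", "img2"), ("b2", "img1"), ...]` with counts `{"img1": 4, "img2": 2, "img3": 3}`, a write from image one picks `"b2"` (image one's own LRU-end block), and a write from image three also picks `"b2"`, because image two's block at the LRU end is protected by the minimum quota.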
As the example shows, quota management ensures that the number of cache units each image occupies in the Cache module is neither too large nor too small, thereby balancing the cache unit counts across the images.
The implementation of the data replacement method in the Cache module is described below using the quota settings in a desktop cloud Virtual Desktop Infrastructure (VDI) scenario as an example.
Suppose the Cache module is set up in the memory of Dom 0 in a Xen virtualized environment, its total capacity is 4 GB, and each cache unit is 4 KB (so the total number of cache units N in the Cache module is 1M); the quota amplification coefficient b is 120% and the quota reduction coefficient s is 12%.
When 4 images are running in the Domain U domains, the high quota of each image is 1M × 120% / 4 = 307.2K cache units, and the minimum quota is 1M × 12% / 4 = 30.72K cache units.
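The arithmetic above can be verified directly from the formulas H = N × b / I and L = N × s / I (a straightforward check, using K = 1024):

```python
# 4 GB total capacity with 4 KB cache units gives N = 1M cache units.
K = 1024
N = (4 * K * K * K) // (4 * K)   # 4 GB / 4 KB = 1,048,576 units
b, s, I = 1.2, 0.12, 4
high = N * b / I                  # H = N * b / I
low = N * s / I                   # L = N * s / I
print(round(high / K, 2), round(low / K, 2))  # 307.2 30.72 (in K units)
```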
While the Cache module's space is not fully occupied, data blocks are written directly; once it is full, cache replacement is carried out using a cache replacement algorithm such as the LRU replacement algorithm.
Suppose that after a period of operation each image occupies 1M / 4 = 256K cache units, and the Cache module holds the hot data of each image. At this point, even if a certain image is comparatively active, it can occupy at most 307.2K cache units. Once an image reaches this high quota, if another new data block from that image is to be added to the Cache module, the replacement targets that image's own LRU-end cache block in the LRU queue, rather than the LRU-end cache block of the whole LRU queue. Here, an image's LRU-end cache block is the cache block in the Cache module pointed to by the first entry belonging to that image when scanning the LRU queue from the LRU end; the LRU-end cache block of the LRU queue is the cache block in the Cache module pointed to by the entry at the LRU end of the queue.
Conversely, even if a certain image is comparatively idle, it is guaranteed at least 30.72K cache units in which to keep its hot data, and the cache blocks in those units cannot be evicted from the Cache module by the data blocks of other images, safeguarding that image's performance.
When an image is added or deleted, the quota range changes dynamically. Taking the addition of one image as an example, the new number of images is 5, the new high quota is 1M × 120% / 5 = 245.76K, and the new minimum quota is 1M × 12% / 5 = 24.576K; from this point cache replacement is performed with the new high and minimum quotas. The replacement method is similar to the embodiment shown in Fig. 2.
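Since H and L are pure functions of the image count I, the dynamic adjustment amounts to recomputing them whenever I changes. A small sketch (function and variable names are illustrative):

```python
def quotas(n_units, n_images, b=1.2, s=0.12):
    """Recompute the quota range for the current image count."""
    high = n_units * b / n_images   # H = N * b / I
    low = n_units * s / n_images    # L = N * s / I
    return high, low

N = 1024 * 1024                     # 1M cache units (4 GB / 4 KB)
h5, l5 = quotas(N, 5)               # after adding a fifth image
print(round(h5 / 1024, 2))          # 245.76 (K units)
print(round(l5 / 1024, 3))          # 24.576 (K units)
```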
The embodiments above perform cache quota management per image; similarly, the technical scheme they provide can be extended to perform cache quota management per virtual machine, where a virtual machine uses multiple images.
In the embodiments above, the replacement algorithm may also be the MRU cache replacement algorithm, or another efficient cache replacement algorithm.
One of ordinary skill in the art will appreciate that all or part of the steps of the method embodiments above can be carried out by hardware under the direction of program instructions. The program may be stored in a computer-readable storage medium, and when executed it performs the steps of the method embodiments; the storage medium includes media that can store program code, such as ROM, RAM, magnetic disk or optical disc.
Fig. 3 is a schematic structural diagram of a device for replacing data in a cache module according to an embodiment of the invention. The device of this embodiment implements the method of the embodiment shown in Fig. 1. As shown in Fig. 3, the device comprises a data receiving module 31, a replacement selection module 32 and a replacement module 33.
The data receiving module 31 receives a new data block or information that a new data block is to be written into the cache module. The replacement selection module 32 selects, when the space of the cache module is fully occupied, a cache block from the cache module according to a quota as the candidate replacement block, a cache block being a data block stored in the cache module. The replacement module 33 replaces the candidate replacement block with the new data block.
Optionally, the replacement selection module 32 is specifically configured to: when the number of cache blocks of the image or virtual machine of the new data block has reached the high quota H, take as the candidate replacement block, according to the cache replacement algorithm, the cache block of that image or virtual machine that is first in line to be replaced. Here H = N × b / I, where N is the number of cache units in the cache module, a cache unit is the space occupied by one cache block, b is the quota amplification coefficient, b > 1, and I is the number of images or virtual machines served by the cache module.
Optionally, the replacement selection module 32 is specifically configured to: when the number of cache blocks of the image or virtual machine of the new data block has not reached the high quota H, take as the candidate replacement block, according to the cache replacement algorithm, the cache block first in line to be replaced among the cache blocks of images or virtual machines within the quota range. An image or virtual machine within the quota range is one whose number of occupied cache units in the cache module is greater than the minimum quota L and less than the high quota H. Here H = N × b / I and L = N × s / I, where N is the number of cache units in the cache module, a cache unit is the space occupied by one cache block, b is the quota amplification coefficient, b > 1, s is the quota reduction coefficient, 0 < s < 1, and I is the number of images or virtual machines served by the cache module.
Optionally, the device for replacing data in a cache module provided by the embodiments of the invention further comprises a first quota setting module, configured to calculate, when the image or virtual machine of the new data block differs from the images or virtual machines of the cache blocks in the cache module, and before the replacement selection module selects a cache block according to the quota as the candidate replacement block, the high quota H and the minimum quota L of each image or virtual machine, including the image or virtual machine of the new data block. Here H = N × b / I and L = N × s / I, where N is the number of cache units in the cache module, a cache unit is the space occupied by one cache block, b is the quota amplification coefficient, b > 1, s is the quota reduction coefficient, 0 < s < 1, and I is the number of images or virtual machines served by the cache module.
Optionally, the device for replacing data in a cache module provided by the embodiments of the invention further comprises an order receiving module and a second quota setting module.
The order receiving module receives, when the image or virtual machine of the new data block differs from the images or virtual machines of the cache blocks in the cache module, and before the replacement selection module selects a cache block according to the quota as the candidate replacement block, an order indicating that an image or virtual machine is to be shut down.
The second quota setting module clears, according to the order received by the order receiving module, the cache blocks of the image or virtual machine that the order indicates is to be shut down, and calculates the high quota H and the minimum quota L of the remaining images or virtual machines. Here H = N × b / I and L = N × s / I, where N is the number of cache units in the cache module, a cache unit is the space occupied by one cache block, b is the quota amplification coefficient, b > 1, s is the quota reduction coefficient, 0 < s < 1, and I is the number of images or virtual machines served by the cache module.
In the device embodiments above, the replacement selection module performs cache quota management — selecting the candidate replacement block according to a quota — which alleviates the cache space imbalance among images or virtual machines that arises when one or a few comparatively active images or virtual machines occupy a disproportionately large share of the Cache module's space and cause the hot data cached by others to be evicted. Specifically, a quota range is set for each image or virtual machine, limiting the number of cache blocks each may hold in the Cache module, so that the hot data cached by inactive images or virtual machines is not easily evicted, effectively balancing the cache space occupied by each image or virtual machine and safeguarding the performance of each.
Fig. 4 is a schematic diagram of the structure and application of another device for replacing data in a cache module provided by an embodiment of the invention. As shown in Fig. 4, the device comprises a cache capacity management module 41, a quota setting module 42, a quantity control module 43, a cache replacement module 44, a quota management module 45 and an add/delete management module 46.
The cache module may be high-speed storage, that is, a storage device whose access speed and IOPS far exceed those of an ordinary hard disk, such as memory without power-off protection, non-volatile random access memory (NVRAM) with power-off protection, or a storage device capable of high-speed interactive operation such as a solid state disk (SSD).
The cache module provides a caching service for the images running in a desktop cloud VDI scenario. When quota management is performed for the images or virtual machines running in the scenario, the modules cooperate with one another as shown in Fig. 4.
The cache capacity management module 41 is configured to obtain from the cache module the number of cache units in the cache module.
The quota setting module 42 sets the quota limit of each image or virtual machine according to the number of cache units obtained by the cache capacity management module 41 and the number of images or virtual machines obtained by the quantity control module 43. Suppose the total capacity of the cache module is M and the size of each cache unit is K; then the number of cache units is N = M / K. Let the number of images be I, the quota amplification coefficient be b, and the quota reduction coefficient be s. When I > 0, the upper quota is H = N × b / I and the lower quota is L = N × s / I; when I changes, H and L change accordingly. If fixed quotas are used instead, the upper and lower quotas of each image or virtual machine may be determined according to actual conditions.
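The quota computation above can be sketched as follows. This is an illustrative rendering of the formulas only; the function name and the example coefficient values are not taken from the patent.

```python
def compute_quotas(M, K, I, b, s):
    """Compute per-image cache quotas from the patent's formulas.

    M: total cache capacity, K: size of one cache unit,
    I: number of images/VMs served, b: quota amplification
    coefficient (b > 1), s: quota reduction coefficient (0 < s < 1).
    Returns (N, H, L): unit count, upper quota, lower quota.
    """
    if I <= 0:
        raise ValueError("at least one image/VM must be served")
    if not (b > 1 and 0 < s < 1):
        raise ValueError("require b > 1 and 0 < s < 1")
    N = M // K        # N = M / K: number of cache units
    H = N * b / I     # upper quota H = N x b / I
    L = N * s / I     # lower quota L = N x s / I
    return N, H, L
```

For example, a 1024-unit-capacity cache with 4-sized units serving 8 images at b = 2, s = 0.5 yields N = 256, H = 64 and L = 16 cache blocks per image.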
The quota limits may also be supplied to the quota setting module 42 from outside; in that case, the quota setting module 42 does not need to perform the quota calculation.
The cache replacement module 44 applies a cache replacement algorithm to perform cache replacement on the cache module. The cache replacement algorithm may implement any of various replacement policies: it may be an LRU cache replacement algorithm, an MRU cache replacement algorithm, or any other efficient cache replacement algorithm.
In a desktop cloud VDI scenario that uses high-speed storage as the cache module, the cache module has an overwhelming advantage in data access speed over ordinary storage, but its capacity is limited, so when ordinary storage uses the cache module for data caching, data blocks will inevitably have to be replaced. While block content is being written into the cache module, if a free cache unit is available in the cache module, using the cache replacement module 44 to perform cache replacement suffices. When no free cache unit remains in the cache module, the quota management module 45 adjusts the replacement policy for cache blocks according to the upper quota and lower quota of the images or virtual machines set by the quota setting module 42; this decision and adjustment process is the key part of this embodiment of the invention.
The add/delete management module 46 is configured to obtain information about images or virtual machines being shut down, and to determine whether images or virtual machines have been added; that is, the add/delete management module 46 learns whether the number of images or virtual machines running in the desktop cloud VDI scenario has changed.
When the quota setting module 42 learns from the add/delete management module 46 that the number of images or virtual machines has changed, it recalculates the quota limit of each image or virtual machine running in the desktop cloud VDI scenario. Thus, when an image or virtual machine is added or deleted, the quota setting module 42 dynamically adjusts the quota limit of each image or virtual machine in the cache module, and the cache blocks in the cache module are then scheduled for the images or virtual machines according to this quota limit.
Whenever an image or virtual machine is added to or deleted from the desktop cloud VDI scenario, the number of images in the scenario changes, and the data-block quota of each image in the cache module changes with it. Hence, when a data block is about to be written into the cache module, not only must a cache replacement algorithm be applied, but quota management must also be performed.
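The dynamic adjustment performed by the quota setting module 42 on notification from the add/delete management module 46 can be sketched as follows. The class shape and coefficient values are assumptions made for this illustration.

```python
class QuotaManager:
    """Sketch of dynamic quota recalculation: whenever an image/VM
    is added or removed, the upper and lower quotas are recomputed
    from H = N*b/I and L = N*s/I."""

    def __init__(self, n_units, b, s):
        assert b > 1 and 0 < s < 1
        self.n_units = n_units        # N: cache units in the cache module
        self.b, self.s = b, s
        self.images = set()
        self.H = self.L = None

    def _recompute(self):
        I = len(self.images)
        if I > 0:
            self.H = self.n_units * self.b / I   # upper quota
            self.L = self.n_units * self.s / I   # lower quota

    def add_image(self, name):
        self.images.add(name)
        self._recompute()

    def remove_image(self, name):
        self.images.discard(name)
        self._recompute()
```

With 120 cache units, b = 2 and s = 0.5, two images each get H = 120, L = 30; adding a third image tightens the quotas to H = 80, L = 20.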
The quota setting module 42, the quantity control module 43, the cache replacement module 44 and the quota management module 45 together realize the functions of the replacement selection module and the replacement module described above; only the division into modules and the allocation of functions differ.
The above device for replacing cache module data may be integrated with the cache module, as shown in Fig. 5.
Fig. 5 is a schematic diagram of an application scenario of the device for replacing cache module data provided by an embodiment of the invention. As shown in Fig. 5, the cache-and-replacement device 51 comprises a cache module and a device for replacing the data in that cache module; that is, in this embodiment, the cache module and the device for replacing its data are integrated.
The cache-and-replacement device 51 is obtained by virtualization through the virtualization software system of the desktop cloud VDI technology on the server domain Domain 0, and provides a caching service for the images or virtual machines running on Domain U.
When a new data block of an image is about to be written into the cache module, the processing flow of the cache-and-replacement device 51, shown in Fig. 6, comprises:
Step 61: a data block is to be written into the cache module.
Step 62: the cache-and-replacement device 51 determines whether the remaining space of the cache module is greater than 0; if so, step 63 is executed; otherwise, step 65 is executed.
Step 63: the cache-and-replacement device 51 obtains a cache unit for storing the data block to be written into the cache module, and determines whether the cache unit was obtained successfully; if so, step 64 is executed; otherwise, writing the data block fails.
Step 65: the cache-and-replacement device 51 determines whether the number of cache blocks in the cache module belonging to the image or virtual machine of the data block of step 61 has reached the upper quota; if so, step 66 is executed; otherwise, step 67 is executed.
Step 66: the cache-and-replacement device 51 finds, according to the cache replacement algorithm, the cache block A to be replaced first from among the cache blocks of the image or virtual machine to which the data block of step 61 belongs, uses the storage unit occupied by cache block A as the storage space for the data block of step 61, and then executes step 64.
Step 67: the cache-and-replacement device 51 finds, according to the cache replacement algorithm, the cache block B to be replaced first.
Step 68: the cache-and-replacement device 51 determines whether the number of cache blocks in the cache module belonging to the image or virtual machine of cache block B is greater than the lower quota; if so, step 69 is executed; otherwise, step 610 is executed.
Step 69: the cache-and-replacement device 51 uses the storage unit occupied by cache block B as the storage space for the data block of step 61, and then executes step 64.
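The victim-selection part of the flow (steps 65 through 69) can be sketched as below. This is a sketch only: the actual write (step 64) and the fallback branch (step 610) are not shown in this excerpt, so they are represented merely as placeholders, and the function and parameter names are assumptions.

```python
def choose_victim(new_owner, block_counts, H, L, lru_order):
    """Quota-aware victim selection, mirroring steps 65-69.

    new_owner: image/VM to which the incoming data block belongs
    block_counts: dict mapping image/VM -> current cache-block count
    H, L: upper and lower quotas
    lru_order: list of (block_id, owner), least recently used first
    Returns the block id to replace, or None (step 610, not shown).
    """
    # Step 65: the owner has reached its upper quota ->
    # step 66: replace within the owner's own blocks.
    if block_counts[new_owner] >= H:
        for block_id, owner in lru_order:
            if owner == new_owner:
                return block_id
    else:
        # Step 67: candidate B chosen by the replacement algorithm.
        block_id, owner = lru_order[0]
        # Step 68: evict B only if its owner stays above the lower quota.
        if block_counts[owner] > L:
            return block_id           # step 69
    return None  # continue with step 610 (outside this excerpt)
```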
In a desktop cloud VDI scenario with multiple images or virtual machines of differing activity levels, the above embodiment performs cache replacement according to the quotas, alleviating the problem of a few active images or virtual machines seizing a large amount of cache module space, so that the hot data cached by inactive images or virtual machines is not easily evicted from the cache module. The cache space occupied by the images or virtual machines is thereby effectively balanced, and the performance of each image or virtual machine is guaranteed.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or make equivalent replacements of some or all of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A method for replacing data in a cache module, characterized by comprising:
receiving a new data block or information that a new data block is to be written into the cache module;
in the case where the space of the cache module is fully occupied, selecting a cache block from the cache module according to a quota as a candidate replacement block, the cache block being a data block stored in the cache module; and
replacing the candidate replacement block with the new data block.
2. The method according to claim 1, characterized in that selecting a cache block from the cache module according to a quota as a candidate replacement block comprises:
when the number of cache blocks of the image or virtual machine to which the new data block belongs has reached an upper quota H, taking, according to a cache replacement algorithm, the cache block to be replaced first among the cache blocks of the image or virtual machine to which the new data block belongs as the candidate replacement block; wherein the upper quota H = N × b / I, N is the number of cache units in the cache module, a cache unit being the space occupied by one cache block, b is a quota amplification coefficient with b > 1, and I is the number of images or virtual machines served by the cache module.
3. The method according to claim 1, characterized in that selecting a cache block from the cache module according to a quota as a candidate replacement block comprises:
when the number of cache blocks of the image or virtual machine to which the new data block belongs has not reached an upper quota H, taking, according to a cache replacement algorithm, the cache block to be replaced first among the cache blocks of the images or virtual machines within their quota limits as the candidate replacement block; an image or virtual machine within its quota limit being one whose number of occupied cache units in the cache module is greater than a lower quota L and less than the upper quota H; wherein the upper quota H = N × b / I, the lower quota L = N × s / I, N is the number of cache units in the cache module, a cache unit being the space occupied by one cache block, b is a quota amplification coefficient with b > 1, s is a quota reduction coefficient with 0 < s < 1, and I is the number of images or virtual machines served by the cache module.
4. The method according to any one of claims 1-3, characterized in that, in the case where the image or virtual machine to which the new data block belongs differs from the image or virtual machine to which a cache block in the cache module belongs, before selecting a cache block from the cache module according to the quota as the candidate replacement block, the method further comprises:
calculating the upper quota H and the lower quota L of each image or virtual machine, including the image or virtual machine to which the new data block belongs, wherein the upper quota H = N × b / I, the lower quota L = N × s / I, N is the number of cache units in the cache module, a cache unit being the space occupied by one cache block, b is a quota amplification coefficient with b > 1, s is a quota reduction coefficient with 0 < s < 1, and I is the number of images or virtual machines served by the cache module.
5. The method according to any one of claims 1-3, characterized in that, in the case where the image or virtual machine to which the new data block belongs differs from the image or virtual machine to which a cache block in the cache module belongs, before selecting a cache block from the cache module according to the quota as the candidate replacement block, the method further comprises:
receiving a command indicating that an image or virtual machine is to be shut down; and
removing, according to the command, the cache blocks of the image or virtual machine indicated to be shut down, and calculating the upper quota H and the lower quota L of the remaining images or virtual machines, wherein the upper quota H = N × b / I, the lower quota L = N × s / I, N is the number of cache units in the cache module, a cache unit being the space occupied by one cache block, b is a quota amplification coefficient with b > 1, s is a quota reduction coefficient with 0 < s < 1, and I is the number of images or virtual machines served by the cache module.
6. A device for replacing data in a cache module, characterized by comprising:
a data receiving module, configured to receive a new data block or information that a new data block is to be written into the cache module;
a replacement selection module, configured to, in the case where the space of the cache module is fully occupied, select a cache block from the cache module according to a quota as a candidate replacement block, the cache block being a data block stored in the cache module; and
a replacement module, configured to replace the candidate replacement block with the new data block.
7. The device according to claim 6, characterized in that the replacement selection module is specifically configured to: when the number of cache blocks of the image or virtual machine to which the new data block belongs has reached an upper quota H, take, according to a cache replacement algorithm, the cache block to be replaced first among the cache blocks of the image or virtual machine to which the new data block belongs as the candidate replacement block; wherein the upper quota H = N × b / I, N is the number of cache units in the cache module, a cache unit being the space occupied by one cache block, b is a quota amplification coefficient with b > 1, and I is the number of images or virtual machines served by the cache module.
8. The device according to claim 6, characterized in that the replacement selection module is specifically configured to: when the number of cache blocks of the image or virtual machine to which the new data block belongs has not reached an upper quota H, take, according to a cache replacement algorithm, the cache block to be replaced first among the cache blocks of the images or virtual machines within their quota limits as the candidate replacement block; an image or virtual machine within its quota limit being one whose number of occupied cache units in the cache module is greater than a lower quota L and less than the upper quota H; wherein the upper quota H = N × b / I, the lower quota L = N × s / I, N is the number of cache units in the cache module, a cache unit being the space occupied by one cache block, b is a quota amplification coefficient with b > 1, s is a quota reduction coefficient with 0 < s < 1, and I is the number of images or virtual machines served by the cache module.
9. The device according to any one of claims 6-8, characterized by further comprising:
a first quota setting module, configured to, in the case where the image or virtual machine to which the new data block belongs differs from the image or virtual machine to which a cache block in the cache module belongs, before the replacement selection module selects a cache block from the cache module according to the quota as the candidate replacement block, calculate the upper quota H and the lower quota L of each image or virtual machine, including the image or virtual machine to which the new data block belongs, wherein the upper quota H = N × b / I, the lower quota L = N × s / I, N is the number of cache units in the cache module, a cache unit being the space occupied by one cache block, b is a quota amplification coefficient with b > 1, s is a quota reduction coefficient with 0 < s < 1, and I is the number of images or virtual machines served by the cache module.
10. The device according to any one of claims 6-8, characterized by further comprising:
a command receiving module, configured to, in the case where the image or virtual machine to which the new data block belongs differs from the image or virtual machine to which a cache block in the cache module belongs, before the replacement selection module selects a cache block from the cache module according to the quota as the candidate replacement block, receive a command indicating that an image or virtual machine is to be shut down; and
a second quota setting module, configured to remove, according to the command, the cache blocks of the image or virtual machine indicated to be shut down, and to calculate the upper quota H and the lower quota L of the remaining images or virtual machines, wherein the upper quota H = N × b / I, the lower quota L = N × s / I, N is the number of cache units in the cache module, a cache unit being the space occupied by one cache block, b is a quota amplification coefficient with b > 1, s is a quota reduction coefficient with 0 < s < 1, and I is the number of images or virtual machines served by the cache module.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2012104534598A CN102999444A (en) | 2012-11-13 | 2012-11-13 | Method and device for replacing data in caching module |
PCT/CN2013/074843 WO2014075428A1 (en) | 2012-11-13 | 2013-04-27 | Method and device for replacing data in cache module |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2012104534598A CN102999444A (en) | 2012-11-13 | 2012-11-13 | Method and device for replacing data in caching module |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102999444A true CN102999444A (en) | 2013-03-27 |
Family
ID=47928034
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2012104534598A Pending CN102999444A (en) | 2012-11-13 | 2012-11-13 | Method and device for replacing data in caching module |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN102999444A (en) |
WO (1) | WO2014075428A1 (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014075428A1 (en) * | 2012-11-13 | 2014-05-22 | 华为技术有限公司 | Method and device for replacing data in cache module |
CN104331492A (en) * | 2014-11-14 | 2015-02-04 | 北京国双科技有限公司 | Method and device for caching multi-instance data |
WO2015035928A1 (en) * | 2013-09-16 | 2015-03-19 | 华为技术有限公司 | Method and apparatus for dividing cache |
WO2016095761A1 (en) * | 2014-12-16 | 2016-06-23 | 华为技术有限公司 | Cache processing method and apparatus |
CN106372007A (en) * | 2015-07-23 | 2017-02-01 | Arm 有限公司 | Cache usage estimation |
CN106897442A (en) * | 2017-02-28 | 2017-06-27 | 郑州云海信息技术有限公司 | A kind of distributed file system user quota method for pre-distributing and distribution system |
WO2018082695A1 (en) * | 2016-11-07 | 2018-05-11 | 华为技术有限公司 | Cache replacement method and device |
CN108932150A (en) * | 2017-05-24 | 2018-12-04 | 中兴通讯股份有限公司 | Caching method, device and medium based on SSD and disk mixing storage |
CN110096333A (en) * | 2019-04-18 | 2019-08-06 | 华中科技大学 | A kind of container performance accelerated method based on nonvolatile memory |
CN111625678A (en) * | 2019-02-28 | 2020-09-04 | 北京字节跳动网络技术有限公司 | Information processing method, apparatus and computer readable storage medium |
CN112667534A (en) * | 2020-12-31 | 2021-04-16 | 海光信息技术股份有限公司 | Buffer storage device, processor and electronic equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030079087A1 (en) * | 2001-10-19 | 2003-04-24 | Nec Corporation | Cache memory control unit and method |
CN101763226A (en) * | 2010-01-19 | 2010-06-30 | 北京航空航天大学 | Cache method for virtual storage devices |
US20110320720A1 (en) * | 2010-06-23 | 2011-12-29 | International Business Machines Corporation | Cache Line Replacement In A Symmetric Multiprocessing Computer |
CN102483718A (en) * | 2009-08-25 | 2012-05-30 | 国际商业机器公司 | Cache partitioning in virtualized environments |
CN102609362A (en) * | 2012-01-30 | 2012-07-25 | 复旦大学 | Method for dynamically dividing shared high-speed caches and circuit |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102999444A (en) * | 2012-11-13 | 2013-03-27 | 华为技术有限公司 | Method and device for replacing data in caching module |
- 2012-11-13: CN application CN2012104534598A filed, published as CN102999444A (status: Pending)
- 2013-04-27: PCT application PCT/CN2013/074843 filed, published as WO2014075428A1 (Application Filing)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030079087A1 (en) * | 2001-10-19 | 2003-04-24 | Nec Corporation | Cache memory control unit and method |
CN102483718A (en) * | 2009-08-25 | 2012-05-30 | 国际商业机器公司 | Cache partitioning in virtualized environments |
CN101763226A (en) * | 2010-01-19 | 2010-06-30 | 北京航空航天大学 | Cache method for virtual storage devices |
US20110320720A1 (en) * | 2010-06-23 | 2011-12-29 | International Business Machines Corporation | Cache Line Replacement In A Symmetric Multiprocessing Computer |
CN102609362A (en) * | 2012-01-30 | 2012-07-25 | 复旦大学 | Method for dynamically dividing shared high-speed caches and circuit |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014075428A1 (en) * | 2012-11-13 | 2014-05-22 | 华为技术有限公司 | Method and device for replacing data in cache module |
CN104461928B (en) * | 2013-09-16 | 2018-11-16 | 华为技术有限公司 | Divide the method and device of cache |
WO2015035928A1 (en) * | 2013-09-16 | 2015-03-19 | 华为技术有限公司 | Method and apparatus for dividing cache |
CN104461928A (en) * | 2013-09-16 | 2015-03-25 | 华为技术有限公司 | Method and device for dividing caches |
CN104331492A (en) * | 2014-11-14 | 2015-02-04 | 北京国双科技有限公司 | Method and device for caching multi-instance data |
CN104331492B (en) * | 2014-11-14 | 2017-11-21 | 北京国双科技有限公司 | A kind of method and device for caching more instance datas |
WO2016095761A1 (en) * | 2014-12-16 | 2016-06-23 | 华为技术有限公司 | Cache processing method and apparatus |
CN106372007A (en) * | 2015-07-23 | 2017-02-01 | Arm 有限公司 | Cache usage estimation |
CN106372007B (en) * | 2015-07-23 | 2022-04-19 | Arm 有限公司 | Cache utilization estimation |
WO2018082695A1 (en) * | 2016-11-07 | 2018-05-11 | 华为技术有限公司 | Cache replacement method and device |
CN106897442A (en) * | 2017-02-28 | 2017-06-27 | 郑州云海信息技术有限公司 | A kind of distributed file system user quota method for pre-distributing and distribution system |
CN108932150A (en) * | 2017-05-24 | 2018-12-04 | 中兴通讯股份有限公司 | Caching method, device and medium based on SSD and disk mixing storage |
CN108932150B (en) * | 2017-05-24 | 2023-09-15 | 中兴通讯股份有限公司 | Caching method, device and medium based on SSD and disk hybrid storage |
CN111625678A (en) * | 2019-02-28 | 2020-09-04 | 北京字节跳动网络技术有限公司 | Information processing method, apparatus and computer readable storage medium |
CN110096333A (en) * | 2019-04-18 | 2019-08-06 | 华中科技大学 | A kind of container performance accelerated method based on nonvolatile memory |
CN110096333B (en) * | 2019-04-18 | 2021-06-29 | 华中科技大学 | Container performance acceleration method based on nonvolatile memory |
CN112667534A (en) * | 2020-12-31 | 2021-04-16 | 海光信息技术股份有限公司 | Buffer storage device, processor and electronic equipment |
CN112667534B (en) * | 2020-12-31 | 2023-10-20 | 海光信息技术股份有限公司 | Buffer storage device, processor and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
WO2014075428A1 (en) | 2014-05-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102999444A (en) | Method and device for replacing data in caching module | |
US8793427B2 (en) | Remote memory for virtual machines | |
CN103019962B (en) | Data buffer storage disposal route, device and system | |
CN110597451B (en) | Method for realizing virtualized cache and physical machine | |
CN104503703B (en) | The treating method and apparatus of caching | |
JP6106028B2 (en) | Server and cache control method | |
US20120290786A1 (en) | Selective caching in a storage system | |
US11010056B2 (en) | Data operating method, device, and system | |
CN104580437A (en) | Cloud storage client and high-efficiency data access method thereof | |
CN102521330A (en) | Mirror distributed storage method under desktop virtual environment | |
US9684625B2 (en) | Asynchronously prefetching sharable memory pages | |
CN113778662B (en) | Memory recovery method and device | |
US20140089562A1 (en) | Efficient i/o processing in storage system | |
CN106133707A (en) | Cache management | |
WO2020069074A1 (en) | Write stream separation into multiple partitions | |
US9792050B2 (en) | Distributed caching systems and methods | |
US10657069B2 (en) | Fine-grained cache operations on data volumes | |
US20160342542A1 (en) | Delay destage of data based on sync command | |
CN109086141A (en) | EMS memory management process and device and computer readable storage medium | |
CN106293953B (en) | A kind of method and system of the shared display data of access | |
CN111427804B (en) | Method for reducing missing page interruption times, storage medium and intelligent terminal | |
US20150100663A1 (en) | Computer system, cache management method, and computer | |
US8732404B2 (en) | Method and apparatus for managing buffer cache to perform page replacement by using reference time information regarding time at which page is referred to | |
JP4865075B1 (en) | Computer and computer system | |
US10083117B2 (en) | Filtering write request sequences |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C12 | Rejection of a patent application after its publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20130327 |