CN103257935A - Cache management method and application thereof - Google Patents

Cache management method and application thereof Download PDF

Info

Publication number
CN103257935A
Authority
CN
China
Prior art keywords
buffer memory
screening
queue
capacity
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013101384199A
Other languages
Chinese (zh)
Other versions
CN103257935B (en)
Inventor
陈俭喜
刘景宁
冯丹
黄赛
王璞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201310138419.9A priority Critical patent/CN103257935B/en
Publication of CN103257935A publication Critical patent/CN103257935A/en
Application granted granted Critical
Publication of CN103257935B publication Critical patent/CN103257935B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0871Allocation or management of cache space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0868Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12Replacement control
    • G06F12/121Replacement control using replacement algorithms
    • G06F12/123Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1016Performance improvement
    • G06F2212/1021Hit rate improvement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1032Reliability improvement, data loss prevention, degraded operation etc
    • G06F2212/1036Life time enhancement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/21Employing a record carrier using a specific recording technology
    • G06F2212/214Solid state disk

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a cache management method. The method comprises the following steps: if blank space exists in the cache, writing the request data into the blank cache space; if the cache is full and the block number of the request data is not recorded in a cache screening queue, writing the block number to the tail of the cache screening queue; and if the cache is full and the block number of the request data is already recorded in the cache screening queue, deleting the block number from the cache screening queue and writing the request data into the cache, wherein the cache screening queue is an LRU queue used to record the disk block numbers of data that were recently accessed but missed the cache. The invention further discloses an application of the method to controlling the execution of read/write requests. The cache management method improves the cache hit rate while prolonging the service life of the cache device; for application workloads with different characteristics it reduces writes to the cache device as much as possible on the premise of guaranteeing the cache hit rate, and its parameter tuning requires no manual intervention.

Description

Cache management method and application thereof
Technical field
The invention belongs to the field of computer data storage, and more particularly relates to a cache management method.
Background technology
A cache is a memory used for high-speed data exchange. In a disk storage system, the disk must perform mechanical operations such as seeking and positioning when reading and writing data, so its response speed is far slower than the processing speed of the storage system. The role of the cache is to store the frequently accessed hot data of the disk storage system; when the storage system needs these data, they are read directly from the cache, providing fast responses and improving the performance of the whole disk storage system. However, cache devices capable of high-speed data exchange are generally expensive, so cache capacity is limited. Cache management therefore aims to keep the frequently accessed data within the limited cache capacity and to raise the cache hit rate (the proportion of requested data that can be read from the cache) as much as possible, so as to improve the performance of the whole disk storage system.
Compared with a traditional disk, a solid state drive (SSD) has no mechanical parts and therefore no time-consuming seek and positioning operations; it provides higher random access performance and also has advantages such as low power consumption, low noise and good shock resistance. Compared with RAM (random access memory), an SSD offers much larger capacity under the same cost budget and retains its data when power is lost. For these reasons, many storage products use an SSD as the cache of the disk, trading a relatively low cost for a significant improvement in storage system performance.
Nevertheless, SSDs have their own defects, the most important of which is the erase problem. An SSD is a storage device whose medium is flash memory. Flash memory is a semiconductor storage device made up of floating-gate transistor cells; by programming, each cell can be placed in one of several states and thereby represent one or more bits of data. Depending on the gate structure, flash memory is divided into NAND and NOR types. NAND flash has higher storage density and is commonly used for mass data storage, for example in SSDs. A NAND flash chip is usually organized into blocks, and a block typically contains 64 to 256 pages. According to storage density, NAND flash is further divided into SLC and MLC: each cell of SLC flash stores only 1 bit of data, while each cell of MLC flash stores several bits, giving a higher storage density; many large-capacity SSDs use MLC flash as the medium. The physical characteristics of flash determine that data stored in it must first be erased before new data can be written. NAND flash erases in units of blocks, resetting all bits of a block at once. The number of erase cycles of flash is limited: SLC flash can be erased about 100,000 times, while MLC flash can be erased only about 10,000 times or even fewer. Frequent write operations therefore shorten the service life, and under write-intensive random workloads the SSD's own logging and garbage-collection mechanisms add extra writes and erases, shortening the service life further.
Traditional cache management methods take improving the cache hit rate as their sole goal. Based on the principle of temporal locality (a recently accessed data block is very likely to be accessed again in the near future), these methods usually write every accessed data block into the cache so that it is available for later accesses. These methods were not designed for SSDs and do not consider the SSD's limited write endurance. When an SSD is used as the cache device, cache replacement brings extra write operations; in particular, for read-dominated workloads, the write operations on the cache device come mainly from replacement. On the other hand, the access frequencies of data blocks in real workloads are unevenly distributed: the frequently accessed blocks usually account for only a small fraction, while most other blocks are accessed rarely. Caching those blocks on the SSD does not noticeably improve performance, but needlessly consumes the SSD's write endurance.
Summary of the invention
Above defective or improvement demand at prior art, the invention provides a kind of buffer memory management method and in the executory application of read-write requests, its purpose is by the screening to data block, the hot spot data piece that buffer memory is accessed frequently, reduced the replacement operation after buffer memory is write completely, solve the quick loss solid state hard disc of the traditional buffer memory management method buffer memory problem in serviceable life thus, solved traditional buffer memory management method simultaneously the non-frequent visit data piece of part is write the not high enough problem of cache hit rate behind the buffer memory.
According to one aspect of the present invention, a cache management method is provided for controlling request data that misses the cache, screening the request data written into the cache and reducing cache replacement operations. The method specifically comprises:
if the cache is not full, writing the request data into a blank area of the cache;
if the cache is full and the block number of the request data is not recorded in a cache screening queue, not writing the request data into the cache, but writing its block number to the tail of the cache screening queue;
if the cache is full and the block number of the request data is already recorded in the cache screening queue, deleting that block number from the cache screening queue and writing the request data into the cache;
wherein the cache screening queue is an LRU queue used to record the disk block numbers of data that were recently accessed but missed the cache.
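For illustration only, the following minimal Python sketch shows the admission logic described above; it is not part of the original disclosure, and the names (ScreeningCache, on_miss, screen) as well as the use of in-memory dictionaries in place of the actual cache device are assumptions made for the example.

```python
from collections import OrderedDict

class ScreeningCache:
    """Minimal sketch of the screening-based admission policy (illustrative only).

    `capacity` is C, the maximum number of data blocks the cache can store;
    `screen_capacity` is C_r, the capacity of the cache screening queue.
    """

    def __init__(self, capacity, screen_capacity):
        self.capacity = capacity
        self.cache = OrderedDict()    # buffer queue: block number -> data, tail = most recently accessed
        self.screen = OrderedDict()   # cache screening queue: block numbers of recent misses
        self.screen_capacity = screen_capacity

    def on_miss(self, block_no, data):
        if len(self.cache) < self.capacity:
            # cache not full: write the request data into a blank cache block
            self.cache[block_no] = data
        elif block_no not in self.screen:
            # cache full, first recent miss of this block: record only its block number
            self.screen[block_no] = None
            if len(self.screen) > self.screen_capacity:
                self.screen.popitem(last=False)   # drop the least recently missed block number
        else:
            # cache full, block number already in the screening queue: admit the block
            del self.screen[block_no]
            self.cache.popitem(last=False)        # evict the head (LRU) block of the buffer queue
            self.cache[block_no] = data
```

Here the insertion order of the OrderedDict plays the role of the LRU order: the first entry is the queue head (least recently used) and the last entry is the tail.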
Preferably, the capacity C_r of the cache screening queue can be adjusted adaptively according to the read/write requests.
Preferably, the capacity C_r of the cache screening queue can be dynamically adjusted to C_r - C/(C - C_r) or C_r + C/C_r, where C is the maximum number of data blocks the cache can store.
Preferably, when a read/write request hits the cache, the capacity C_r of the cache screening queue is adaptively adjusted to C_r - C/(C - C_r); when a read/write request misses the cache, C_r is adaptively adjusted to C_r + C/C_r.
Preferably, after adjustment of the capacity C_r of the cache screening queue, if C_r is less than 0.1 × C, C_r is set to 0.1 × C; if C_r is greater than 0.9 × C, C_r is set to 0.9 × C, where C is the maximum number of data blocks the cache can store.
Preferably, the initial value of the capacity C_r of the cache screening queue is set to 0.1 × C, where C is the maximum number of data blocks the cache can store.
According to another aspect of the present invention, an execution control method for read/write requests is provided, which applies the above cache management method to control the caching of the read/write request data. The method specifically comprises:
(1) establishing an LRU queue as the cache screening queue, wherein the cache screening queue is used to record the disk block numbers of data that were recently accessed but missed the cache;
(2) adaptively adjusting the capacity of the cache screening queue according to the read/write requests;
(3) selecting, according to the read/write requests, the data to be written into the cache, comprising:
if the request data hits the cache, no screening is needed; if the request data misses the cache, performing the above-described cache management method to carry out cache control;
(4) redirecting and executing the read/write requests:
(4.1) for request data that is not written into the cache, redirecting the read/write request directly to the corresponding address on the disk and executing the read/write operation;
(4.2) for request data that is already in the cache, or that is written into the cache after screening, moving the corresponding data block in the cache to the tail of the buffer queue, then redirecting the read/write request to the corresponding address in the cache and executing the read/write operation.
Preferably, the LRU queue is established in memory and initialized as the cache screening queue with capacity C_r; the initial value of C_r is preferably set to 0.1 × C, although other values may be used, where C is the maximum number of data blocks the cache can store. The capacity C_r can be adjusted adaptively as the application workload changes.
Preferably, the adaptive adjustment method for the capacity of the cache screening queue is as follows:
(2.1) judging whether the read/write request data hits the cache; if it hits, going to step (2.2); otherwise, going to step (2.3);
(2.2) automatically adjusting the capacity C_r of the cache screening queue to C_r - C/(C - C_r); if the adjusted C_r is less than 0.1 × C, setting C_r = 0.1 × C, where C is the maximum number of data blocks the cache can store;
(2.3) automatically adjusting the capacity C_r of the cache screening queue to C_r + C/C_r; if the adjusted C_r is greater than 0.9 × C, setting C_r = 0.9 × C, where C is the maximum number of data blocks the cache can store.
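The following sketch illustrates steps (2.1)-(2.3); it is an illustration under the stated assumptions, and the function name adjust_screen_capacity is chosen for the example rather than taken from the patent.

```python
def adjust_screen_capacity(c_r: float, hit: bool, c: int) -> float:
    """Adjust the screening-queue capacity C_r after one request (illustrative sketch).

    On a hit, C_r shrinks by C/(C - C_r), making the screening stricter;
    on a miss, C_r grows by C/C_r, making the screening looser.
    The result is clamped to the range [0.1*C, 0.9*C].
    """
    if hit:
        c_r = c_r - c / (c - c_r)
    else:
        c_r = c_r + c / c_r
    return max(0.1 * c, min(0.9 * c, c_r))
```

For example, with C = 1000 and C_r = 100, a miss raises C_r to 110, while a hit lowers it to roughly 98.9.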
Preferably, when the request data is to be written into the cache and no blank block remains in the cache, the data block corresponding to the head of the buffer queue is evicted first, the freed cache block is allocated to the request data, and the requested data block is then moved to the tail of the buffer queue.
Preferably, the cache device is a flash memory device.
In the present invention, the cache screening queue records the disk block numbers of data that were recently accessed but missed the cache; it does not store actual disk data. When the number of block numbers recorded in the cache screening queue exceeds its capacity C_r, the queue keeps only the C_r most recently accessed block numbers according to the LRU (least recently used) policy. The head of the cache screening queue holds the least recently accessed block number, and the tail holds the most recently accessed block number. Therefore, each time a new block number is written to the cache screening queue, it is inserted at the tail; the queue is then checked against its capacity C_r, and if the capacity is exceeded, the block number at the head of the queue is deleted.
In the present invention, the capacity of the cache screening queue directly affects how well it screens data blocks. If the capacity is too small, many frequently accessed data blocks will not be identified by the screening; if it is too large, many data blocks that are not frequently accessed will pass the screening. The cache screening queue of the invention can adjust its capacity to an appropriate value according to the application workload, thereby controlling the strictness of the screening and reducing unnecessary replacement operations as much as possible while guaranteeing the hit rate.
In real application workloads, the frequently accessed hot data blocks account for only a small fraction; most other data blocks are accessed few times or even only once. Writing these rarely used blocks into the cache does not improve the cache hit rate but wastes the limited cache space, and when the cache device is an SSD it also shortens the device's service life. Screening out the frequently accessed blocks first and then writing only those into the limited-capacity cache therefore improves the cache hit rate, reduces cache replacement operations, and prolongs the service life of the SSD cache.
The cache management method proposed by the invention and its application use the cache screening queue to pick out the data blocks with higher access frequency and keep them in the cache, improving the cache hit rate; at the same time, unnecessary cache replacement operations are avoided, reducing the write load of the SSD used as the cache and prolonging its service life.
Description of drawings
Fig. 1 is a schematic diagram of the system to which the method of the embodiment of the invention is applied;
Fig. 2 is a flow chart of the implementation of the method of the embodiment of the invention;
Fig. 3 is a flow chart of the screening of cached data blocks in the embodiment of the invention.
Embodiment
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention and not to limit it.
The cache management method of the present invention can be applied to a storage system that uses a solid state drive as the cache device, as shown in Fig. 1. The cache management method proposed by the invention is preferably designed for disk storage systems that use flash memory (such as an SSD) as the cache medium, but its application is not limited to such systems.
As shown in Fig. 2, the cache management method of this embodiment controls request data that misses the cache, screening the request data written into the cache and reducing cache replacement operations. The method specifically comprises:
if the cache is not full, writing the request data into a blank area of the cache;
if the cache is full and the block number of the request data is not recorded in the cache screening queue, not writing the request data into the cache, but writing its block number to the tail of the cache screening queue, wherein the cache screening queue is an LRU queue used to record the disk block numbers of data that were recently accessed but missed the cache;
if the cache is full and the block number of the request data is already recorded in the cache screening queue, deleting that block number from the cache screening queue and writing the request data into the cache.
The capacity C_r of the cache screening queue can be adjusted adaptively according to the read/write requests, i.e. adjusted to C_r - C/(C - C_r) or C_r + C/C_r: when a read/write request hits the cache, C_r is adaptively adjusted to C_r - C/(C - C_r); when a read/write request misses the cache, C_r is adaptively adjusted to C_r + C/C_r. After adjustment, if C_r is less than 0.1 × C, C_r is set to 0.1 × C; if C_r is greater than 0.9 × C, C_r is set to 0.9 × C, where C is the maximum number of data blocks the cache can store. In this embodiment the initial value of C_r is preferably set to 0.1 × C, although other values may be used.
In this embodiment, the above cache management method is applied to control the execution of read/write requests: by controlling the caching of the request data, the data blocks with higher access frequency are screened out and kept in the cache, improving the cache hit rate while avoiding unnecessary cache replacement operations. The method specifically comprises: (1) establishing and initializing an LRU queue as the cache screening queue; (2) adaptively adjusting the capacity of the cache screening queue; (3) screening the frequently accessed data blocks to be written into the cache; and (4) cache management and redirection of the read/write requests. The steps are as follows:
(1) Establishing and initializing an LRU queue as the cache screening queue
The cache screening queue is established in memory as an LRU (least recently used) queue with capacity C_r. In this embodiment the initial value of C_r is preferably set to 0.1 × C, where C is the maximum number of data blocks the cache can store. The capacity C_r can be adjusted adaptively as the application workload changes.
The cache screening queue records the disk block numbers of data that were recently accessed but missed the cache; it does not store actual disk data. When the number of block numbers recorded in the queue exceeds its capacity C_r, the queue keeps only the C_r most recently accessed block numbers according to the LRU (least recently used) policy.
The head of the cache screening queue holds the least recently accessed block number, and the tail holds the most recently accessed block number. Therefore, each time a new block number is written to the cache screening queue, it is inserted at the tail; the queue is then checked against its capacity C_r, and if the capacity is exceeded, the block number at the head of the queue is deleted.
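This tail-insert-and-trim operation can be sketched as follows (illustrative only; the function name record_missed_block and the use of an OrderedDict for the screening queue are assumptions made for the example):

```python
from collections import OrderedDict

def record_missed_block(screen_queue: OrderedDict, block_no: int, screen_capacity: float) -> None:
    """Append a newly missed block number at the tail of the screening queue and
    trim the head if the capacity C_r is exceeded (illustrative sketch)."""
    screen_queue[block_no] = None              # tail of the OrderedDict = most recently missed block number
    if len(screen_queue) > screen_capacity:
        screen_queue.popitem(last=False)       # delete the head: least recently missed block number
```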
(2) Adaptive adjustment of the capacity of the cache screening queue
The capacity of the cache screening queue directly affects how well it screens data blocks. If the capacity is too small, many frequently accessed data blocks will not be identified by the screening; if it is too large, many data blocks that are not frequently accessed will pass the screening. The embodiment of the invention adopts a load-adaptive approach that automatically adjusts the capacity of the cache screening queue to an appropriate value according to the characteristics of the application workload, thereby controlling the strictness of the screening and reducing unnecessary replacement operations as much as possible while guaranteeing the hit rate.
The adaptive adjustment method for the capacity of the cache screening queue is as follows:
(2.1) each time a read/write request is received, judging whether the request data hits the cache; if it hits, going to step (2.2); otherwise, going to step (2.3);
(2.2) automatically adjusting the capacity C_r of the cache screening queue to C_r - C/(C - C_r); if the adjusted C_r is less than 0.1 × C, setting C_r = 0.1 × C, where C is the maximum number of data blocks the cache can store;
(2.3) automatically adjusting the capacity C_r of the cache screening queue to C_r + C/C_r; if the adjusted C_r is greater than 0.9 × C, setting C_r = 0.9 × C, where C is the maximum number of data blocks the cache can store.
(3) Screening the frequently accessed data blocks and writing them into the cache
In real application workloads, the frequently accessed hot data blocks account for only a small fraction; most other data blocks are accessed few times or even only once. Writing these rarely used blocks into the cache does not improve the cache hit rate but wastes the limited cache space, and when the cache device is an SSD it also shortens the device's service life. Screening out the frequently accessed blocks first and then writing only those into the limited-capacity cache therefore improves the cache hit rate, reduces cache replacement operations, and prolongs the service life of the SSD cache.
In general, a data block that has been accessed twice or more within a short period is likely to be accessed frequently in the near future; such blocks are potential hot data blocks. After the cache space fills up, a cache replacement is performed only when such a new hot data block has been found, and that block is then written into the cache.
As shown in Fig. 3, the specific screening method is as follows:
(3.1) if the request data hits the cache, no screening is needed;
(3.2) if the request data misses the cache but the cache still has blank space, allocating a blank cache block to store the request data and moving the requested data block to the tail of the buffer queue (the buffer queue is generally also an LRU queue, whose tail represents the most recently accessed data block).
The buffer queue records the block numbers of the data blocks stored in the cache; the block number at its tail corresponds to the most recently accessed data block. When original data in the cache must be replaced with new data, the buffer queue determines which data block to delete so as to free space for the new block.
(3.3) if the request data misses the cache, the cache is full, and the block number of the request data is not recorded in the cache screening queue, writing the block number to the tail of the cache screening queue.
If at this point the length of the cache screening queue exceeds its adaptively adjusted capacity C_r, the block number at the head of the cache screening queue is deleted.
(3.4) if the request data misses the cache, the cache is full, and the block number of the request data is already recorded in the cache screening queue, this shows that the data block has been accessed for the second time within a recent period; the block is regarded as a frequently accessed data block, its block number is deleted from the cache screening queue, and the request data is written into the cache.
Before the request data is written into the cache, the data block corresponding to the head of the buffer queue is evicted first, the freed cache block is allocated to the request data, and the requested data block is then moved to the tail of the buffer queue.
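The replacement step just described can be sketched as follows (illustrative only; the helper name admit_block and the OrderedDict representation of the buffer queue are assumptions, not part of the patent text):

```python
from collections import OrderedDict

def admit_block(buffer_queue: OrderedDict, block_no: int, data, capacity: int) -> None:
    """Write an admitted hot block into the cache (illustrative sketch).

    If no blank block remains, the block at the head of the buffer queue
    (least recently used) is evicted first; the freed slot is then given to
    the request and the new block number becomes the tail of the queue.
    """
    if len(buffer_queue) >= capacity:
        buffer_queue.popitem(last=False)   # evict the head of the buffer queue
    buffer_queue[block_no] = data          # new block is now the tail (most recently accessed)
```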
(4) Redirecting and executing the read/write requests:
(4.1) for request data that is not written into the cache, redirecting the read/write request directly to the corresponding address on the disk and executing it;
(4.2) for request data that is already in the cache, or that is written into the cache after screening, moving the corresponding data block in the cache to the tail of the buffer queue, then redirecting the read/write request to the corresponding address in the cache and executing it.
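A sketch of the redirection decision follows (illustrative only; BLOCK_SIZE, the slot-index mapping and the returned device label are assumptions, since the patent only states that the request is redirected to the corresponding cache or disk address):

```python
from collections import OrderedDict

BLOCK_SIZE = 4096  # assumed block size in bytes; the patent does not fix a value

def redirect_request(buffer_queue: OrderedDict, block_no: int, offset_in_block: int):
    """Return ("cache", byte_offset) or ("disk", byte_offset) for a read/write request,
    assuming buffer_queue maps disk block numbers to cache slot indices."""
    if block_no in buffer_queue:
        buffer_queue.move_to_end(block_no)                 # hit: move the block to the tail of the buffer queue
        slot = buffer_queue[block_no]
        return "cache", slot * BLOCK_SIZE + offset_in_block
    return "disk", block_no * BLOCK_SIZE + offset_in_block
```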
Those skilled in the art will readily understand that the above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A cache management method for controlling request data that misses the cache, screening the request data written into the cache and reducing cache replacement operations, characterized in that the method specifically comprises:
if the cache is not full, writing the request data into a blank area of the cache;
if the cache is full and the block number of the request data is not recorded in a cache screening queue, not writing the request data into the cache, but writing its block number to the tail of the cache screening queue;
if the cache is full and the block number of the request data is already recorded in the cache screening queue, deleting that block number from the cache screening queue and writing the request data into the cache;
wherein the cache screening queue is an LRU queue used to record the disk block numbers of data that were recently accessed but missed the cache.
2. The cache management method according to claim 1, characterized in that the capacity C_r of the cache screening queue can be dynamically adjusted according to the read/write requests, i.e. adjusted to C_r - C/(C - C_r) or C_r + C/C_r, where C is the maximum number of data blocks the cache can store.
3. The cache management method according to claim 1 or 2, characterized in that when a read/write request hits the cache, the capacity C_r of the cache screening queue is adjusted to C_r - C/(C - C_r), and when a read/write request misses the cache, C_r is adjusted to C_r + C/C_r, where C is the maximum number of data blocks the cache can store.
4. The cache management method according to claim 2 or 3, characterized in that, after adjustment of the capacity C_r of the cache screening queue, if C_r is less than 0.1 × C, the capacity is reset to C_r = 0.1 × C; if C_r is greater than 0.9 × C, the capacity is set to C_r = 0.9 × C.
5. The cache management method according to any one of claims 2-4, characterized in that the initial value of the capacity C_r of the cache screening queue is preferably set to 0.1 × C.
6. The cache management method according to any one of claims 1-5, characterized in that, after the block number of the request data is recorded in the cache screening queue, if the actual length of the cache screening queue exceeds its capacity, the block number at the head of the cache screening queue is deleted.
7. An execution control method for read/write requests, which applies the cache management method of any one of claims 1-6 to control the caching of the read/write request data, characterized in that the method specifically comprises:
(1) establishing an LRU queue as a cache screening queue, the cache screening queue being used to record the disk block numbers of data that were recently accessed but missed the cache;
(2) adaptively adjusting the capacity of the cache screening queue according to the read/write requests;
(3) selecting, according to the read/write requests, the data to be written into the cache, comprising:
if the request data hits the cache, performing no screening and going to step (4); if the request data misses the cache, performing the cache management method of any one of claims 1-6 to carry out cache control;
(4) redirecting and executing the read/write requests:
(4.1) for request data that is not written into the cache, redirecting the read/write request directly to the corresponding address on the disk and executing the read/write operation;
(4.2) for request data that is already in the cache, or that is written into the cache after screening, moving the corresponding data block number in the cache to the tail of the buffer queue, then redirecting the read/write request to the corresponding address in the cache and executing the read/write operation.
8. The execution control method according to claim 7, characterized in that the adaptive adjustment method for the capacity of the cache screening queue is as follows:
(2.1) judging whether the read/write request data hits the cache; if it hits, going to step (2.2); otherwise, going to step (2.3);
(2.2) automatically adjusting the capacity C_r of the cache screening queue to C_r - C/(C - C_r); if the adjusted C_r is less than 0.1 × C, setting the capacity of the cache screening queue to C_r = 0.1 × C, where C is the maximum number of data blocks the cache can store;
(2.3) automatically adjusting the capacity C_r of the cache screening queue to C_r + C/C_r; if the adjusted C_r is greater than 0.9 × C, setting the capacity of the cache screening queue to C_r = 0.9 × C.
9. The execution control method according to claim 7 or 8, characterized in that, when the request data is written into the cache, if no blank block remains in the cache, the data block corresponding to the block number at the head of the buffer queue is evicted first, the freed cache block is allocated to the request data, and the requested data block number is then moved to the tail of the buffer queue.
10. The cache management method according to any one of claims 1-9, characterized in that the cache device is flash memory.
CN201310138419.9A 2013-04-19 2013-04-19 A kind of buffer memory management method and application thereof Active CN103257935B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310138419.9A CN103257935B (en) 2013-04-19 2013-04-19 A kind of buffer memory management method and application thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310138419.9A CN103257935B (en) 2013-04-19 2013-04-19 A kind of buffer memory management method and application thereof

Publications (2)

Publication Number Publication Date
CN103257935A true CN103257935A (en) 2013-08-21
CN103257935B CN103257935B (en) 2016-07-13

Family

ID=48961865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310138419.9A Active CN103257935B (en) 2013-04-19 2013-04-19 A kind of buffer memory management method and application thereof

Country Status (1)

Country Link
CN (1) CN103257935B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577348A (en) * 2013-10-09 2014-02-12 广东欧珀移动通信有限公司 Method and mobile device for automatically counting application cache size and reminding user
CN103744624A (en) * 2014-01-10 2014-04-23 浪潮电子信息产业股份有限公司 System architecture for realizing selective upgrade of data cached in SSD (Solid State Disk) of storage system
CN104123238A (en) * 2014-06-30 2014-10-29 海视云(北京)科技有限公司 Data storage method and device
CN104360966A (en) * 2014-11-21 2015-02-18 浪潮(北京)电子信息产业有限公司 Method and device for carrying out IO (input/output) operation on block data
CN105095495A (en) * 2015-08-21 2015-11-25 浪潮(北京)电子信息产业有限公司 Distributed file system cache management method and system
CN106610793A (en) * 2016-11-11 2017-05-03 深圳市深信服电子科技有限公司 Method and device for managing cache data of hyper-converged system
CN106844740A (en) * 2017-02-14 2017-06-13 华南师范大学 Data pre-head method based on memory object caching system
CN107463509A (en) * 2016-06-05 2017-12-12 华为技术有限公司 Buffer memory management method, cache controller and computer system
CN107911799A (en) * 2017-05-18 2018-04-13 北京聚通达科技股份有限公司 A kind of method using Intelligent routing
CN109032969A (en) * 2018-06-16 2018-12-18 温州职业技术学院 A kind of caching method of the LRU-K algorithm based on K value dynamic monitoring
CN109144431A (en) * 2018-09-30 2019-01-04 华中科技大学 Caching method, device, equipment and the storage medium of data block
CN109164976A (en) * 2016-12-21 2019-01-08 北京忆恒创源科技有限公司 Optimize storage device performance using write buffer
CN109857680A (en) * 2018-11-21 2019-06-07 杭州电子科技大学 A kind of LRU flash cache management method based on dynamic page weight
CN110709810A (en) * 2017-10-09 2020-01-17 华为技术有限公司 Junk data cleaning method and equipment
CN110825585A (en) * 2019-10-30 2020-02-21 许继集团有限公司 Alarm event processing method and system based on micro-service
CN114327297A (en) * 2021-12-28 2022-04-12 华中科技大学 Data request processing method, equipment and system for interleaved recording disk


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050055511A1 (en) * 2003-08-05 2005-03-10 Ivan Schreter Systems and methods for data caching
CN102637147A (en) * 2011-11-14 2012-08-15 天津神舟通用数据技术有限公司 Storage system using solid state disk as computer write cache and corresponding management scheduling method
CN102945207A (en) * 2012-10-26 2013-02-27 浪潮(北京)电子信息产业有限公司 Cache management method and system for block-level data
CN103049394A (en) * 2012-11-30 2013-04-17 记忆科技(深圳)有限公司 Method and system for data caching of solid state disk

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈俭喜 (Chen Jianxi): "基于NAND闪存的固态存储系统设计及优化" [Design and Optimization of a NAND-Flash-Based Solid-State Storage System], 《万方学位论文数据库》 (Wanfang Dissertation Database) *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577348A (en) * 2013-10-09 2014-02-12 广东欧珀移动通信有限公司 Method and mobile device for automatically counting application cache size and reminding user
CN103744624B (en) * 2014-01-10 2017-09-22 浪潮电子信息产业股份有限公司 A kind of system architecture for realizing the data cached selectivity upgradings of storage system SSD
CN103744624A (en) * 2014-01-10 2014-04-23 浪潮电子信息产业股份有限公司 System architecture for realizing selective upgrade of data cached in SSD (Solid State Disk) of storage system
CN104123238A (en) * 2014-06-30 2014-10-29 海视云(北京)科技有限公司 Data storage method and device
CN104360966B (en) * 2014-11-21 2017-12-12 浪潮(北京)电子信息产业有限公司 To block number according to the method and apparatus for carrying out input-output operation
CN104360966A (en) * 2014-11-21 2015-02-18 浪潮(北京)电子信息产业有限公司 Method and device for carrying out IO (input/output) operation on block data
CN105095495A (en) * 2015-08-21 2015-11-25 浪潮(北京)电子信息产业有限公司 Distributed file system cache management method and system
CN105095495B (en) * 2015-08-21 2019-01-25 浪潮(北京)电子信息产业有限公司 A kind of distributed file system buffer memory management method and system
CN107463509B (en) * 2016-06-05 2020-12-15 华为技术有限公司 Cache management method, cache controller and computer system
CN107463509A (en) * 2016-06-05 2017-12-12 华为技术有限公司 Buffer memory management method, cache controller and computer system
WO2017211247A1 (en) * 2016-06-05 2017-12-14 华为技术有限公司 Cache management method, cache controller, and computer system
CN106610793A (en) * 2016-11-11 2017-05-03 深圳市深信服电子科技有限公司 Method and device for managing cache data of hyper-converged system
CN106610793B (en) * 2016-11-11 2019-09-17 深信服科技股份有限公司 The data cached management method and device of super emerging system
CN109164976A (en) * 2016-12-21 2019-01-08 北京忆恒创源科技有限公司 Optimize storage device performance using write buffer
CN109164976B (en) * 2016-12-21 2021-12-31 北京忆恒创源科技股份有限公司 Optimizing storage device performance using write caching
CN106844740A (en) * 2017-02-14 2017-06-13 华南师范大学 Data pre-head method based on memory object caching system
CN107911799A (en) * 2017-05-18 2018-04-13 北京聚通达科技股份有限公司 A kind of method using Intelligent routing
CN107911799B (en) * 2017-05-18 2021-03-23 北京聚通达科技股份有限公司 Method for utilizing intelligent route
CN110709810A (en) * 2017-10-09 2020-01-17 华为技术有限公司 Junk data cleaning method and equipment
US11126546B2 (en) 2017-10-09 2021-09-21 Huawei Technologies Co., Ltd. Garbage data scrubbing method, and device
US11704240B2 (en) 2017-10-09 2023-07-18 Huawei Technologies Co., Ltd. Garbage data scrubbing method, and device
CN109032969A (en) * 2018-06-16 2018-12-18 温州职业技术学院 A kind of caching method of the LRU-K algorithm based on K value dynamic monitoring
CN109144431A (en) * 2018-09-30 2019-01-04 华中科技大学 Caching method, device, equipment and the storage medium of data block
CN109144431B (en) * 2018-09-30 2021-11-02 华中科技大学 Data block caching method, device, equipment and storage medium
CN109857680A (en) * 2018-11-21 2019-06-07 杭州电子科技大学 A kind of LRU flash cache management method based on dynamic page weight
CN110825585A (en) * 2019-10-30 2020-02-21 许继集团有限公司 Alarm event processing method and system based on micro-service
CN114327297A (en) * 2021-12-28 2022-04-12 华中科技大学 Data request processing method, equipment and system for interleaved recording disk
CN114327297B (en) * 2021-12-28 2024-03-19 华中科技大学 Data request processing method, equipment and system of staggered recording disk

Also Published As

Publication number Publication date
CN103257935B (en) 2016-07-13

Similar Documents

Publication Publication Date Title
CN103257935A (en) Cache management method and application thereof
US20240126433A1 (en) Method of controlling nonvolatile semiconductor memory
KR101894625B1 (en) Priority-based garbage collection for data storage systems
US11030107B2 (en) Storage class memory queue depth threshold adjustment
CN107622022B (en) Cache over-provisioning in a data storage device
US8225044B2 (en) Storage system which utilizes two kinds of memory devices as its cache memory and method of controlling the storage system
KR101563875B1 (en) Method and system for balancing host write operations and cache flushing
CN108762664B (en) Solid state disk page-level cache region management method
US20160217071A1 (en) Cache Allocation in a Computerized System
CN108845957B (en) Replacement and write-back self-adaptive buffer area management method
US20100325352A1 (en) Hierarchically structured mass storage device and method
CN104834607A (en) Method for improving distributed cache hit rate and reducing solid state disk wear
US20100293337A1 (en) Systems and methods of tiered caching
CN104794064A (en) Cache management method based on region heat degree
US20090094391A1 (en) Storage device including write buffer and method for controlling the same
US7818505B2 (en) Method and apparatus for managing a cache memory in a mass-storage system
CN103136121A (en) Cache management method for solid-state disc
CN102760101A (en) SSD-based (Solid State Disk) cache management method and system
CN105389135B (en) A kind of solid-state disk inner buffer management method
US11645006B2 (en) Read performance of memory devices
CN108762671A (en) Hybrid memory system based on PCM and DRAM and management method thereof
CN103514110A (en) Cache management method and device for nonvolatile memory device
CN107832007A (en) A kind of method of raising SSD combination properties
CN110321081B (en) Flash memory read caching method and system
US20090327580A1 (en) Optimization of non-volatile solid-state memory by moving data based on data generation and memory wear

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant