CN103257935B - Cache management method and application thereof - Google Patents

Cache management method and application thereof

Info

Publication number
CN103257935B
CN103257935B CN201310138419.9A CN201310138419A
Authority
CN
China
Prior art keywords
cache
queue
data block
screening
write
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310138419.9A
Other languages
Chinese (zh)
Other versions
CN103257935A (en)
Inventor
陈俭喜
刘景宁
冯丹
黄赛
王璞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201310138419.9A priority Critical patent/CN103257935B/en
Publication of CN103257935A publication Critical patent/CN103257935A/en
Application granted granted Critical
Publication of CN103257935B publication Critical patent/CN103257935B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F 12/0868 Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
    • G06F 12/0871 Allocation or management of cache space
    • G06F 12/12 Replacement control
    • G06F 12/121 Replacement control using replacement algorithms
    • G06F 12/123 Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/10 Providing a specific technical effect
    • G06F 2212/1016 Performance improvement
    • G06F 2212/1021 Hit rate improvement
    • G06F 2212/1032 Reliability improvement, data loss prevention, degraded operation etc.
    • G06F 2212/1036 Life time enhancement
    • G06F 2212/21 Employing a record carrier using a specific recording technology
    • G06F 2212/214 Solid state disk

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a cache management method, comprising: if the cache has free space, writing the requested data into the free space; if the cache is full and the requested data block number is not recorded in the cache screening queue, writing the block number to the tail of the cache screening queue; if the cache is full and the requested data block number is already recorded in the cache screening queue, deleting the block number from the queue and writing the requested data into the cache. The cache screening queue is an LRU queue that records the disk block numbers of recently accessed data that missed the cache. The invention also discloses the application of the method in read/write request execution control. The method improves the cache hit rate and extends the service life of the cache device; for application workloads with different characteristics, it reduces write operations to the cache device while maintaining the hit rate, without requiring manual parameter tuning.

Description

Cache management method and application thereof
Technical field
The invention belongs to the field of computer data storage, and in particular relates to a cache management method.
Background art
A cache is a memory capable of high-speed data exchange. In a disk storage system, because the disk must perform mechanical operations such as seeking and positioning when reading and writing data, its response is far slower than the processing speed of the storage system. The role of the cache in a disk storage system is to store the hot data that is frequently accessed on disk, so that when the storage system needs this data it can be read directly from the cache, providing fast data responses and improving the performance of the whole disk storage system. However, cache devices capable of high-speed data exchange are generally expensive, so cache capacity is limited. Cache management aims to keep the frequently accessed data within this limited capacity and to raise the cache hit rate (the proportion of requested data that can be read from the cache) as far as possible, thereby improving the performance of the whole disk storage system.
Compared with a traditional disk, a solid-state drive (SSD) has no mechanical parts and therefore no time-consuming seek and positioning operations; it provides higher random access performance and also has advantages such as low power consumption, low noise and good shock resistance. Compared with RAM (random access memory), an SSD offers far more capacity under the same cost budget and retains its data across power loss. For these reasons, SSDs are used as disk caches in many storage products, trading a relatively low cost for a marked improvement in storage system performance.
Nevertheless, SSDs have defects of their own, the foremost being the erase problem. An SSD is a storage device whose medium is flash memory. Flash memory is a semiconductor storage device whose basic cell is a floating-gate transistor; by programming, each cell can be placed in several different states and thus represent one or more bits of data. According to the gate structure, flash memory is divided into NAND and NOR types. NAND flash has higher storage density and is commonly used for mass data storage, for example in SSDs. A NAND flash chip is usually organized into blocks, and a block generally contains 64 to 256 pages. According to storage density, NAND flash is further divided into SLC and MLC. In SLC flash each cell stores only one bit of data, while in MLC flash each cell can store multiple bits, giving higher density; many large-capacity SSDs use MLC flash as their medium. The physical characteristics of flash memory dictate that the stored data can only be updated after an erase operation has been performed, after which new data can be written. NAND flash is erased in units of blocks, and one erase operation resets all bits in the block. The number of erase cycles of flash memory is limited: SLC flash can be erased about 100,000 times, while MLC flash can only be erased about 10,000 times or fewer. Frequent write operations therefore shorten its service life, and under intensive random writes the SSD's own logging and garbage collection mechanisms add further writes and erases, reducing the service life even more.
Traditional cache management methods take improving the cache hit rate as their only objective. Based on the principle of temporal locality (a data block accessed recently is likely to be accessed again in the near future), these methods usually write every accessed data block into the cache for future accesses. They are not designed for SSDs and do not consider the SSD's limited write endurance. When an SSD is used as the cache device, cache replacement brings extra write operations; for read-dominated workloads in particular, most writes to the cache device come from replacement operations. On the other hand, in real application workloads the access frequencies of data blocks are unevenly distributed: the frequently accessed blocks account for only a small fraction, while the access frequency of most other blocks is low. Caching these blocks on the SSD cannot noticeably improve performance; on the contrary, it pointlessly consumes the SSD's write endurance.
Summary of the invention
In view of the above defects or improvement needs of the prior art, the present invention provides a cache management method and its application in read/write request execution control. Its purpose is to screen data blocks so that only the frequently accessed hot data blocks are cached, reducing replacement operations after the cache becomes full. This solves the problem that traditional cache management methods rapidly wear out an SSD cache, and at the same time the problem that their hit rate remains limited because many infrequently accessed data blocks are written into the cache.
According to one aspect of the present invention, a cache management method is provided for controlling request data that misses the cache, so as to screen the data written into the cache and reduce cache replacement operations. The method specifically includes:
if the cache is not full, writing the requested data into a free area of the cache;
if the cache is full and the requested data block number is not recorded in the cache screening queue, not writing the requested data into the cache, and writing the requested data block number to the tail of the cache screening queue;
if the cache is full and the requested data block number is already recorded in the cache screening queue, deleting the data block number from the cache screening queue and writing the requested data into the cache;
wherein the cache screening queue is an LRU queue that records the disk block numbers of recently accessed data that missed the cache. An illustrative sketch of this screening rule is given below.
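By way of illustration only, the following Python sketch shows one possible in-memory realization of the screening rule above. It is not taken from the patent; the data structures (an OrderedDict for the cached blocks and another for the screening queue) and all names are assumptions made for the example.

```python
from collections import OrderedDict

class ScreeningCache:
    """Minimal sketch of the screening rule for requests that miss the cache."""

    def __init__(self, capacity_c):
        self.C = capacity_c                 # maximum number of blocks the cache can hold
        self.cr = 0.1 * capacity_c          # screening-queue capacity Cr (adapted elsewhere)
        self.cache = OrderedDict()          # block number -> data, LRU order (tail = most recent)
        self.screen = OrderedDict()         # block numbers of recent misses (the screening queue)

    def on_miss(self, block_no, data):
        if len(self.cache) < self.C:        # cache not full: admit into a free slot
            self.cache[block_no] = data
            return "admitted"
        if block_no not in self.screen:     # first recent miss: record the block number only
            self.screen[block_no] = None
            if len(self.screen) > self.cr:  # keep at most Cr entries (LRU)
                self.screen.popitem(last=False)
            return "recorded"
        del self.screen[block_no]           # second recent miss: treat as hot data
        self.cache.popitem(last=False)      # evict the least recently used cached block
        self.cache[block_no] = data         # admit the requested block
        return "admitted"
```

In this sketch the replacement simply evicts the least recently used cached block; the adaptive adjustment of Cr and the handling of cache hits are described below.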
Preferably, the capacity Cr of the cache screening queue is adjusted adaptively according to the read/write requests.
Preferably, the capacity Cr of the cache screening queue is dynamically adjusted to Cr - C/(C - Cr) or Cr + C/Cr, where C is the maximum number of data blocks the cache can hold.
Preferably, when the requested data hits the cache, the cache screening queue capacity Cr is adaptively adjusted to Cr - C/(C - Cr); when the requested data misses the cache, Cr is adaptively adjusted to Cr + C/Cr.
Preferably, if the adjusted capacity Cr of the cache screening queue is less than 0.1 × C, Cr is set to 0.1 × C; if it is greater than 0.9 × C, Cr is set to 0.9 × C, where C is the maximum number of data blocks the cache can hold.
Preferably, the initial value of the cache screening queue capacity Cr is set to 0.1 × C, where C is the maximum number of data blocks the cache can hold.
According to another aspect of the present invention, an execution control method for read/write requests is provided, which applies the above cache management method to perform cache control on read/write request data. The method specifically includes:
(1) establishing an LRU queue as the cache screening queue, which records the disk block numbers of recently accessed data that missed the cache;
(2) adaptively adjusting the capacity of the cache screening queue according to the read/write requests;
(3) selecting the data to be written into the cache according to the read/write requests, including:
if the requested data hits the cache, performing no screening; if the requested data misses the cache, performing the cache management method described above to carry out cache control;
(4) redirecting the read/write request and executing it (a simplified end-to-end sketch of steps (1)-(4) is given after this list):
(4.1) for requested data that has not been written into the cache, redirecting the read/write request directly to the corresponding address on disk and performing the read/write operation;
(4.2) for requested data that is already in the cache, or that is written into the cache after screening, moving the corresponding data block to the tail of the cache queue, then redirecting the read/write request to the corresponding address in the cache and performing the read/write operation.
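The sketch below strings steps (1)-(4) together for a single request, inlining the screening rule shown earlier. It is a simplified model, not the patented implementation; the read_from_cache/read_from_disk stubs and the return convention are assumptions for illustration.

```python
from collections import OrderedDict

# Hypothetical stand-ins for the real I/O paths; an actual system would issue
# block reads/writes against the SSD cache or the disk here.
def read_from_cache(block_no): return ("cache", block_no)
def read_from_disk(block_no):  return ("disk", block_no)

def handle_request(block_no, cache, screen, C, cr):
    """One request through steps (2)-(4): adapt Cr, screen on a miss, redirect.
    cache and screen are OrderedDicts (tail = most recently used)."""
    hit = block_no in cache

    # step (2): adapt the screening-queue capacity on every request
    cr = cr - C / (C - cr) if hit else cr + C / cr
    cr = min(max(cr, 0.1 * C), 0.9 * C)

    if hit:                                   # step (3): no screening on a hit
        cache.move_to_end(block_no)           # step (4.2): refresh the LRU position
        return read_from_cache(block_no) + (cr,)

    if len(cache) < C:                        # free space: admit directly
        cache[block_no] = None
    elif block_no in screen:                  # second recent miss: admit as hot data
        del screen[block_no]
        cache.popitem(last=False)             # evict the LRU cached block
        cache[block_no] = None
    else:                                     # first recent miss: record only
        screen[block_no] = None
        if len(screen) > cr:
            screen.popitem(last=False)
        return read_from_disk(block_no) + (cr,)   # step (4.1): serve from disk

    return read_from_cache(block_no) + (cr,)      # step (4.2): serve from the cache

# Hypothetical trace with a 4-block cache:
cache, screen, C, cr = OrderedDict(), OrderedDict(), 4, 0.4
for b in (1, 2, 3, 4, 5, 5, 1):
    where, _, cr = handle_request(b, cache, screen, C, cr)
```

In this trace block 5 is admitted only on its second miss, while a block seen once after the cache is full is served straight from disk.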
Preferably, the LRU queue established in main memory as the cache screening queue is initialized with capacity Cr, whose initial value is preferably set to 0.1 × C, where C is the maximum number of data blocks the cache can hold. The cache screening queue is an LRU (least recently used) queue located in main memory; its initial capacity may also be set to other values, and the capacity Cr is adjusted adaptively as the application workload changes.
Preferably, the adaptive adjustment of the cache screening queue capacity is performed as follows (a compact sketch of these steps is given after step (2.3)):
(2.1) judging whether the requested data hits the cache; if it hits, proceeding to step (2.2); otherwise, proceeding to step (2.3);
(2.2) automatically adjusting the cache screening queue capacity Cr to Cr - C/(C - Cr); if the adjusted Cr is less than 0.1 × C, setting Cr = 0.1 × C, where C is the maximum number of data blocks the cache can hold;
(2.3) automatically adjusting the cache screening queue capacity Cr to Cr + C/Cr; if the adjusted Cr is greater than 0.9 × C, setting Cr = 0.9 × C, where C is the maximum number of data blocks the cache can hold.
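A compact sketch of steps (2.1)-(2.3), assuming floating-point capacity values and the 0.1 × C / 0.9 × C bounds stated above (the function name is illustrative, not from the patent):

```python
def adjust_screening_capacity(cr, C, hit):
    """Adapt the screening-queue capacity Cr once per request.
    hit  -> shrink: Cr - C / (C - Cr)   (step 2.2)
    miss -> grow:   Cr + C / Cr         (step 2.3)
    The result is clamped to the range [0.1 * C, 0.9 * C]."""
    cr = cr - C / (C - cr) if hit else cr + C / cr
    return min(max(cr, 0.1 * C), 0.9 * C)
```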
Preferably, when the requested data is written into the cache and the cache has no free block, the data block corresponding to the head of the cache queue is evicted first, the freed cache block is allocated to the requested data, and the requested data block is then moved to the tail of the cache queue (sketched below).
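The replacement step can be sketched as follows, assuming the cache queue is kept as an OrderedDict whose head is the least recently used block; this is an implementation choice for the example, not mandated by the patent.

```python
from collections import OrderedDict

def write_with_replacement(cache_queue, capacity, block_no, data):
    """Write block_no into the cache, evicting the LRU head block first if needed."""
    if len(cache_queue) >= capacity:
        victim = cache_queue.popitem(last=False)   # (block number, data) of the evicted head
        # a write-back cache would flush the victim to disk here if it is dirty
    cache_queue[block_no] = data                   # allocate the freed slot to the request
    cache_queue.move_to_end(block_no)              # keep it at the tail (most recently used)
```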
Preferably, the cache device is flash memory.
In the present invention, the cache screening queue records the disk block numbers of recently accessed data that missed the cache; it does not record the actual disk data. When the number of block numbers recorded in the cache screening queue exceeds its capacity Cr, the queue retains only the Cr most recently accessed block numbers according to the LRU (least recently used) policy. The head of the cache screening queue records the least recently accessed block number, and the tail records the most recently accessed block number. Therefore, each time a new block number is written into the cache screening queue, it is inserted at the tail, and the queue is then checked against its capacity Cr; if the capacity is exceeded, the block number at the head of the cache screening queue is deleted (see the sketch below).
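A sketch of this head/tail bookkeeping, again assuming an OrderedDict keyed by block number so that insertion order matches recency order (names are illustrative):

```python
from collections import OrderedDict

def record_missed_block(screen_queue, cr, block_no):
    """Append a newly missed block number at the tail of the screening queue,
    then trim the head while the queue exceeds its current capacity Cr."""
    screen_queue[block_no] = None           # tail = most recently accessed block number
    screen_queue.move_to_end(block_no)      # in case the number was already present
    while len(screen_queue) > cr:           # over capacity: drop the least recent head
        screen_queue.popitem(last=False)
```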
In the present invention, the capacity of the cache screening queue directly affects its screening effect. If the capacity is too small, many frequently accessed data blocks will fail to be selected; if it is too large, many infrequently accessed data blocks will be selected. The cache screening queue of the present invention adjusts its capacity towards an optimal value according to the application workload, thereby controlling the strictness of the screening and reducing unnecessary replacement operations as far as possible while maintaining the hit rate.
In real application workloads, the frequently accessed hot data blocks account for only a small fraction; most other data blocks are accessed rarely, some only once. Writing these rarely used blocks into the cache does not improve the cache hit rate; on the contrary, it wastes the limited cache space and, when the cache device is an SSD, also shortens its service life. Screening out the frequently accessed data blocks first and only then writing them into the limited-capacity cache therefore improves the cache hit rate, reduces cache replacement operations, and extends the service life of the SSD cache.
The cache management method proposed by the present invention and its application use the cache screening queue to select the more frequently accessed data blocks and keep them in the cache, improving the cache hit rate while avoiding unnecessary cache replacement operations, thereby reducing the write load on the SSD used as the cache and extending its service life.
Brief description of the drawings
Fig. 1 is a schematic diagram of a system to which the method of an embodiment of the present invention is applied;
Fig. 2 is a flow chart of the method of an embodiment of the present invention;
Fig. 3 is a flow chart of the data block screening process of an embodiment of the present invention.
Detailed description of the invention
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
The cache management method of the present invention can be applied to a storage system that uses an SSD as its cache device, as shown in Fig. 1. The cache management method proposed by the present invention is designed primarily for disk storage systems whose cache medium is flash memory (such as an SSD), but its application is not limited to such systems.
As shown in Fig. 2, the cache management method of this embodiment controls request data that misses the cache, so as to screen the data written into the cache and reduce cache replacement operations. The method specifically includes:
if the cache is not full, writing the requested data into a free area of the cache;
if the cache is full and the requested data block number is not recorded in the cache screening queue, not writing the requested data into the cache, and writing the requested data block number to the tail of the cache screening queue; here the cache screening queue is an LRU queue that records the disk block numbers of recently accessed data that missed the cache;
if the cache is full and the requested data block number is already recorded in the cache screening queue, deleting the data block number from the cache screening queue and writing the requested data into the cache.
The capacity Cr of the cache screening queue is adjusted adaptively according to the read/write requests, namely to Cr - C/(C - Cr) or Cr + C/Cr: when the requested data hits the cache, Cr is adjusted to Cr - C/(C - Cr); when the requested data misses the cache, Cr is adjusted to Cr + C/Cr. If the adjusted Cr is less than 0.1 × C, Cr is set to 0.1 × C; if it is greater than 0.9 × C, Cr is set to 0.9 × C, where C is the maximum number of data blocks the cache can hold. In this embodiment the initial value of Cr is preferably set to 0.1 × C, though other values may also be used.
This embodiment applies the above cache management method to an execution control method for read/write requests. By performing cache control on the read/write request data, it selects the more frequently accessed data blocks and keeps them in the cache, improving the cache hit rate while avoiding unnecessary cache replacement operations. The method specifically includes: 1. establishing an LRU queue as the cache screening queue and initializing it; 2. adaptively adjusting the capacity of the cache screening queue; 3. screening the frequently accessed data blocks and writing them into the cache; 4. cache management and read/write request redirection. The steps are as follows:
(1) Establishing an LRU queue as the cache screening queue and initializing it
The cache screening queue is established in main memory. It is an LRU (least recently used) queue located in main memory with capacity Cr; in this embodiment the initial value of Cr is preferably set to 0.1 × C, where C is the maximum number of data blocks the cache can hold. The capacity Cr is adjusted adaptively as the application workload changes.
The cache screening queue records the disk block numbers of recently accessed data that missed the cache; it does not record the actual disk data. When the number of block numbers recorded exceeds the capacity Cr, the queue retains only the Cr most recently accessed block numbers according to the LRU (least recently used) policy.
The head of the cache screening queue records the least recently accessed block number, and the tail records the most recently accessed block number. Therefore, each time a new block number is written into the cache screening queue, it is inserted at the tail, and the queue is then checked against its capacity Cr; if the capacity is exceeded, the block number at the head of the cache screening queue is deleted.
(2) Adaptive adjustment of the cache screening queue capacity
The capacity of the cache screening queue directly affects its screening effect. If the capacity is too small, many frequently accessed data blocks will fail to be selected; if it is too large, many infrequently accessed data blocks will be selected. The embodiment of the present invention adopts a load-adaptive approach in which the capacity of the cache screening queue is automatically adjusted towards an optimal value according to the characteristics of the application workload, thereby controlling the strictness of the screening and reducing unnecessary replacement operations as far as possible while maintaining the hit rate.
The adaptive adjustment of the cache screening queue capacity is performed as follows (a numeric illustration follows step (2.3)):
(2.1) on receiving each read/write request, judging whether the requested data hits the cache; if it hits, proceeding to step (2.2); otherwise, proceeding to step (2.3);
(2.2) automatically adjusting the cache screening queue capacity Cr to Cr - C/(C - Cr); if the adjusted Cr is less than 0.1 × C, setting Cr = 0.1 × C, where C is the maximum number of data blocks the cache can hold;
(2.3) automatically adjusting the cache screening queue capacity Cr to Cr + C/Cr; if the adjusted Cr is greater than 0.9 × C, setting Cr = 0.9 × C, where C is the maximum number of data blocks the cache can hold.
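To make the adaptation concrete, the short run below uses purely illustrative numbers (C = 1000, Cr starting at 0.1 × C) and a hypothetical hit/miss trace; it is not data from the patent.

```python
C, cr = 1000, 100.0                         # cache holds C blocks, Cr starts at 0.1 * C

def adjust(cr, hit):
    cr = cr - C / (C - cr) if hit else cr + C / cr
    return min(max(cr, 0.1 * C), 0.9 * C)   # clamp to [0.1 * C, 0.9 * C]

for hit in (False, False, False, True, False):   # hypothetical request outcomes
    cr = adjust(cr, hit)
    print("hit " if hit else "miss", round(cr, 2))
# misses -> Cr = 110.0, 119.09, 127.49; hit -> 126.34; miss -> 134.26
```

Under a miss-heavy phase the screening window widens, so repeated misses are more likely to be promoted into the cache; sustained hits gradually tighten it again.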
(3) Screening the frequently accessed data blocks and writing them into the cache
In real application workloads, the frequently accessed hot data blocks account for only a small fraction; most other data blocks are accessed rarely, some only once. Writing these rarely used blocks into the cache does not improve the cache hit rate; on the contrary, it wastes the limited cache space and, when the cache device is an SSD, also shortens its service life. Screening out the frequently accessed data blocks first and only then writing them into the limited-capacity cache therefore improves the cache hit rate, reduces cache replacement operations, and extends the service life of the SSD cache.
In general, a data block that is accessed twice or more within a short period of time is likely to be accessed frequently in the future; such blocks are potential hot data blocks. After the cache space is full, a cache replacement operation is performed, and a block is written into the cache, only when a new hot data block is found.
As shown in Fig. 3, the specific screening procedure is as follows:
(3.1) if the requested data hits the cache, no screening is performed;
(3.2) if the requested data misses the cache but the cache still has free space, one free cache block is allocated to store the requested data, and the requested data block is moved to the tail of the cache queue (the cache queue is typically also an LRU queue; the tail of the cache queue represents the most recently accessed data block).
The cache queue records the block numbers of the data blocks stored in the cache; its tail block number indicates the most recently accessed data block. When new data must replace existing data in the cache, the cache queue determines which data block to delete in order to make room for the new block.
(3.3) if the requested data misses the cache, the cache is full, and the requested data block number is not recorded in the cache screening queue, the requested data block number is written to the tail of the cache screening queue.
If the length of the cache screening queue now exceeds its adaptively adjusted capacity Cr, the block number at the head of the cache screening queue is deleted.
(3.4) if the requested data misses the cache, the cache is full, and the requested data block number is already recorded in the cache screening queue, the data block has now been accessed a second time within a recent period and is regarded as a frequently accessed data block; the data block number is deleted from the cache screening queue and the requested data is written into the cache.
Before the requested data is written into the cache, the data block corresponding to the head of the cache queue is evicted first, the freed cache block is allocated to the requested data, and the requested data block is then moved to the tail of the cache queue.
(4) Redirecting the read/write request and executing it (an illustrative sketch of the redirection follows):
(4.1) for requested data that has not been written into the cache, the read/write request is redirected directly to the corresponding address on disk and executed;
(4.2) for requested data that is already in the cache, or that is written into the cache after screening, the corresponding data block in the cache is moved to the tail of the cache queue, and the read/write request is then redirected to the corresponding address in the cache and executed.
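How a block number translates to an address on the cache device is not prescribed by the patent. The sketch below assumes a simple table mapping disk block numbers to slot indices on the SSD cache and a fixed block size; the mapping scheme and all names are assumptions made for illustration.

```python
BLOCK_SIZE = 4096            # assumed block size in bytes

def redirect(block_no, cache_slot_of):
    """Return (device, byte offset) for a request on block_no.
    cache_slot_of maps disk block numbers to SSD slot indices for cached blocks."""
    slot = cache_slot_of.get(block_no)
    if slot is None:
        return ("disk", block_no * BLOCK_SIZE)   # (4.1) not cached: original disk address
    return ("ssd-cache", slot * BLOCK_SIZE)      # (4.2) cached: address inside the cache device

# Example: block 7 is cached in slot 0, block 9 is not cached
print(redirect(7, {7: 0}))   # ('ssd-cache', 0)
print(redirect(9, {7: 0}))   # ('disk', 36864)
```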
Those skilled in the art will readily understand that the above description covers only preferred embodiments of the present invention and is not intended to limit the present invention; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (9)

1. A cache management method for controlling request data that misses the cache, so as to screen the data written into the cache and reduce cache replacement operations, characterized in that the method specifically comprises:
if the cache is not full, writing the requested data block into a free area of the cache and moving the requested data block number to the tail of the cache screening queue;
if the cache is full and the requested data block number is not recorded in the cache screening queue, not writing the requested data block into the cache, and writing the requested data block number to the tail of the cache screening queue;
if the cache is full and the requested data block number is already recorded in the cache screening queue, deleting the data block number from the cache screening queue and writing the requested data block into the cache, wherein before the requested data is written into the cache, the data block corresponding to the head of the cache screening queue is evicted first and the freed cache block is allocated to the requested data;
wherein the cache screening queue is an LRU queue that records the disk block numbers of recently accessed data that missed the cache; the head of the cache screening queue records the least recently accessed data block number, and the tail of the cache screening queue records the most recently accessed data block number.
2. The cache management method according to claim 1, characterized in that the capacity Cr of the cache screening queue can be dynamically adjusted according to the read/write requests, namely to Cr - C/(C - Cr) or Cr + C/Cr, where C is the maximum number of data blocks the cache can hold.
3. The cache management method according to claim 2, characterized in that when the requested data misses the cache, the cache screening queue capacity Cr is adjusted to Cr + C/Cr, where C is the maximum number of data blocks the cache can hold.
4. The cache management method according to claim 2, characterized in that if the adjusted capacity Cr of the cache screening queue is less than 0.1 × C, its capacity is reset to Cr = 0.1 × C, and if it is greater than 0.9 × C, its capacity is set to Cr = 0.9 × C.
5. The cache management method according to any one of claims 2-4, characterized in that the initial value of the cache screening queue capacity Cr is preferably set to 0.1 × C.
6. The cache management method according to any one of claims 1-4, characterized in that after the requested data block number is recorded in the cache screening queue, if the actual length of the cache screening queue exceeds its capacity, the block number at the head of the cache screening queue is deleted.
7. An execution control method for read/write requests, which applies the cache management method according to any one of claims 1-6 to perform cache control on read/write request data, characterized in that the method specifically comprises:
(1) establishing an LRU queue as the cache screening queue, which records the disk block numbers of recently accessed data that missed the cache;
(2) adaptively adjusting the capacity of the cache screening queue according to the read/write requests;
(3) selecting the data to be written into the cache according to the read/write requests, including:
if the requested data hits the cache, performing no screening and proceeding to step (4); if the requested data misses the cache, performing the cache management method according to any one of claims 1-6 to carry out cache control;
(4) redirecting the read/write request and executing it:
(4.1) for requested data that has not been written into the cache, redirecting the read/write request directly to the corresponding address on disk and performing the read/write operation;
(4.2) for requested data that is already in the cache, or that is written into the cache after screening, moving the corresponding data block number in the cache to the tail of the cache screening queue, then redirecting the read/write request to the corresponding address in the cache and performing the read/write operation.
8. The execution control method according to claim 7, characterized in that the adaptive adjustment of the cache screening queue capacity is performed as follows:
(2.1) judging whether the requested data hits the cache; if it hits, proceeding to step (2.2); otherwise, proceeding to step (2.3);
(2.2) automatically adjusting the cache screening queue capacity Cr to Cr - C/(C - Cr); if the adjusted Cr is less than 0.1 × C, setting the capacity to Cr = 0.1 × C, where C is the maximum number of data blocks the cache can hold;
(2.3) automatically adjusting the cache screening queue capacity Cr to Cr + C/Cr; if the adjusted Cr is greater than 0.9 × C, setting the capacity to Cr = 0.9 × C.
9. The execution control method according to claim 7 or 8, characterized in that when the requested data is written into the cache, if the cache has no free block, the data block corresponding to the head of the cache screening queue is evicted first, the freed cache block is allocated to the requested data, and the requested data block number is then moved to the tail of the cache screening queue.
CN201310138419.9A 2013-04-19 2013-04-19 Cache management method and application thereof Active CN103257935B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310138419.9A CN103257935B (en) 2013-04-19 2013-04-19 Cache management method and application thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310138419.9A CN103257935B (en) 2013-04-19 2013-04-19 Cache management method and application thereof

Publications (2)

Publication Number Publication Date
CN103257935A CN103257935A (en) 2013-08-21
CN103257935B true CN103257935B (en) 2016-07-13

Family

ID=48961865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310138419.9A Active CN103257935B (en) 2013-04-19 2013-04-19 Cache management method and application thereof

Country Status (1)

Country Link
CN (1) CN103257935B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577348A (en) * 2013-10-09 2014-02-12 广东欧珀移动通信有限公司 Method and mobile device for automatically counting application cache size and reminding user
CN103744624B (en) * 2014-01-10 2017-09-22 浪潮电子信息产业股份有限公司 A kind of system architecture for realizing the data cached selectivity upgradings of storage system SSD
CN104123238A (en) * 2014-06-30 2014-10-29 海视云(北京)科技有限公司 Data storage method and device
CN104360966B (en) * 2014-11-21 2017-12-12 浪潮(北京)电子信息产业有限公司 To block number according to the method and apparatus for carrying out input-output operation
CN105095495B (en) * 2015-08-21 2019-01-25 浪潮(北京)电子信息产业有限公司 A kind of distributed file system buffer memory management method and system
CN107463509B (en) * 2016-06-05 2020-12-15 华为技术有限公司 Cache management method, cache controller and computer system
CN106610793B (en) * 2016-11-11 2019-09-17 深信服科技股份有限公司 The data cached management method and device of super emerging system
CN108228470B (en) * 2016-12-21 2021-05-18 北京忆恒创源科技有限公司 Method and equipment for processing write command for writing data into NVM (non-volatile memory)
CN106844740B (en) * 2017-02-14 2020-12-29 华南师范大学 Data pre-reading method based on memory object cache system
CN107911799B (en) * 2017-05-18 2021-03-23 北京聚通达科技股份有限公司 Method for utilizing intelligent route
EP4099177A1 (en) 2017-10-09 2022-12-07 Huawei Technologies Co., Ltd. Garbage data scrubbing method, and device
CN109032969A (en) * 2018-06-16 2018-12-18 温州职业技术学院 A kind of caching method of the LRU-K algorithm based on K value dynamic monitoring
CN109144431B (en) * 2018-09-30 2021-11-02 华中科技大学 Data block caching method, device, equipment and storage medium
CN109857680B (en) * 2018-11-21 2020-09-11 杭州电子科技大学 LRU flash memory cache management method based on dynamic page weight
CN110825585B (en) * 2019-10-30 2023-05-02 许继集团有限公司 Alarm event processing method and system based on micro-service
CN114327297B (en) * 2021-12-28 2024-03-19 华中科技大学 Data request processing method, equipment and system of staggered recording disk

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102637147A (en) * 2011-11-14 2012-08-15 天津神舟通用数据技术有限公司 Storage system using solid state disk as computer write cache and corresponding management scheduling method
CN102945207A (en) * 2012-10-26 2013-02-27 浪潮(北京)电子信息产业有限公司 Cache management method and system for block-level data
CN103049394A (en) * 2012-11-30 2013-04-17 记忆科技(深圳)有限公司 Method and system for data caching of solid state disk

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1505506A1 (en) * 2003-08-05 2005-02-09 Sap Ag A method of data caching

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102637147A (en) * 2011-11-14 2012-08-15 天津神舟通用数据技术有限公司 Storage system using solid state disk as computer write cache and corresponding management scheduling method
CN102945207A (en) * 2012-10-26 2013-02-27 浪潮(北京)电子信息产业有限公司 Cache management method and system for block-level data
CN103049394A (en) * 2012-11-30 2013-04-17 记忆科技(深圳)有限公司 Method and system for data caching of solid state disk

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and optimization of a NAND-flash-based solid-state storage system; 陈俭喜; Wanfang Dissertation Database (万方学位论文数据库); 2011-08-18; full text *

Also Published As

Publication number Publication date
CN103257935A (en) 2013-08-21

Similar Documents

Publication Publication Date Title
CN103257935B (en) Cache management method and application thereof
US8225044B2 (en) Storage system which utilizes two kinds of memory devices as its cache memory and method of controlling the storage system
CN108762664B (en) Solid state disk page-level cache region management method
CN104794064B (en) A kind of buffer memory management method based on region temperature
CN105930282B (en) A kind of data cache method for NAND FLASH
US11030107B2 (en) Storage class memory queue depth threshold adjustment
US8327076B2 (en) Systems and methods of tiered caching
US20100325352A1 (en) Hierarchically structured mass storage device and method
CN104834607A (en) Method for improving distributed cache hit rate and reducing solid state disk wear
US9003099B2 (en) Disc device provided with primary and secondary caches
US20110231598A1 (en) Memory system and controller
US9703699B2 (en) Hybrid-HDD policy for what host-R/W data goes into NAND
CN108845957B (en) Replacement and write-back self-adaptive buffer area management method
US20090094391A1 (en) Storage device including write buffer and method for controlling the same
JP2007528079A (en) Flash controller cache structure
US7039765B1 (en) Techniques for cache memory management using read and write operations
CN103136121A (en) Cache management method for solid-state disc
CN106569732B (en) Data migration method and device
CN107463509B (en) Cache management method, cache controller and computer system
CN106775466A (en) A kind of FTL read buffers management method and device without DRAM
TWI403897B (en) Memory device and data management method thereof
KR101403922B1 (en) Apparatus and method for data storing according to an access degree
CN106294197A (en) A kind of page frame replacement method towards nand flash memory
CN107832007A (en) A kind of method of raising SSD combination properties
CN106021159B (en) Large Copacity solid state hard disc logical address is to physical address map method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant