CN102760101B - SSD-based (Solid State Disk) cache management method and system - Google Patents

Info

Publication number
CN102760101B
CN102760101B (application CN201210160350.5A)
Authority
CN
China
Prior art keywords
data
SSD
linked list
cache
LRU
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210160350.5A
Other languages
Chinese (zh)
Other versions
CN102760101A (en)
Inventor
车玉坤
熊劲
马久跃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Institute of Computing Technology of CAS
Original Assignee
Huawei Technologies Co Ltd
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd, Institute of Computing Technology of CAS filed Critical Huawei Technologies Co Ltd
Priority to CN201210160350.5A
Publication of CN102760101A
Application granted
Publication of CN102760101B

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses an SSD-based (Solid State Disk) cache management method and system. The method comprises the following steps. Step 1: upon a read/write request, check whether the data hits in the DRAM (Dynamic Random Access Memory) cache by searching a hash table to judge whether the data exists; if it exists, read the data from the DRAM cache and complete the request; if it does not exist in the DRAM cache, read the data from the HDD (Hard Disk Drive) into the DRAM cache and proceed to step 2. Step 2: screen the data using a two-level LRU (Least Recently Used) linked list and a ghost buffer, identifying how hot the data is. Step 3: adaptively recompute the lengths of the two-level LRU lists; when the second-level LRU list of the DRAM cache is full, adopt page-cluster granularity: the last C pages at the LRU end of the second-level list are evicted from the DRAM cache as a whole and written to the SSD at large granularity, where the page-cluster size is C pages and C is an integer multiple of the number of pages per SSD block.

Description

An SSD-based cache management method and system
Technical field
The present invention relates to cache storage structures and policies, and in particular to an SSD-based cache management method and system.
Background
With the progress of modern society, ever more data must be processed and the volume of data is growing explosively. This poses many problems for traditional storage systems, which generally consist of memory (DRAM) and hard disks (HDD), with the DRAM serving as a cache for the HDD. Such systems face the following challenges:
First, the total amount of data is growing rapidly. A joint report by IDC and EMC points out that society's data is growing explosively: before 2005 the total was only tens of EB (1 EB = 10^18 bytes), by 2010 it was thousands of EB, and by 2015 it is expected to approach 8000 EB, i.e. 8 ZB (1 ZB = 10^21 bytes). At such a scale, under the traditional DRAM+HDD architecture more and more I/O requests will be sent to disk, and performance suffers as request response times grow longer.
Second, the I/O gap is widening, and the HDD is gradually becoming the performance bottleneck. Reports show that CPU performance grows at roughly 60% per year, i.e. it doubles about every 18 months, while HDD performance grows by less than 10% per year (around 8%), because it is limited by the physical structure of the disk: the seek speed of the disk arm and the rotational speed of the platters double only about every 10 years. The latency gap between DRAM and HDD is also widening. All of this makes the HDD the I/O bottleneck: if requests are frequently sent to disk, system performance is inevitably and seriously degraded.
Third, the performance demanded of data processing keeps rising. In recent years, high-performance computing has gradually shifted from CPU-intensive to I/O-intensive, so the system's I/O efficiency has a major impact on performance, which places very high demands on the storage system's I/O operations. The rapid growth of Internet services likewise raises the I/O performance requirements of mass storage systems: Internet applications such as search engines, e-commerce and OSNs (online social networks) must serve a large number of users simultaneously, and the response time experienced by the user must stay within an acceptable range (under a second). Such workloads require the underlying data storage system to deliver good I/O performance, and the traditional DRAM+HDD combination is increasingly unable to do so.
The SSD is a new storage medium that has emerged in recent years, and it may well help meet the above challenges. Both the performance and the price of SSDs lie between DRAM and HDD. Adding an SSD to the caching system as a second-level cache for the HDD is very likely to improve system performance: its capacity is larger than DRAM, while its performance is several orders of magnitude better than HDD, so it can be expected to greatly reduce the number of requests sent to the HDD. However, the SSD has many unique characteristics, so directly inserting it between DRAM and HDD as a cache raises several problems and prevents its performance from being fully exploited. These characteristics are as follows:
First, the read and write performance of an SSD is asymmetric: reads are far faster than writes, and performance is even worse for random, small-granularity operations. Yet in a traditional caching system, data entering the SSD from the HDD, or demoted from DRAM into the SSD, arrives as small-granularity random writes, which seriously hurts SSD performance.
Second, an SSD has a limited lifetime, bounded by the number of erase cycles, and the number of writes directly determines the number of erases. In the traditional caching model all data enters the cache, even data that is accessed only once; this causes unnecessary erase operations and shortens the life of the SSD. Unnecessary writes should therefore be reduced as far as possible.
Third, the capacity of an SSD is limited: although larger than DRAM, it is still small relative to the HDD and to the volume of data to be stored. It should therefore hold frequently accessed data so that its space is put to the best use.
As the above shows, designing a caching system and cache management policy that improves system performance while making the most of the SSD's performance and capacity is a challenge. Traditional cache management schemes are not suited to SSDs; they fall mainly into the following classes:
The first class comprises cache management algorithms based on temporal locality, typified by the LRU (Least Recently Used) algorithm. LRU cannot judge how hot data is: a page accessed only once still travels from the head of the list to the tail before being evicted.
The second class comprises cache management algorithms based on access frequency, typified by the LFU (Least Frequently Used) algorithm. LFU does not take time into account: data that was accessed often in the past but has not been accessed for a long time remains in the cache because of its high historical frequency, polluting the cache.
The third class combines both factors, typified by the LIRS (Low Inter-reference Recency Set) algorithm. Its drawback is that it is relatively complex to implement and introduces some overhead.
None of these approaches remedies the problems above, because whether data enters the SSD or is evicted from it, small-granularity random writes to the SSD are inevitable; and if data enters indiscriminately, cache pollution shows up even more severely on the SSD. See Figure 1.
A technique is therefore needed that controls which data enters the SSD and optimizes the replacement of data within it, so as to remedy the above problems and exploit the SSD's performance to the fullest.
Summary of the invention
To solve the above problems, and in view of the performance characteristics of SSDs, the object of the present invention is to address the prior art's excessive small-granularity random writes to the SSD and its severe cache pollution. A new cache architecture is proposed, consisting of two levels, DRAM and SSD, with the DRAM as the first-level cache and the SSD as the second-level cache.
The present invention discloses an SSD-based cache management method, comprising:
Step 1: upon a read/write request, check whether the data hits in the DRAM cache by searching the DRAM cache's hash table to judge whether the data exists; if it exists, read the data from the DRAM cache and complete the request; if it does not exist in the DRAM cache, read the data from the HDD into the DRAM cache and then perform step 2;
Step 2: screen the data using a two-level LRU list and a ghost buffer, identifying how hot the data is;
Step 3: adaptively recompute the lengths of the two-level LRU lists; when the second-level LRU list in the DRAM cache is full, adopt page-cluster granularity: the last C pages at the LRU end of the second-level LRU list are aggregated as a whole, evicted from the DRAM cache, and written to the SSD at large granularity, where the page-cluster size is assumed to be C pages and C is an integer multiple of the number of pages per SSD block.
In the described SSD-based cache management method, step 1 comprises:
Step 21: if the data exists, i.e. hits in the DRAM cache, the data in the DRAM cache is returned directly and the request completes;
Step 22: if the data does not exist in the DRAM cache's hash table, the SSD's hash table must be queried next to judge whether the data is stored in the SSD;
Step 23: if it hits in the SSD, the data is read out of the SSD and the request completes.
The described SSD-based cache management method comprises:
Data read from the HDD is copied directly into the DRAM cache, and only after screening by the DRAM cache is part of it demoted into the SSD cache; the contents of the DRAM cache and the SSD do not overlap, so the space of the two-level cache is the sum of the two spaces. If the request also misses in the SSD, it must be sent to the HDD to be read.
In the described SSD-based cache management method, step 2 comprises:
Step 41: when data enters the DRAM cache for the first time, it is placed at the MRU end of the first-level LRU list; both LRU lists reside in the DRAM cache;
Step 42: the size of the first-level LRU list is set to a proportion p1 of the total DRAM cache size, with 0 < p1 < 1;
Step 43: when the first-level list is full, replacement follows LRU order and the information of the evicted page is saved in the ghost buffer, preserving its access history, this being data whose access hotness is not high;
Step 44: when data in the first-level LRU list is hit a second time, it is promoted into the second-level list;
Step 45: when the second-level LRU list is full, its contents are demoted into the SSD, which thus obtains the data with higher access hotness.
In the described SSD-based cache management method, the adaptive recomputation in step 3 comprises:
Step 51: a corresponding Shadow buffer is added for each of the two LRU lists, each storing the access information of pages recently evicted from its list; each of the two Shadow buffers stores as many access-information records as the DRAM cache;
Step 52: the two Shadow buffers dynamically resize the two LRU lists; a target value TargetSize is set as the target size of the first-level LRU list; its initial value is half the DRAM cache size, and it then changes with the workload.
In the described SSD-based cache management method, the change process comprises:
Step 61: after a data page is evicted from the first-level LRU list, its history information is kept in the first-level list's Shadow buffer; likewise, data evicted from the second-level LRU list is saved in the second-level list's Shadow buffer;
Step 62: when data hits in the first-level list's Shadow buffer, the length of the first-level LRU list should grow, so TargetSize increases;
Step 63: when data hits in the second-level list's Shadow buffer, the length of the second-level LRU list should grow, so TargetSize decreases.
In the described SSD-based cache management method, step 3 further comprises:
Step 71: after screening by the two-level LRU lists in the DRAM cache, the second-level LRU list stores the hotter data, with hotness decreasing from the MRU end to the LRU end;
Step 72: when the second-level LRU list is full, the page at its LRU end is evicted into the aggregation buffer;
Step 73: each eviction from the second-level list adds one page to the aggregation buffer; after a while the buffer reaches 64 pages, the cluster size, and the cluster is then in the ready state;
Step 74: when the next evicted page arrives and the aggregation buffer is full, the cluster is flushed to the SSD and the buffer is emptied; steps 71-74 then repeat.
The present invention also discloses an SSD-based cache management system, comprising:
a DRAM-cache checking module, for receiving a read/write request and checking whether the data hits in the DRAM cache by searching the DRAM cache's hash table to judge whether the data exists; if it exists, the data is read from the DRAM cache and the request is completed; if it does not exist in the DRAM cache, the data is read from the HDD into the DRAM cache;
a data screening module, for screening the data using a two-level LRU list and a ghost buffer and identifying how hot the data is;
an adaptive-change and aggregation module, for adaptively recomputing the lengths of the two-level LRU lists; when the second-level LRU list in the DRAM cache is full, page-cluster granularity is adopted: the last C pages at the LRU end of the second-level LRU list are aggregated as a whole, evicted from the DRAM cache, and written to the SSD at large granularity, where the page-cluster size is assumed to be C pages and C is an integer multiple of the number of pages per SSD block.
In the described SSD-based cache management system, the DRAM-cache checking module comprises:
a data-hit module: if the data exists, i.e. hits in the DRAM cache, the data in the DRAM cache is returned directly and the request completes;
an SSD query module: if the data does not exist in the DRAM cache's hash table, the SSD's hash table must be queried next to judge whether the data is stored in the SSD;
an SSD read module: if the data hits in the SSD, it is read out of the SSD and the request completes.
The described SSD-based cache management system comprises:
Data read from the HDD is copied directly into the DRAM, and only after screening by the DRAM is part of it demoted into the SSD cache; the contents of the DRAM cache and the SSD do not overlap, so the space of the two-level cache is the sum of the two spaces. If the request also misses in the SSD, it must be sent to the HDD to be read.
In the described SSD-based cache management system, the data screening module comprises:
an MRU placement module, for placing data at the MRU end of the first-level LRU list when it enters the DRAM cache for the first time; both LRU lists reside in the DRAM cache;
a proportion-setting module, for setting the size of the first-level LRU list to a proportion p1 of the total DRAM cache size, with 0 < p1 < 1;
a replacement module, for replacing in LRU order when the first-level list is full and saving the information of the evicted page in the ghost buffer, preserving its access history, this being data whose access hotness is not high;
a second-hit module, for promoting data in the first-level LRU list into the second-level list when it is hit a second time;
a hotness module, for demoting the contents of the second-level LRU list into the SSD when that list is full, so that the SSD obtains the data with higher access hotness.
In the described SSD-based cache management system, the adaptive-change and aggregation module comprises:
an adaptive-change module, for adding a corresponding Shadow buffer for each of the two LRU lists, each storing the access information of pages recently evicted from its list, with each of the two Shadow buffers storing as many access-information records as the DRAM cache; the two Shadow buffers dynamically resize the two LRU lists; a target value TargetSize is set as the target size of the first-level LRU list; its initial value is half the DRAM cache size, and it then changes with the workload.
In the described SSD-based cache management system, the adaptive-change module further comprises:
a history-saving module: after a data page is evicted from the first-level LRU list, its history information is kept in the first-level list's Shadow buffer; likewise, data evicted from the second-level LRU list is saved in the second-level list's Shadow buffer;
a first-level growth module, for increasing the length of the first-level LRU list when data hits in the first-level list's Shadow buffer, whereupon TargetSize increases;
a second-level growth module, for increasing the length of the second-level LRU list when data hits in the second-level list's Shadow buffer, whereupon TargetSize decreases.
In the described SSD-based cache management system, the adaptive-change and aggregation module further comprises:
an aggregation module: after screening by the two-level LRU lists in the DRAM cache, the second-level LRU list stores the hotter data, with hotness decreasing from the MRU end to the LRU end; when the second-level LRU list is full, the page at its LRU end is evicted into the aggregation buffer; each eviction from the second-level list adds one page to the aggregation buffer; after a while the buffer reaches 64 pages, the cluster size, and the cluster is then in the ready state; when the next evicted page arrives and the aggregation buffer is full, the cluster is flushed to the SSD and the buffer is emptied.
The beneficial effects of the present invention are:
1. In the present invention the data flow differs from a traditional cache. Data read from the HDD does not enter the SSD directly; it is first screened in the DRAM and only then selectively enters the SSD. The technical effect is that data enters the SSD only after the DRAM's screening and filtering, i.e. its "hotness" is first observed in the DRAM. This greatly reduces the writes entering the SSD and likewise reduces SSD cache pollution.
2. For the above cache structure, the present invention designs a data screening method. The technical effect is that the data entering the SSD is data that is frequently accessed, so the SSD cache space can be used to the fullest; at the same time the contents of the DRAM and SSD caches differ, which makes the use of the two-level cache space more effective.
3. Against the traditional I/O pattern in which data enters the SSD as small-granularity random writes, the present invention designs a data aggregation technique. In the DRAM, several pages are first aggregated into a large-granularity page cluster, which is then written to the SSD at page-cluster granularity; replacement likewise uses page-cluster granularity. The technical effect is that small-granularity random writes to the SSD are avoided. Correspondingly, when the cache is full, replacement is performed at coarse granularity, evicting a number of contiguous pages at once; combined with the admission scheme, this improves the performance of the caching system.
In summary, the present invention adopts the above three key techniques to optimize the SSD caching system and thereby use the SSD's performance more effectively. Regarding system response time, performance improves because fewer requests are sent to the HDD. Cache pollution on the SSD is avoided. Finally, small-granularity random writes to the SSD are effectively reduced, so the lifetime of the SSD is also extended.
Brief description of the drawings
Fig. 1 shows a traditional cache organization;
Fig. 2 shows the data flow of the new caching system of the present invention;
Fig. 3 shows the two-level LRU list of the present invention managing the DRAM cache;
Fig. 4 shows the dynamic adjustment algorithm of the present invention for the two-level LRU list lengths;
Fig. 5 shows the aggregation technique of the present invention for data entering the SSD;
Fig. 6 is a schematic diagram of the SSD-based cache management method of the present invention;
Fig. 7 is a schematic diagram of the SSD-based cache management system of the present invention.
Embodiment
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
New caching system
The new caching system consists of DRAM, SSD and HDD, as shown in Figure 2. The SSD sits between the DRAM and the HDD and serves as a cache for the HDD; data is stored persistently on the HDD; the DRAM is the first-level cache and the SSD the second-level cache, and together they form the HDD's two-level cache. The DRAM must record what the DRAM cache itself stores; to locate quickly whether a given page exists in the DRAM cache, the cached pages are managed with a hash table. The contents of the SSD must be recorded as well: the relevant information about the data in the SSD cache is kept in DRAM, likewise in a hash table. The DRAM therefore needs to hold the following information:
LRU (Least Recently Used) list information: the list has head and tail pointers and supports operations such as insertion and replacement;
the DRAM content hash table, which indexes the contents of the DRAM and is used to look up whether data hits in the DRAM cache;
the LRU list of SSD pages, which, like the DRAM LRU list, has head and tail pointers and supports operations such as insertion and replacement;
the SSD content hash table, which indexes the contents of the SSD and is used to look up whether data hits in the SSD.
A request is processed as follows. Whenever the CPU/cache issues a read/write request, the first-level DRAM cache is checked first: the hash table is searched to see whether the page exists. If it exists, i.e. the request hits in the DRAM cache, the data in the DRAM cache is returned directly and the request completes. If it is not in the hash table, the SSD's hash table is queried next to see whether the data is stored in the SSD. If it hits in the SSD, the data is read out of the SSD and the request completes. Note that the data is not copied back into the first-level DRAM cache; the contents of the two levels do not overlap, i.e. they are exclusive, and the benefit is that the cache space is the sum of the two spaces. If the request also misses in the SSD, it is sent to the HDD to be read, and when it completes the data is saved in the DRAM cache, i.e. the first-level cache. This follows from temporal locality, since the data is likely to be accessed again.
Note the replacement operation when the cache is full: a request may be a write, and if a write hits in the DRAM cache its page must be marked "dirty"; if that page is later selected for eviction, its contents must first be written back to the HDD before it can be removed from the DRAM cache.
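To make this request path concrete, here is a minimal sketch of the lookup order and the dirty-page rule just described. Python is used for illustration throughout this document; all names (TwoLevelCache, DictHdd, read_page and so on) are hypothetical rather than taken from the patent, and victim selection is reduced to a placeholder instead of the two-level LRU policy of the next section.

```python
# Illustrative sketch only: DRAM hash table first, then the SSD hash
# table, then the HDD. All names are hypothetical.

class DictHdd:
    """Tiny in-memory stand-in for the backing disk."""
    def __init__(self):
        self.pages = {}
    def read_page(self, page_id):
        return self.pages.get(page_id, b"")
    def write_page(self, page_id, data):
        self.pages[page_id] = data

class TwoLevelCache:
    def __init__(self, hdd, dram_capacity):
        self.dram = {}              # page_id -> (data, dirty_flag)
        self.ssd = {}               # page_id -> data
        self.hdd = hdd
        self.dram_capacity = dram_capacity

    def read(self, page_id):
        if page_id in self.dram:            # hit in the first-level cache
            return self.dram[page_id][0]
        if page_id in self.ssd:             # hit in the second-level cache;
            return self.ssd[page_id]        # not copied back into DRAM, so
                                            # the two levels stay exclusive
        data = self.hdd.read_page(page_id)  # miss in both: read from HDD
        self._insert_dram(page_id, data, dirty=False)
        return data

    def write(self, page_id, data):
        if page_id in self.dram:
            self.dram[page_id] = (data, True)   # write hit: mark dirty
        else:
            self._insert_dram(page_id, data, dirty=True)

    def _insert_dram(self, page_id, data, dirty):
        if len(self.dram) >= self.dram_capacity:
            # placeholder victim choice; the real policy is the two-level
            # LRU screening described in the next section
            victim = next(iter(self.dram))
            v_data, v_dirty = self.dram.pop(victim)
            if v_dirty:                             # dirty pages are written
                self.hdd.write_page(victim, v_data)  # back to the HDD first
        self.dram[page_id] = (data, dirty)
```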
The data screening strategy
To reduce cache pollution and small-granularity random writes to the SSD, there must be a strategy for screening data. In a traditional caching system, any data brought in from the HDD must be placed in the cache; the present invention instead screens data in the DRAM using a two-level LRU list plus a ghost buffer, as shown in Figure 3. If a single LRU list were used, then, as noted above, data hotness could not be judged and the cache would be badly polluted. The two-level LRU list solves this problem; the flow is as follows:
When data enters the DRAM cache for the first time, it is placed at the MRU (Most Recently Used) end of the first-level LRU list; both LRU lists reside in the DRAM cache. The size of the first-level list is set to a proportion p1 of the total DRAM cache size (0 < p1 < 1), precisely to reduce cache pollution. When the list is full, replacement follows LRU order and the information of the evicted page is saved in the ghost buffer. Note that the ghost buffer keeps only the page's access information, not its contents, so a page (4 KB) needs just 16 bytes. The ghost buffer can hold access information for the combined capacity of the DRAM cache and the SSD cache while occupying little memory: for a 1 GB SSD plus a 128 MB DRAM cache, about 4 MB of space suffices to store the access history. When data in the first-level LRU list is hit a second time, it is promoted into the second-level list. When the second-level LRU list is full, the content at its LRU end is demoted into the SSD; this is the data with higher access hotness. (The ghost buffer, by contrast, preserves the history of pages evicted from the first level, whose access hotness is not high.) Viewed as a whole, the SSD and the second-level LRU list form one large LRU list.
When a requested page misses in the cache, this strategy adds one extra query of the ghost buffer: if the page's access information is found there, the data is considered hot, so after being read from disk it is placed directly into the second-level LRU list. The DRAM cache therefore also needs to hold the following information:
the access history in the ghost buffer, each entry managed in LRU fashion; when full, the oldest information is replaced;
the ghost buffer's hash table, used to query the access history.
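A minimal sketch of this screening policy follows, under the assumption that an OrderedDict stands in for each LRU list (first entry = LRU end, last entry = MRU end); the names ScreeningCache, L1, L2 and ghost are illustrative.

```python
from collections import OrderedDict

# Sketch of the two-level LRU screening with a ghost buffer. The ghost
# buffer keeps page ids only (16-byte records in the patent), never
# page contents.

class ScreeningCache:
    def __init__(self, l1_cap, l2_cap, ghost_cap):
        self.L1 = OrderedDict()      # pages seen once
        self.L2 = OrderedDict()      # pages hit at least twice ("hot")
        self.ghost = OrderedDict()   # history of pages evicted from L1
        self.l1_cap, self.l2_cap, self.ghost_cap = l1_cap, l2_cap, ghost_cap
        self.to_ssd = []             # pages demoted toward the SSD

    def access(self, page):
        if page in self.L2:                  # already hot: refresh recency
            self.L2.move_to_end(page)
        elif page in self.L1:                # second hit: promote to L2
            del self.L1[page]
            self._insert_l2(page)
        elif page in self.ghost:             # history hit: treat as hot
            del self.ghost[page]
            self._insert_l2(page)
        else:                                # first access: enter L1 only
            self.L1[page] = True
            if len(self.L1) > self.l1_cap:   # L1 full: evict the LRU page,
                old, _ = self.L1.popitem(last=False)
                self._remember(old)          # keeping only its history

    def _insert_l2(self, page):
        self.L2[page] = True
        if len(self.L2) > self.l2_cap:
            hot, _ = self.L2.popitem(last=False)
            self.to_ssd.append(hot)  # would enter the SSD aggregation buffer

    def _remember(self, page):
        self.ghost[page] = True
        if len(self.ghost) > self.ghost_cap:
            self.ghost.popitem(last=False)   # drop the oldest history record
```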
Screening data in the DRAM with the above algorithm ensures that the data entering the SSD is comparatively "hot". But when the DRAM is full, a victim page must be chosen: should it come from the first-level LRU list or the second-level one? Always evicting from only one of them would gradually shrink the other, while giving both lists fixed sizes would prevent the algorithm from responding to changes in the workload, since the two lists encode the Recency and Frequency factors respectively. These two parameters must therefore adapt as the workload changes.
The algorithm for adaptive adjustment of the two-level list lengths:
In other words, the parameter to design is the ratio L1:L2 between the sizes of the two lists. To this end a dynamic adjustment algorithm is designed, as shown in Figure 4:
In Figure 4, the Shadow buffer is functionally identical to the ghost buffer: both store only the access information of recently accessed pages, not the data itself, so their overhead is very small. It is called a Shadow buffer here to distinguish it from the ghost buffer above. A corresponding Shadow buffer is added for each level of the LRU list, each storing the access information of pages recently evicted from its list; each of the two Shadow buffers stores as many access-information records as the DRAM cache.
These two Shadow buffers are used to resize the two LRU lists dynamically. A target value TargetSize is set as the target size of the first-level LRU list; its initial value is half the DRAM size, and thereafter it changes dynamically with the workload, as follows:
After a data page is evicted from the first-level LRU list, its history is kept in the L1 Shadow buffer; likewise, data evicted from the second-level LRU list is saved in the L2 Shadow buffer.
When data hits in the L1 Shadow buffer, the first-level LRU list should be longer: TargetSize++.
When data hits in the L2 Shadow buffer, the second-level LRU list should be longer: TargetSize--.
The rationale for this adjustment is as follows. Both Shadow buffers store the pages evicted from their respective lists. If a page request hits in the L1 Shadow buffer, the request missed in the DRAM cache but hit in the Shadow buffer, which means the L1 list is too short: had it been long enough, the request would not have missed. TargetSize is therefore enlarged. Symmetrically, a hit in the L2 Shadow buffer means L2 should grow, i.e. TargetSize shrinks. This completes the dynamic adjustment of the two-level LRU lists.
When the DRAM cache is full, the victim page is chosen from L1 if the first-level list exceeds TargetSize, and from L2 otherwise.
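The adjustment rule can be sketched as follows, assuming plain sets suffice for the Shadow buffers; AdaptiveTarget and its method names are illustrative.

```python
# Sketch of the adaptive TargetSize rule. shadow1/shadow2 hold the ids
# of pages recently evicted from L1/L2 (access information only).

class AdaptiveTarget:
    def __init__(self, dram_pages):
        self.dram_pages = dram_pages
        self.target_size = dram_pages // 2   # initial target: half of DRAM
        self.shadow1 = set()                 # history of L1 evictions
        self.shadow2 = set()                 # history of L2 evictions

    def on_cache_miss(self, page_id):
        if page_id in self.shadow1:
            # the request would have hit had L1 been longer: grow L1's target
            self.target_size = min(self.target_size + 1, self.dram_pages)
            self.shadow1.discard(page_id)
        elif page_id in self.shadow2:
            # symmetric case: L2 needs the space, so shrink L1's target
            self.target_size = max(self.target_size - 1, 0)
            self.shadow2.discard(page_id)

    def evict_from_l1(self, l1_len):
        # when the DRAM cache is full: evict from L1 if it exceeds the
        # target, otherwise evict from L2
        return l1_len > self.target_size
```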
In sum, screening the data that enters the SSD brings the following benefits:
It effectively controls the data entering the SSD, so that the SSD tends to hold "hot", frequently accessed data, while data that is accessed only once and never again is kept out. This markedly improves the utilization of the SSD cache.
It reduces the writes sent to the SSD. With the screening step, data read from the HDD no longer all passes through the SSD cache, which greatly reduces the number of writes sent to the SSD and extends the service life of the SSD.
The data stored in DRAM and SSD is disjoint, i.e. exclusive, so the useful space of the two-level cache is the sum of the two spaces, letting the whole caching system use the full space of both DRAM and SSD more efficiently.
Aggregated writes to the SSD and coarse-grained replacement
Tests show that coarse-granularity I/O on an SSD performs much better than small-granularity I/O. The reason is that an SSD cannot write in place, and small-granularity operations fragment its interior and degrade performance. The present invention therefore adopts an aggregated-write technique: when the second-level LRU list in the DRAM cache is full, replacement uses page-cluster granularity rather than page granularity. The page-cluster size (assumed to be C pages) is the SSD's erase granularity (erase block) or a multiple of it; that is, the last C pages at the LRU end are evicted from the DRAM cache together, aggregated into one whole, and then written to the SSD. The flow, shown in Figure 5, is as follows:
After screening by the two-level LRU lists in the DRAM cache, the second-level LRU list stores the hotter data, with hotness decreasing from the MRU end to the LRU end;
when the second-level LRU list is full, the page at its LRU end is evicted into the aggregation buffer;
each eviction from the second-level list adds one page to the aggregation buffer, so after a while the buffer reaches 64 pages; it has then reached the cluster size, and the cluster is in the ready state;
when the next evicted page arrives and the aggregation buffer is full, the cluster is flushed to the SSD and the buffer is emptied; the above steps then repeat.
Once the aggregation technique is added, the information of all page clusters in the SSD must also be managed, i.e. an LRU at page-cluster level, so the DRAM cache additionally needs the following:
the page-cluster-level LRU, which records the weight of each page cluster.
When the SSD is full, replacement is likewise performed at page-cluster granularity. This avoids the small-granularity random writes otherwise introduced when data enters and is evicted from the SSD.
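A minimal sketch of the aggregation buffer under these rules follows; ssd_write_cluster is a hypothetical stand-in for the device write, and 64 pages is the example cluster size used above.

```python
# Sketch of cluster aggregation: pages evicted from the second-level
# LRU list accumulate until a ready cluster can be flushed in a single
# large-granularity write aligned to the SSD erase block.

CLUSTER_PAGES = 64   # example cluster size: one erase block (or a multiple)

class ClusterAggregator:
    def __init__(self, ssd_write_cluster):
        self.buffer = []                 # evicted pages awaiting aggregation
        self.write_cluster = ssd_write_cluster

    def on_l2_eviction(self, page_id, data):
        if len(self.buffer) == CLUSTER_PAGES:
            # a ready cluster is already buffered: flush it to the SSD as
            # one coarse-grained write, then empty the buffer
            self.write_cluster(list(self.buffer))
            self.buffer.clear()
        self.buffer.append((page_id, data))
```

Because the buffer is flushed only as a whole, every write reaching the SSD in this sketch is one cluster-sized operation, which is what keeps small-granularity random writes off the device.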
As shown in Figure 6, the present invention discloses an SSD-based cache management method, comprising:
Step 1: upon a read/write request, check whether the data hits in the DRAM cache by searching the hash table to judge whether the data exists; if it exists, read the data from the DRAM cache and complete the request; if it does not exist in the DRAM cache, read the data from the HDD into the DRAM cache and then perform step 2;
Step 2: screen the data using a two-level LRU list and a ghost buffer, identifying how hot the data is;
Step 3: adaptively recompute the lengths of the two-level LRU lists; when the second-level LRU list in the DRAM cache is full, adopt page-cluster granularity: the last C pages at the LRU end of the second-level list are aggregated as a whole, evicted from the DRAM cache, and written to the SSD at large granularity, where the page-cluster size is assumed to be C pages and C is an integer multiple of the number of pages per SSD block.
In the described SSD-based cache management method, step 1 comprises:
Step 21: if the data exists, i.e. hits in the DRAM cache, the data in the DRAM cache is returned directly and the request completes;
Step 22: if it does not exist in the hash table, the SSD's hash table must be queried next to judge whether the data is stored in the SSD;
Step 23: if it hits in the SSD, the data is read out of the SSD and the request completes.
The described SSD-based cache management method comprises:
Data read from the HDD is copied directly into the DRAM cache, and only after screening by the DRAM cache is part of it demoted into the SSD cache; the contents of the DRAM cache and the SSD do not overlap, so the space of the cache is the sum of the two spaces. If the request also misses in the SSD, it must be sent to the HDD to be read.
In the described SSD-based cache management method, step 2 comprises:
Step 41: when data enters the DRAM cache for the first time, it is placed at the MRU end of the first-level LRU list; both LRU lists reside in the DRAM cache;
Step 42: the size of the first-level LRU list is set to a proportion p1 of the total DRAM cache size, with 0 < p1 < 1;
Step 43: when the first-level list is full, replacement follows LRU order and the information of the evicted page is saved in the ghost buffer, preserving its access history, this being data whose access hotness is not high;
Step 44: when data in the first-level LRU list is hit a second time, it is promoted into the second-level list;
Step 45: when the second-level LRU list is full, its contents are demoted into the SSD, which thus obtains the data with higher access hotness.
In the described SSD-based cache management method, the adaptive recomputation in step 3 comprises:
Step 51: a corresponding Shadow buffer is added for each of the two LRU lists, each storing the access information of pages recently evicted from its list; each of the two Shadow buffers stores as many access-information records as the DRAM cache;
Step 52: the two Shadow buffers dynamically resize the two LRU lists; a target value TargetSize is set as the target size of the first-level LRU list; its initial value is half the DRAM cache size, and it then changes with the workload.
In the described SSD-based cache management method, the change process comprises:
Step 61: after a data page is evicted from the first-level LRU list, its history information is kept in the first-level list's Shadow buffer; likewise, data evicted from the second-level LRU list is saved in the second-level list's Shadow buffer;
Step 62: when data hits in the first-level list's Shadow buffer, the first-level LRU list should be longer: TargetSize++;
Step 63: when data hits in the second-level list's Shadow buffer, the second-level LRU list should be longer: TargetSize--.
In the described SSD-based cache management method, step 3 further comprises:
Step 71: after screening by the two-level LRU lists in the DRAM cache, the second-level LRU list stores the hotter data, with hotness decreasing from the MRU end to the LRU end;
Step 72: when the second-level LRU list is full, the page at its LRU end is evicted into the aggregation buffer;
Step 73: each eviction from the second-level list adds one page to the aggregation buffer; after a while the buffer reaches 64 pages, the cluster size, and the cluster is then in the ready state;
Step 74: when the next evicted page arrives and the aggregation buffer is full, the cluster is flushed to the SSD and the buffer is emptied; steps 71-74 then repeat.
As shown in Figure 7, the present invention also discloses an SSD-based cache management system, comprising:
a DRAM-cache checking module, for receiving a read/write request and checking whether the data hits in the DRAM cache by searching the hash table to judge whether the data exists; if it exists, the data is read from the DRAM cache and the request is completed; if it does not exist in the DRAM cache, the data is read from the HDD into the DRAM cache;
a data screening module, for screening the data using a two-level LRU list and a ghost buffer and identifying how hot the data is;
an adaptive-change and aggregation module, for adaptively recomputing the lengths of the two-level LRU lists; when the second-level LRU list in the DRAM cache is full, page-cluster granularity is adopted: the last C pages at the LRU end of the second-level list are aggregated as a whole, evicted from the DRAM cache, and written to the SSD at large granularity, where the page-cluster size is assumed to be C pages and C is an integer multiple of the number of pages per SSD block.
In the described SSD-based cache management system, the DRAM-cache checking module comprises:
a data-hit module: if the data exists, i.e. hits in the DRAM cache, the data in the DRAM cache is returned directly and the request completes;
an SSD query module: if the data does not exist in the hash table, the SSD's hash table must be queried next to judge whether the data is stored in the SSD;
an SSD read module: if the data hits in the SSD, it is read out of the SSD and the request completes.
The described SSD-based cache management system comprises:
Data read from the HDD is copied directly into the DRAM, and only after screening by the DRAM is part of it demoted into the SSD cache; the contents of the DRAM cache and the SSD do not overlap, so the space of the cache is the sum of the two spaces. If the request also misses in the SSD, it must be sent to the HDD to be read.
In the described SSD-based cache management system, the data screening module comprises:
an MRU placement module, for placing data at the MRU end of the first-level LRU list when it enters the DRAM cache for the first time; both LRU lists reside in the DRAM cache;
a proportion-setting module, for setting the size of the first-level LRU list to a proportion p1 of the total DRAM cache size, with 0 < p1 < 1;
a replacement module, for replacing in LRU order when the first-level list is full and saving the information of the evicted page in the ghost buffer, preserving its access history, this being data whose access hotness is not high;
a second-hit module, for promoting data in the first-level LRU list into the second-level list when it is hit a second time;
a hotness module, for demoting the contents of the second-level LRU list into the SSD when that list is full, so that the SSD obtains the data with higher access hotness.
In the described SSD-based cache management system, the adaptive-change and aggregation module comprises:
an adaptive-change module, for adding a corresponding Shadow buffer for each of the two LRU lists, each storing the access information of pages recently evicted from its list, with each of the two Shadow buffers storing as many access-information records as the DRAM cache; the two Shadow buffers dynamically resize the two LRU lists; a target value TargetSize is set as the target size of the first-level LRU list; its initial value is half the DRAM cache size, and it then changes with the workload.
In the described SSD-based cache management system, the adaptive-change module further comprises:
a history-saving module: after a data page is evicted from the first-level LRU list, its history information is kept in the first-level list's Shadow buffer; likewise, data evicted from the second-level LRU list is saved in the second-level list's Shadow buffer;
a first-level growth module: when data hits in the first-level list's Shadow buffer, the first-level LRU list should be longer: TargetSize++;
a second-level growth module: when data hits in the second-level list's Shadow buffer, the second-level LRU list should be longer: TargetSize--.
In the described SSD-based cache management system, the adaptive-change and aggregation module further comprises:
an aggregation module: after screening by the two-level LRU lists in the DRAM cache, the second-level LRU list stores the hotter data, with hotness decreasing from the MRU end to the LRU end; when the second-level LRU list is full, the page at its LRU end is evicted into the aggregation buffer; each eviction from the second-level list adds one page to the aggregation buffer; after a while the buffer reaches 64 pages, the cluster size, and the cluster is then in the ready state; when the next evicted page arrives and the aggregation buffer is full, the cluster is flushed to the SSD and the buffer is emptied.
In summary, a new caching system has been invented: a hybrid cache composed of DRAM and SSD, intended to exploit the SSD's performance to the fullest. Its main techniques are the following:
1. An SSD-based caching system whose architecture controls which data enters the SSD. To avoid cache pollution, a page's hotness is observed in the DRAM before the page is allowed into the SSD. Unlike traditional cache management strategies, this prevents data accessed only once from entering the cache and polluting it, so the SSD cache space is used more effectively.
2. A data screening algorithm. A two-level LRU list screens the data entering the SSD: the first-level list stores data accessed only once, while the second-level list stores data accessed at least twice, i.e. the comparatively "hot" data. This makes the data stored in the SSD more valuable; at the same time the algorithm's overhead is very small, and it adapts dynamically to changes in the workload.
3. For the performance characteristics of the SSD, a scheme that aggregates the data entering the SSD and replaces at coarse granularity. On top of the screening in the DRAM, an added buffer aggregates the data entering the SSD. Small-granularity random writes perform very poorly on an SSD, and this scheme keeps them out of the path by which data enters the SSD, improving the SSD's usability. Likewise, when the cache is full, replacement is performed at coarse granularity, evicting a number of contiguous pages at once; combined with the write path, this improves the performance of the caching system.
Those skilled in the art may make various modifications to the above without departing from the spirit and scope of the present invention as defined by the appended claims. The scope of the present invention is therefore not limited to the above description but is determined by the scope of the appended claims.

Claims (14)

1. An SSD-based cache management method, characterized by comprising:
step 1: upon a read/write request, checking whether the data hits in the DRAM cache by searching the DRAM cache's hash table to judge whether the data exists; if it exists, reading the data from the DRAM cache and completing the request; if it does not exist in the DRAM cache, reading the data from the HDD into the DRAM cache and then performing step 2;
step 2: screening the data using a two-level LRU list and a ghost buffer, identifying how hot the data is;
step 3: adaptively recomputing the lengths of the two-level LRU lists; when the second-level LRU list in the DRAM cache is full, adopting page-cluster granularity: the last C pages at the LRU end of the second-level LRU list are aggregated as a whole, evicted from the DRAM cache, and written to the SSD at large granularity, wherein the page-cluster size is assumed to be C pages and C is an integer multiple of the number of pages per SSD block.
2. The SSD-based cache management method of claim 1, characterized in that step 1 comprises:
step 21: if the data exists, i.e. hits in the DRAM cache, returning the data in the DRAM cache directly, whereupon the request completes;
step 22: if the data does not exist in the DRAM cache's hash table, querying the SSD's hash table next to judge whether the data is stored in the SSD;
step 23: if it hits in the SSD, reading the data out of the SSD, whereupon the request completes.
3. The SSD-based cache management method of claim 2, characterized by comprising:
copying data read from the HDD directly into the DRAM cache, and only after screening by the DRAM cache demoting part of it into the SSD; the contents of the DRAM cache and the SSD are not identical, and the space of the two-level cache is the sum of said SSD and said DRAM cache; if the request also misses in the SSD, the request is sent to the HDD to be read.
4. The SSD-based cache management method of claim 1, characterized in that step 2 comprises:
step 41: when data enters the DRAM cache for the first time, placing it at the MRU end of the first-level LRU list, both LRU lists residing in the DRAM cache;
step 42: setting the size of the first-level LRU list to a proportion p1 of the total DRAM cache size, with 0 < p1 < 1;
step 43: when the first-level list is full, replacing in LRU order and saving the information of the evicted page in the ghost buffer, preserving its access history, this being data whose access hotness is not high;
step 44: when data in the first-level LRU list is hit a second time, promoting it into the second-level list;
step 45: when the second-level LRU list is full, demoting the contents of the second-level LRU list into the SSD, which thus obtains the data with higher access hotness.
5. The SSD-based cache management method of claim 1, characterized in that the adaptive recomputation in step 3 comprises:
step 51: adding a corresponding Shadow buffer for each of the two LRU lists, each storing the access information of pages recently evicted from its list, with each of the two Shadow buffers storing as many access-information records as the DRAM cache;
step 52: the two Shadow buffers dynamically resizing the two LRU lists, with a target value TargetSize set as the target size of the first-level LRU list; its initial value is half the DRAM cache size, and it then changes with the workload.
6. The SSD-based cache management method of claim 5, characterized in that the change process comprises:
step 61: after a data page is evicted from the first-level LRU list, keeping its history information in the first-level list's Shadow buffer, and likewise, after data in the second-level LRU list is evicted, saving it in the second-level list's Shadow buffer;
step 62: when data hits in the first-level list's Shadow buffer, increasing the length of the first-level LRU list, whereupon TargetSize increases;
step 63: when data hits in the second-level list's Shadow buffer, increasing the length of the second-level LRU list, whereupon TargetSize decreases.
7. The SSD-based cache management method of claim 1, characterized in that step 3 further comprises:
step 71: after screening by the two-level LRU lists in the DRAM cache, the second-level LRU list stores the hotter data, with hotness decreasing from the MRU end to the LRU end;
step 72: when the second-level LRU list is full, the page at its LRU end is evicted into the aggregation buffer;
step 73: each eviction from the second-level list adds one page to the aggregation buffer, so that after a while the buffer reaches 64 pages, the cluster size, and the cluster is then in the ready state;
step 74: when the next evicted page arrives and the aggregation buffer is full, the cluster is flushed to the SSD and the buffer is emptied, after which steps 71-74 repeat.
8. An SSD-based cache management system, characterized by comprising:
a DRAM-cache checking module, for receiving a read/write request and checking whether the data hits in the DRAM cache by searching the DRAM cache's hash table to judge whether the data exists; if it exists, the data is read from the DRAM cache and the request is completed; if it does not exist in the DRAM cache, the data is read from the HDD into the DRAM cache;
a data screening module, for screening the data using a two-level LRU list and a ghost buffer and identifying how hot the data is;
an adaptive-change and aggregation module, for adaptively recomputing the lengths of the two-level LRU lists; when the second-level LRU list in the DRAM cache is full, page-cluster granularity is adopted: the last C pages at the LRU end of the second-level LRU list are aggregated as a whole, evicted from the DRAM cache, and written to the SSD at large granularity, wherein the page-cluster size is assumed to be C pages and C is an integer multiple of the number of pages per SSD block.
9. The SSD-based cache management system of claim 8, characterized in that the DRAM-cache checking module comprises:
a data-hit module: if the data exists, i.e. hits in the DRAM cache, the data in the DRAM cache is returned directly and the request completes;
an SSD query module: if the data does not exist in the DRAM cache's hash table, the SSD's hash table is queried next to judge whether the data is stored in the SSD;
an SSD read module: if the data hits in the SSD, it is read out of the SSD and the request completes.
10. The SSD-based cache management system of claim 9, characterized by comprising:
copying data read from the HDD directly into the DRAM, and only after screening by the DRAM demoting part of it into the SSD; the contents of the DRAM cache and the SSD are not identical, and the space of the two-level cache is the sum of said SSD and said DRAM cache; if the request also misses in the SSD, the request is sent to the HDD to be read.
11. The SSD-based cache management system as claimed in claim 8, characterized in that the data screening module comprises:
an MRU placement module, configured to place data at the MRU end of the first-level LRU linked list when the data enters the DRAM cache for the first time, both levels of LRU linked lists residing in the DRAM cache;
a proportion setting module, configured to set the size of the first-level LRU linked list to a proportion p1 of the whole DRAM cache size, where 0 < p1 < 1;
an eviction module, configured to evict in LRU fashion when the first-level linked list is full, saving the information of the evicted page in the Ghost buffer so that its access history is preserved, such a page being data whose access hotness is not high;
a second-hit module, configured to promote data in the first-level LRU linked list to the second-level linked list when the data is hit a second time;
a hotness module, configured to evict the contents of the second-level LRU linked list to the SSD when the second-level LRU linked list is full, thereby retaining the data with higher access hotness.
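A minimal sketch of this screening, under the assumption of dict-backed lists; first touch inserts at the MRU end of level 1, a second hit promotes to level 2, and level-1 evictions leave a record in the Ghost buffer. Sizes and names are illustrative:

```python
# Minimal sketch of the screening in claim 11; names and sizes are
# illustrative assumptions, not the patent's identifiers.
from collections import OrderedDict

class ScreeningCache:
    def __init__(self, l1_size, l2_size):
        self.l1 = OrderedDict()     # first-level LRU list (cooler data)
        self.l2 = OrderedDict()     # second-level LRU list (hotter data)
        self.ghost = OrderedDict()  # history of pages evicted from level 1
        self.l1_size, self.l2_size = l1_size, l2_size

    def access(self, page_id, data):
        if page_id in self.l2:                  # already hot: refresh MRU end
            self.l2.move_to_end(page_id)
        elif page_id in self.l1:                # second hit: promote to level 2
            self.l1.pop(page_id)
            self.l2[page_id] = data
            if len(self.l2) > self.l2_size:     # level 2 full: this page would
                self.l2.popitem(last=False)     # be staged for the SSD
        else:                                   # first touch: MRU end of level 1
            self.l1[page_id] = data
            if len(self.l1) > self.l1_size:     # level 1 full: LRU eviction,
                old, _ = self.l1.popitem(last=False)
                self.ghost[old] = True          # history kept in Ghost buffer
                if len(self.ghost) > self.l1_size:
                    self.ghost.popitem(last=False)
```

Requiring a second hit before promotion keeps touch-once pages out of the hot list, so only re-referenced data ever reaches the SSD.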
12. The SSD-based cache management system as claimed in claim 8, characterized in that the adaptive adjustment and aggregation module comprises:
an adaptive adjustment module, configured to attach a corresponding Shadow buffer to each of the two-level LRU linked lists, each Shadow buffer storing the access information of the pages most recently evicted from its linked list, and each holding the same number of access-information records as the DRAM cache; the two Shadow buffers dynamically adjust the sizes of the two-level LRU linked lists through a target value TargetSize, which is the target size of the first-level LRU linked list; its initial value is set to half of the DRAM cache size, and subsequent changes in the load are then adapted to.
13. The SSD-based cache management system as claimed in claim 12, characterized in that the adaptive adjustment module further comprises:
a history preservation module, configured such that after a data page is evicted from the first-level LRU linked list, its history information is kept in the first-level LRU linked list's Shadow buffer, and likewise, after data in the second-level LRU linked list is evicted, its history information is saved in the second-level LRU linked list's Shadow buffer;
a first-level lengthening module, configured to increase the length of the first-level LRU linked list when data hits in the first-level LRU linked list's Shadow buffer, i.e. TargetSize increases;
a second-level lengthening module, configured to increase the length of the second-level LRU linked list when data hits in the second-level LRU linked list's Shadow buffer, i.e. TargetSize decreases.
14. The SSD-based cache management system as claimed in claim 8, characterized in that the adaptive adjustment and aggregation module further comprises:
an aggregation module, configured such that, after screening by the two-level LRU linked lists in the DRAM cache, the second-level LRU linked list stores the hotter data, its hotness decreasing from the MRU end to the LRU end; when the second-level LRU linked list is full, the page at its LRU end is evicted into the staging buffer; each eviction from the second-level linked list accumulates one page in the staging buffer, and after a period of time the buffer reaches 64 pages, the size of one Cluster, which is then in the ready state; when the next evicted page arrives and the staging buffer is full, the Cluster is flushed to the SSD and the buffer is emptied.
CN201210160350.5A 2012-05-22 2012-05-22 SSD-based (Solid State Disk) cache management method and system Active CN102760101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210160350.5A CN102760101B (en) 2012-05-22 2012-05-22 SSD-based (Solid State Disk) cache management method and system

Publications (2)

Publication Number Publication Date
CN102760101A CN102760101A (en) 2012-10-31
CN102760101B true CN102760101B (en) 2015-03-18

Family

ID=47054564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210160350.5A Active CN102760101B (en) 2012-05-22 2012-05-22 SSD-based (Solid State Disk) cache management method and system

Country Status (1)

Country Link
CN (1) CN102760101B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI553478B (en) * 2015-09-23 2016-10-11 Realtek Semiconductor Corp. Device capable of using external volatile memory and device capable of releasing internal volatile memory

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198027A (en) * 2013-02-27 2013-07-10 天脉聚源(北京)传媒科技有限公司 Method and device for storing and providing files
CN104077242B (en) * 2013-03-25 2017-03-29 华为技术有限公司 A kind of buffer memory management method and device
CN103150136B (en) * 2013-03-25 2014-07-23 中国人民解放军国防科学技术大学 Implementation method of least recently used (LRU) policy in solid state drive (SSD)-based high-capacity cache
CN103744624B (en) * 2014-01-10 2017-09-22 浪潮电子信息产业股份有限公司 A kind of system architecture for realizing the data cached selectivity upgradings of storage system SSD
CN105094686B (en) 2014-05-09 2018-04-10 华为技术有限公司 Data cache method, caching and computer system
CN103984736B (en) * 2014-05-21 2017-04-12 西安交通大学 Efficient buffer management method for NAND flash memory database system
CN104317958B (en) * 2014-11-12 2018-01-16 北京国双科技有限公司 A kind of real-time data processing method and system
CN104462388B (en) * 2014-12-10 2017-12-29 上海爱数信息技术股份有限公司 A kind of redundant data method for cleaning based on tandem type storage medium
US9588901B2 (en) 2015-03-27 2017-03-07 Intel Corporation Caching and tiering for cloud storage
CN104991743B (en) * 2015-07-02 2018-01-19 西安交通大学 Loss equalizing method applied to solid state hard disc resistance-variable storing device caching
CN105117174A (en) * 2015-08-31 2015-12-02 北京神州云科数据技术有限公司 Data hotness and data density based cache back-writing method and system
CN105488157A (en) * 2015-11-27 2016-04-13 浪潮软件股份有限公司 Data transmission method and device
CN105573669A (en) * 2015-12-11 2016-05-11 上海爱数信息技术股份有限公司 IO read speeding cache method and system of storage system
CN107463509B (en) * 2016-06-05 2020-12-15 华为技术有限公司 Cache management method, cache controller and computer system
CN106294197B (en) * 2016-08-05 2019-12-13 华中科技大学 Page replacement method for NAND flash memory
CN106527988B (en) * 2016-11-04 2019-07-26 郑州云海信息技术有限公司 A kind of method and device of solid state hard disk Data Migration
CN107015865B (en) * 2017-03-17 2019-12-17 华中科技大学 DRAM cache management method and system based on time locality
CN107133183B (en) * 2017-04-11 2020-06-30 深圳市联云港科技有限公司 Cache data access method and system based on TCMU virtual block device
CN107491523B (en) 2017-08-17 2020-05-05 三星(中国)半导体有限公司 Method and device for storing data object
CN109032969A (en) * 2018-06-16 2018-12-18 温州职业技术学院 A kind of caching method of the LRU-K algorithm based on K value dynamic monitoring
CN109324759A (en) * 2018-09-17 2019-02-12 山东浪潮云投信息科技有限公司 The processing terminal of big data platform, the method read data and write data
CN110309015A (en) * 2019-03-25 2019-10-08 深圳市德名利电子有限公司 A kind of method for writing data and device and equipment based on Ssd apparatus
CN111796757B (en) * 2019-04-08 2022-12-13 中移(苏州)软件技术有限公司 Solid state disk cache region management method and device
CN112015678A (en) * 2019-05-30 2020-12-01 北京京东尚科信息技术有限公司 Log caching method and device
US11074189B2 (en) 2019-06-20 2021-07-27 International Business Machines Corporation FlatFlash system for byte granularity accessibility of memory in a unified memory-storage hierarchy
CN111880739A (en) * 2020-07-29 2020-11-03 北京计算机技术及应用研究所 Near data processing system for super fusion equipment
CN111880900A (en) * 2020-07-29 2020-11-03 北京计算机技术及应用研究所 Design method of near data processing system for super fusion equipment
CN112559452B (en) * 2020-12-11 2021-12-17 北京云宽志业网络技术有限公司 Data deduplication processing method, device, equipment and storage medium
CN113050894A (en) * 2021-04-20 2021-06-29 南京理工大学 Agricultural spectrum hybrid storage system cache replacement algorithm based on cuckoo algorithm
CN116561020B (en) * 2023-05-15 2024-04-09 合芯科技(苏州)有限公司 Request processing method, device and storage medium under mixed cache granularity

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0114944B1 (en) * 1982-12-28 1989-09-27 International Business Machines Corporation Method and apparatus for controlling a single physical cache memory to provide multiple virtual caches
US5608890A (en) * 1992-07-02 1997-03-04 International Business Machines Corporation Data set level cache optimization
CN102118309A (en) * 2010-12-31 2011-07-06 中国科学院计算技术研究所 Method and system for double-machine hot backup
CN102156753A (en) * 2011-04-29 2011-08-17 中国人民解放军国防科学技术大学 Data page caching method for file system of solid-state hard disc
CN102362464A (en) * 2011-04-19 2012-02-22 华为技术有限公司 Memory access monitoring method and device
CN102364474A (en) * 2011-11-17 2012-02-29 中国科学院计算技术研究所 Metadata storage system for cluster file system and metadata management method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7380059B2 (en) * 2003-05-16 2008-05-27 Pillar Data Systems, Inc. Methods and systems of cache memory management and snapshot operations

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"FClock: An Adaptive Buffer Management Algorithm for SSD"; Tang Xian et al.; Chinese Journal of Computers; 2010-08-31; Vol. 33, No. 8; full text *

Also Published As

Publication number Publication date
CN102760101A (en) 2012-10-31

Similar Documents

Publication Publication Date Title
CN102760101B (en) SSD-based (Solid State Disk) cache management method and system
US10241919B2 (en) Data caching method and computer system
Jiang et al. S-FTL: An efficient address translation for flash memory by exploiting spatial locality
CN107066393B (en) Method for improving mapping information density in address mapping table
CN106547476B (en) Method and apparatus for data storage system
US8650362B2 (en) System for increasing utilization of storage media
KR101726824B1 (en) Efficient Use of Hybrid Media in Cache Architectures
EP2735978B1 (en) Storage system and management method used for metadata of cluster file system
JP6613375B2 (en) Profiling cache replacement
CN104794064B (en) A kind of buffer memory management method based on region temperature
US20090070526A1 (en) Using explicit disk block cacheability attributes to enhance i/o caching efficiency
CN107391398B (en) Management method and system for flash memory cache region
CN108762671A (en) Mixing memory system and its management method based on PCM and DRAM
Liu et al. PCM-based durable write cache for fast disk I/O
KR101297442B1 (en) Nand flash memory including demand-based flash translation layer considering spatial locality
US10318180B1 (en) Metadata paging mechanism tuned for variable write-endurance flash
CN108845957B (en) Replacement and write-back self-adaptive buffer area management method
CN110532200B (en) Memory system based on hybrid memory architecture
CN109002400B (en) Content-aware computer cache management system and method
CN109478164A (en) For storing the system and method for being used for the requested information of cache entries transmission
CN102354301A (en) Cache partitioning method
CN103885890A (en) Replacement processing method and device for cache blocks in caches
CN108664217A (en) A kind of caching method and system reducing the shake of solid-state disc storaging system write performance
KR101351550B1 (en) Dual Buffer Architecture and Data Management for Non-Volatile based Main Memory
Fan et al. Extending SSD lifespan with comprehensive non-volatile memory-based write buffers

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: HUAWEI TECHNOLOGY CO., LTD.

Effective date: 20130116

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 100080 HAIDIAN, BEIJING TO: 100190 HAIDIAN, BEIJING

TA01 Transfer of patent application right

Effective date of registration: 20130116

Address after: No. 6 Zhongguancun Academy of Sciences South Road, Haidian District, Beijing 100190

Applicant after: Institute of Computing Technology, Chinese Academy of Sciences

Applicant after: Huawei Technologies Co., Ltd.

Address before: No. 6 Zhongguancun Academy of Sciences South Road, Haidian District, Beijing 100080

Applicant before: Institute of Computing Technology, Chinese Academy of Sciences

C14 Grant of patent or utility model
GR01 Patent grant