Summary of the invention
For addressing the above problem: in view of the performance characteristics of SSDs, the object of the present invention is to solve the problems of the prior art, namely excessive small-granularity random writes to the SSD and severe cache pollution. A new cache architecture is proposed, composed of two levels, DRAM and SSD, in which the DRAM serves as the first-level cache and the SSD as the second-level cache of the system.
The present invention discloses an SSD-based cache management method, comprising:
Step 1: sending a read/write request and checking whether the data hits in the DRAM cache by searching its hash table, to judge whether said data exists; if it exists, reading the data from the DRAM cache and returning it for this request; if it does not exist in the DRAM cache, reading the data from the HDD into the DRAM cache and then executing Step 2;
Step 2: screening the data with a two-level LRU list and a ghost buffer to verify the hotness of the data;
Step 3: adaptively computing the lengths of the two LRU lists; when the second-level LRU list in the DRAM cache is full, evicting at page-cluster granularity, that is, aggregating the last C pages at the LRU end of the second-level list into one large block, evicting them from the DRAM cache together, and writing them to the SSD at coarse granularity, where one page cluster is assumed to contain C pages and C is an integer multiple of the number of pages in an SSD block.
In the described SSD-based cache management method, said Step 1 comprises:
Step 21: if the data exists, i.e., it hits in the DRAM cache, the data in the DRAM cache is returned directly and the request completes;
Step 22: if it does not exist in the hash table, the SSD hash table is queried next to judge whether the data is stored in the SSD;
Step 23: if it hits in the SSD, the data is read out from the SSD and the request completes.
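The lookup in Steps 21-23 can be sketched as follows. This is only an illustrative Python sketch, not the patent's implementation: the two hash tables are modelled as plain dicts, and `read_hdd` is a hypothetical stand-in for the HDD read path.

```python
def lookup(page_id, dram_table, ssd_table, read_hdd):
    """Return (data, source) for a requested page (illustrative names)."""
    if page_id in dram_table:            # Step 21: hit in the DRAM cache
        return dram_table[page_id], "DRAM"
    if page_id in ssd_table:             # Steps 22-23: query the SSD hash table
        return ssd_table[page_id], "SSD"
    data = read_hdd(page_id)             # miss in both levels: read from the HDD
    dram_table[page_id] = data           # the data enters the DRAM cache only
    return data, "HDD"
```

Note that on an HDD read the page enters only the DRAM cache, matching the exclusive two-level design.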
The described SSD-based cache management method further comprises:
data read from the HDD is copied directly into the DRAM cache, and data is evicted into the SSD cache only in part, after screening in the DRAM cache; the contents of the DRAM cache and the SSD do not overlap, so the cache space is the sum of the two spaces; if the request also misses in the SSD, it is sent to the HDD to be read.
In the described SSD-based cache management method, said Step 2 comprises:
Step 41: when data enters the DRAM cache for the first time, it is first placed at the MRU end of the first-level LRU list; both LRU lists reside in the DRAM cache;
Step 42: the size of the first-level LRU list is set to a proportion p1 of the whole DRAM cache size, where 0 < p1 < 1;
Step 43: when the first-level list is full, replacement is performed in LRU order, and the information of the evicted page is saved in the ghost buffer so that its access history is preserved; this history records data whose access hotness is not high;
Step 44: when data in the first-level LRU list is hit a second time, it is promoted into the second-level list;
Step 45: when the second-level LRU list is full, its content is evicted to the SSD, yielding the data with the higher access hotness.
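Steps 41-45 can be illustrated with a minimal Python sketch. This is a hedged illustration under simplifying assumptions (pages carry no payload, and an `OrderedDict` stands in for each LRU list with its MRU end at the back); the class and method names are not from the patent.

```python
from collections import OrderedDict

class TwoLevelLRU:
    """Sketch of the two-level screening: L1 for first accesses, L2 for re-hits."""

    def __init__(self, l1_cap, l2_cap):
        self.l1, self.l2 = OrderedDict(), OrderedDict()
        self.ghost = OrderedDict()           # page ids only, no data (Step 43)
        self.l1_cap, self.l2_cap = l1_cap, l2_cap

    def access(self, page):
        if page in self.l2:                  # already hot: refresh MRU position
            self.l2.move_to_end(page)
        elif page in self.l1:                # second hit: promote to L2 (Step 44)
            del self.l1[page]
            self.l2[page] = True
            if len(self.l2) > self.l2_cap:   # Step 45: L2 overflow goes to the SSD
                victim, _ = self.l2.popitem(last=False)
                return ("to_ssd", victim)
        else:                                # first access: L1 MRU end (Step 41)
            self.l1[page] = True
            if len(self.l1) > self.l1_cap:   # Step 43: record eviction in the ghost
                victim, _ = self.l1.popitem(last=False)
                self.ghost[victim] = True
        return None
```

Only pages hit at least twice ever reach the `to_ssd` path, which is the screening effect the method aims for.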
In the described SSD-based cache management method, the adaptive computation in said Step 3 comprises:
Step 51: a corresponding Shadowbuffer is added for each of the two LRU lists, each storing the access information of the pages most recently evicted from the list at its level; together the two Shadowbuffers record as many access entries as the DRAM cache holds;
Step 52: the two Shadowbuffers dynamically resize the two LRU lists through a target value TargetSize, which is the target size of the first-level LRU list; its initial value is set to half the DRAM cache size, and it then adapts to subsequent changes in the load.
In the described SSD-based cache management method, the adaptation process comprises:
Step 61: after a data page is evicted from the first-level LRU list, its history information is kept in the first-level list's ShadowBuffer; likewise, data evicted from the second-level LRU list is saved in the second-level list's Shadowbuffer;
Step 62: when data hits in the first-level list's Shadowbuffer, the first-level LRU list needs to grow: TargetSize++;
Step 63: when data hits in the second-level list's Shadowbuffer, the second-level LRU list needs to grow: TargetSize--.
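The adjustment in Steps 61-63 can be expressed as a small helper. A sketch under illustrative names; the patent only specifies the ++/-- updates, so the clamping bounds here are an added assumption to keep the target within the cache.

```python
def adapt_target_size(target, hit_in_l1_shadow, hit_in_l2_shadow,
                      minimum=0, maximum=None):
    """Update TargetSize (the first-level list's target length)."""
    if hit_in_l1_shadow:        # Step 62: L1 should have been longer
        target += 1
    if hit_in_l2_shadow:        # Step 63: L2 should grow, so L1's target shrinks
        target -= 1
    if maximum is not None:     # assumed clamp, not stated in the patent
        target = min(target, maximum)
    return max(target, minimum)
```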
In the described SSD-based cache management method, said Step 3 also comprises:
Step 71: after the screening by the two-level LRU lists in the DRAM cache, the second-level LRU list stores the comparatively hot data, with hotness decreasing from the MRU end to the LRU end;
Step 72: when the second-level LRU list is full, the page at its LRU end is evicted into a staging buffer;
Step 73: each eviction from the second-level list adds one page to the staging buffer, so after a while the buffer reaches 64 pages, the size of one Cluster, and the Cluster is then in the ready state;
Step 74: when the next evicted page enters the staging buffer, the buffer is full, so the Cluster is flushed to the SSD and the buffer is emptied; Steps 71-74 then repeat.
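Steps 71-74 describe a staging buffer that fills to one cluster and flushes on the next eviction. A minimal sketch, assuming C = 64 pages per cluster and a caller-supplied `flush_to_ssd` function (both names are illustrative):

```python
class ClusterBuffer:
    """Staging buffer that aggregates evicted pages into page clusters."""

    def __init__(self, cluster_pages=64):
        self.pages = []
        self.cluster_pages = cluster_pages

    def evict_page(self, page, flush_to_ssd):
        """Accumulate one evicted page; flush a full cluster as one coarse write."""
        self.pages.append(page)
        # Step 74: the buffer holds one ready cluster plus the newly arrived page,
        # so the ready cluster is flushed to the SSD and removed from the buffer.
        if len(self.pages) > self.cluster_pages:
            flush_to_ssd(self.pages[:self.cluster_pages])
            self.pages = self.pages[self.cluster_pages:]
```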
The present invention also discloses an SSD-based cache management system, comprising:
a DRAM cache checking module, used to send read/write requests and check whether the data hits in the DRAM cache by searching its hash table, to judge whether said data exists; if it exists, the data is read from the DRAM cache and returned for this request; if it does not exist in the DRAM cache, the data is read from the HDD into the DRAM cache;
a data screening module, used to screen the data with a two-level LRU list and a ghost buffer to verify the hotness of the data;
an adaptive-resizing and aggregation module, used to adaptively compute the lengths of the two LRU lists; when the second-level LRU list in the DRAM cache is full, eviction is performed at page-cluster granularity: the last C pages at the LRU end of the second-level list are aggregated into one large block, evicted from the DRAM cache together, and written to the SSD at coarse granularity, where one page cluster is assumed to contain C pages and C is an integer multiple of the number of pages in an SSD block.
In the described SSD-based cache management system, said DRAM cache checking module comprises:
a data hit module: if the data exists, i.e., it hits in the DRAM cache, the data in the DRAM cache is returned directly and the request completes;
an SSD query module: if the data does not exist in the hash table, the SSD hash table is queried next to judge whether the data is stored in the SSD;
an SSD read module: if the data hits in the SSD, it is read out from the SSD and the request completes.
The described SSD-based cache management system further comprises:
data read from the HDD is copied directly into the DRAM, and data is evicted into the SSD cache only in part, after screening in the DRAM; the contents of the DRAM cache and the SSD do not overlap, so the cache space is the sum of the two spaces; if the request also misses in the SSD, it is sent to the HDD to be read.
In the described SSD-based cache management system, said data screening module comprises:
an MRU placement module, used to place data at the MRU end of the first-level LRU list when it enters the DRAM cache for the first time; both LRU lists reside in the DRAM cache;
a ratio setting module, used to set the size of the first-level LRU list to a proportion p1 of the whole DRAM cache size, where 0 < p1 < 1;
a replacement module, used to replace in LRU order when the first-level list is full, saving the information of the evicted page in the ghost buffer so that its access history is preserved; this history records data whose access hotness is not high;
a second-hit module, used to promote data in the first-level LRU list into the second-level list when it is hit a second time;
a hotness module, used to evict the content of the second-level LRU list to the SSD when that list is full, yielding the data with the higher access hotness.
In the described SSD-based cache management system, said adaptive-resizing and aggregation module comprises:
an adaptive-resizing module, used to add a corresponding Shadowbuffer for each of the two LRU lists, each storing the access information of the pages most recently evicted from the list at its level, the two Shadowbuffers together recording as many access entries as the DRAM cache holds; the two Shadowbuffers dynamically resize the two LRU lists through a target value TargetSize, which is the target size of the first-level LRU list; its initial value is set to half the DRAM cache size, and it then adapts to subsequent changes in the load.
In the described SSD-based cache management system, said adaptive-resizing module further comprises:
a history saving module, used to keep the history information in the first-level list's ShadowBuffer after a data page is evicted from the first-level LRU list, and likewise to save data evicted from the second-level LRU list in the second-level list's Shadowbuffer;
a first-level growth module, used to grow the first-level LRU list when data hits in the first-level list's Shadowbuffer: TargetSize++;
a second-level growth module, used to grow the second-level LRU list when data hits in the second-level list's Shadowbuffer: TargetSize--.
In the described SSD-based cache management system, said adaptive-resizing and aggregation module also comprises:
an aggregation module, used as follows: after the screening by the two-level LRU lists in the DRAM cache, the second-level LRU list stores the comparatively hot data, with hotness decreasing from the MRU end to the LRU end; when the second-level LRU list is full, the page at its LRU end is evicted into a staging buffer; each eviction from the second-level list adds one page to the staging buffer, so after a while the buffer reaches 64 pages, the size of one Cluster, and the Cluster is then in the ready state; when the next evicted page enters the staging buffer, the buffer is full, so the Cluster is flushed to the SSD and the buffer is emptied.
The beneficial effects of the present invention are:
1. In the present invention, the flow of data differs from that of a traditional cache. Data read from the HDD does not enter the SSD directly; it is first screened in the DRAM, and only selected data enters the SSD. The technical effect is that data enters the SSD only after the DRAM's screening and filtering, i.e., the "hotness" of the data is first observed in the DRAM. The write operations reaching the SSD are greatly reduced, and SSD cache pollution is reduced at the same time.
2. For the above cache structure, the present invention designs a data screening method. The technical effect is that the data entering the SSD is the frequently accessed data, so the SSD cache space can essentially be put to full use; moreover, the contents of the DRAM and SSD caches differ, which makes the use of the two-level cache space more effective.
3. Against the problem that data traditionally enters the SSD in a small-granularity random-write I/O pattern, the present invention designs a data aggregation technique. Several pages are first aggregated in the DRAM into a coarse-grained page cluster (cluster), which is then written to the SSD at page-cluster granularity; replacement is likewise performed at page-cluster granularity. The technical effect is to avoid the small-granularity random writes otherwise brought onto the SSD. Correspondingly, when the cache is full, replacement is done at coarse granularity, evicting a number of consecutive pages at once; this matches the way data enters, thereby improving the performance of the caching system.
In summary, the present invention adopts the three key techniques above, which optimize the SSD caching system to a considerable degree and thus exploit the SSD's performance more effectively. In terms of system response time, performance improves because fewer requests are sent to the HDD. Second, the cache pollution problem on the SSD is avoided. Finally, small-granularity random writes to the SSD are effectively reduced, so the lifetime of the SSD is also extended.
Embodiment
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
New caching system
The new caching system is composed of DRAM, SSD, and HDD, as shown in Figure 2. The SSD sits between the DRAM and the HDD and serves as a cache for the HDD; data is stored persistently on the HDD; the DRAM serves as the first-level cache and the SSD as the second-level cache, so together DRAM and SSD form a two-level cache for the HDD. The DRAM must record the content stored in the DRAM cache itself; to quickly locate whether a given page exists in the DRAM cache, the cached pages are managed with a hash table. The content in the SSD must also be recorded: the relevant information about data in the SSD cache is kept in the DRAM, which likewise requires a hash table to record that content. The DRAM therefore needs to store the following information:
LRU (Least Recently Used) list information; this list has a head pointer and a tail pointer and is used for insertion, replacement, and similar operations;
the DRAM content hash table, which indexes the DRAM content and is used to look up whether data hits in the DRAM cache;
the LRU list of SSD pages, which, like the LRU list for the DRAM, has a head pointer and a tail pointer and is used for insertion, replacement, and similar operations;
the SSD content hash table, which indexes the SSD content and is used to look up whether data hits in the SSD.
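The four pieces of bookkeeping above can be grouped into one structure. A hypothetical Python sketch, where each `OrderedDict` plays the roles of both the LRU list (its order gives the head and tail) and the content hash table (its keys give O(1) lookup); the class and field names are illustrative:

```python
from collections import OrderedDict
from dataclasses import dataclass, field

@dataclass
class CacheMetadata:
    # DRAM-level LRU list + content hash table in one structure
    dram_lru: OrderedDict = field(default_factory=OrderedDict)
    # SSD-page LRU list + content hash table, also kept in DRAM
    ssd_lru: OrderedDict = field(default_factory=OrderedDict)

    def in_dram(self, page):
        return page in self.dram_lru     # hash-table lookup, O(1)

    def in_ssd(self, page):
        return page in self.ssd_lru
```

In the actual system the lists would hold metadata entries rather than the page data itself; the sketch only shows how one ordered map can serve as both list and index.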
A request is processed as follows. Whenever the CPU/cache issues a read request, the first-level DRAM cache is checked first: the hash table is searched to see whether the requested page exists in it. If it exists, i.e., the request hits in the DRAM cache, the data in the DRAM cache is returned directly and the request completes. If it is not in the hash table, the SSD hash table is queried next to see whether the data is stored in the SSD. If it hits in the SSD, the data is read out from the SSD and the request completes. Note that this data is not copied into the first-level DRAM cache; the contents of the two levels do not overlap, that is, they are exclusive, and the benefit is that the effective cache space is the sum of the two spaces. If the request also misses in the SSD, it is sent to the HDD to be read. When that request completes, the data is saved into the DRAM cache, i.e., the first-level cache; this is based on temporal locality, since the data may well be accessed again later.
Note the replacement operation when the cache is full. Because a request may be a write, if a write hits in the DRAM cache, its page must be marked "dirty"; and if such a page is selected for replacement, its content must first be written back to the HDD before it can be removed from the DRAM cache.
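The dirty-page rule just stated can be sketched as follows; `write_hdd` is a hypothetical stand-in for the HDD write-back path, and the dirty set is an assumed bookkeeping structure, not named in the text:

```python
def evict_page(cache, dirty, page, write_hdd):
    """Remove `page` from the DRAM cache, writing it back first if dirty."""
    if page in dirty:
        write_hdd(page, cache[page])   # flush the modified content to the HDD
        dirty.discard(page)            # the page is clean once written back
    return cache.pop(page)             # only then remove it from the cache
```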
The data screening strategy
To reduce cache pollution and reduce the small-granularity random writes to the SSD, a data screening strategy is essential. In a traditional caching system, any data brought in from the HDD must be placed in the cache; the present invention instead screens the data in the DRAM using a two-level LRU list combined with a Ghost buffer. The details are shown in Fig. 3. If a single LRU list were used for management, then, as mentioned above, the hotness of the data could not be verified and the cache would be seriously polluted. A two-level LRU list solves this problem effectively; the concrete operation flow is as follows:
When data enters the DRAM cache for the first time, it is first placed at the MRU (Most Recently Used) end of the first-level LRU list. Both LRU lists reside in the DRAM cache. The size of the first-level list is set to a certain proportion of the whole DRAM cache size (denoted p1, 0 < p1 < 1), precisely in order to reduce cache pollution. When the list is full, replacement is performed in LRU order and the information of the evicted page is saved in the Ghost buffer; note that the Ghost buffer keeps only the access information of the page, not its content, so one page (4 KB) needs only 16 bytes. The Ghost buffer can hold access information covering the combined capacity of the DRAM cache and the SSD cache while occupying little memory: for a 1 GB SSD plus a 128 MB DRAM cache, a space of only about 4 MB suffices to store the access history. When data in the first-level LRU list is hit a second time, it is promoted into the second-level list. When the second-level LRU list is full, the content at its LRU end is evicted to the SSD; this is the data with higher access hotness, and its access history record is preserved as well. Seen as a whole, the SSD and this second-level LRU list form one large LRU list.
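The memory estimate in the paragraph above can be checked with a few lines of arithmetic (16 bytes of access metadata per 4 KB page, tracked over the combined 1 GB SSD plus 128 MB DRAM capacity):

```python
PAGE = 4 * 1024        # page size: 4 KB
RECORD = 16            # bytes of access metadata per page in the Ghost buffer

# pages covered by the Ghost buffer: 1 GB SSD cache + 128 MB DRAM cache
pages = (1 * 1024**3 + 128 * 1024**2) // PAGE
ghost_bytes = pages * RECORD
```

This gives 294,912 tracked pages at 16 bytes each, i.e. 4,718,592 bytes (about 4.5 MB), consistent with the text's estimate of roughly 4 MB.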
When a requested page misses in the cache, this strategy adds one query of the Ghost buffer: if the page's access information is found in the Ghost buffer, the data is considered comparatively hot, and after being read from disk it is therefore placed directly into the second-level LRU list. The DRAM cache therefore also needs to keep the following information:
the access history in the Ghost buffer, managed in LRU order; if full, old entries are replaced;
the Ghost buffer hash table, used to query the history access records.
With the above algorithm running in the DRAM, the data is screened so that the data entering the SSD is comparatively "hot". However, when the DRAM is full, a victim page must be selected for eviction. Should the page be selected from the first-level LRU list or from the second-level LRU list? If victims were always taken from only one of them, the other list would inevitably shrink over time. And if each of the two lists were given a fixed size, then, because the two lists respectively capture the two parameters Recency and Frequency, the algorithm could not respond to changes in the load if those sizes never changed. The two parameters must therefore adapt as the load varies.
The adaptive-length algorithm for the two-level list:
In other words, a parameter must be designed to govern the size allocation ratio L1:L2 between the two lists. For this we design a dynamic adjustment algorithm, shown in Figure 4:
The Shadowbuffer in Figure 4 is in fact functionally identical to the Ghostbuffer: both store only the access information of recently accessed pages, not the data itself, so their overhead is very small. It is called a Shadowbuffer merely to distinguish it from the Ghostbuffer above. Here a corresponding Shadowbuffer is added for each level of the LRU list, each storing the access information of the pages most recently evicted from the list at its level. Together, the two Shadowbuffers store as many access records as the DRAM cache holds.
The two Shadowbuffers are used to dynamically resize the two-level LRU list. To this end we set a target value TargetSize, which is the target size of the first-level LRU list; that is, the target size of the first-level LRU list is TargetSize. Its initial value is set to half the DRAM size, and thereafter it changes dynamically as the load varies. The adjustment proceeds as follows:
After a data page is evicted from the first-level LRU list, its history information is kept in the L1 ShadowBuffer; likewise, data evicted from the second-level LRU list is saved in the L2 ShadowBuffer.
When data hits in the L1 Shadowbuffer, we conclude that the first-level LRU list needs to grow: TargetSize++;
When data hits in the L2 Shadowbuffer, we conclude that the second-level LRU list needs to grow: TargetSize--.
The reason for adjusting in this way is that each ShadowBuffer stores the pages evicted from the list at its level. If a page request hits in the L1 Shadow, the request has missed in the DRAM cache; hitting in the ShadowBuffer then means the L1 list was not long enough, because had it been long enough, the current request would not have missed, so we make its TargetSize larger. By the same reasoning, for L2 we should make L2 larger, which means TargetSize becomes smaller. This completes the dynamic resizing of the two-level LRU list.
When the DRAM cache is full, the victim page is selected from L1 if the size of the first-level list exceeds TargetSize; otherwise the victim page is selected from L2.
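The victim-selection rule just stated can be sketched directly (illustrative names; an `OrderedDict` again models each list, with its LRU end at the front):

```python
from collections import OrderedDict

def select_victim(l1, l2, target_size):
    """Pick the eviction victim: from L1 if it exceeds TargetSize, else from L2."""
    if len(l1) > target_size and l1:
        page, _ = l1.popitem(last=False)   # LRU end of the first-level list
        return "L1", page
    page, _ = l2.popitem(last=False)       # otherwise the LRU end of L2
    return "L2", page
```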
In summary, screening the data before it enters the SSD brings the following benefits:
The data entering the SSD is effectively controlled, so that it is usually "hot", frequently accessed data, while data accessed only once and never again is prevented from entering the SSD. This effectively improves the cache utilization of the SSD.
The write operations sent to the SSD are reduced. Thanks to the screening step, not all data read from the HDD has to pass through the SSD cache, which greatly reduces the number of write operations issued to the SSD and extends the service life of the SSD.
The data stored in the DRAM and in the SSD is disjoint, i.e., exclusive. The useful space of the two-level cache is therefore the sum of the two levels' spaces, which lets the whole caching system use the full space of both the DRAM and the SSD more efficiently.
Aggregated writes to the SSD and coarse-grained replacement
Tests show that coarse-grained I/O operations on an SSD perform much better than small-granularity ones. This is caused by the SSD's inability to write in place: small-granularity operations fragment the SSD internally and thereby reduce performance. The present invention therefore adopts an aggregated-write technique. When the second-level LRU list in the DRAM cache is full, replacement is performed not at page granularity but at page-cluster granularity. One page cluster (assumed to be C pages) is set to the SSD's erase granularity (erase block) or a multiple of it; that is, the last C pages at the LRU end are evicted from the DRAM cache together as a whole, aggregated into one large block, and then written to the SSD. The flow is shown in Figure 5, and the algorithm is as follows:
1. After the screening by the two-level LRU lists in the DRAM cache, the second-level LRU list stores the comparatively hot data, with hotness decreasing from the MRU end to the LRU end;
2. When the second-level LRU list is full, the page at its LRU end is evicted into a staging buffer;
3. Each eviction from the second-level list adds one page to the staging buffer, so after a while the buffer reaches 64 pages; at that point it has reached the size of one Cluster, and the Cluster is in the ready state;
4. When the next evicted page enters the staging buffer, the buffer is full, so the Cluster is flushed to the SSD and the buffer is emptied; steps 1-4 then repeat.
After the aggregation technique is added, the information of all page clusters in the SSD must also be managed, namely a page-cluster-level LRU, so the DRAM cache additionally needs the following information:
a page-cluster-level LRU, which must record the weight of each page cluster.
When the SSD is full, replacement is likewise performed at page-cluster granularity. This avoids the small-granularity random writes that would otherwise be introduced both when data enters the SSD and when it is evicted from it.
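Cluster-granularity replacement in the SSD can be sketched with the cluster-level LRU described above; the function and parameter names are illustrative assumptions, not the patent's interface:

```python
from collections import OrderedDict

def insert_cluster(ssd_lru, cluster_id, pages, capacity):
    """Insert a ready cluster into the SSD's cluster-level LRU.

    If the SSD is full, one whole cluster of C pages is evicted at once,
    rather than a single page, and is returned to the caller.
    """
    evicted = None
    if len(ssd_lru) >= capacity:                 # SSD full: evict coldest cluster
        _, evicted = ssd_lru.popitem(last=False)
    ssd_lru[cluster_id] = pages                  # new cluster at the MRU end
    return evicted
```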
As shown in Figure 6, the present invention discloses an SSD-based cache management method, comprising:
Step 1: sending a read/write request and checking whether the data hits in the DRAM cache by searching its hash table, to judge whether said data exists; if it exists, reading the data from the DRAM cache and returning it for this request; if it does not exist in the DRAM cache, reading the data from the HDD into the DRAM cache and then executing Step 2;
Step 2: screening the data with a two-level LRU list and a ghost buffer to verify the hotness of the data;
Step 3: adaptively computing the lengths of the two LRU lists; when the second-level LRU list in the DRAM cache is full, evicting at page-cluster granularity, that is, aggregating the last C pages at the LRU end of the second-level list into one large block, evicting them from the DRAM cache together, and writing them to the SSD at coarse granularity, where one page cluster is assumed to contain C pages and C is an integer multiple of the number of pages in an SSD block.
In the described SSD-based cache management method, said Step 1 comprises:
Step 21: if the data exists, i.e., it hits in the DRAM cache, the data in the DRAM cache is returned directly and the request completes;
Step 22: if it does not exist in the hash table, the SSD hash table is queried next to judge whether the data is stored in the SSD;
Step 23: if it hits in the SSD, the data is read out from the SSD and the request completes.
The described SSD-based cache management method further comprises:
data read from the HDD is copied directly into the DRAM cache, and data is evicted into the SSD cache only in part, after screening in the DRAM cache; the contents of the DRAM cache and the SSD do not overlap, so the cache space is the sum of the two spaces; if the request also misses in the SSD, it is sent to the HDD to be read.
In the described SSD-based cache management method, said Step 2 comprises:
Step 41: when data enters the DRAM cache for the first time, it is first placed at the MRU end of the first-level LRU list; both LRU lists reside in the DRAM cache;
Step 42: the size of the first-level LRU list is set to a proportion p1 of the whole DRAM cache size, where 0 < p1 < 1;
Step 43: when the first-level list is full, replacement is performed in LRU order, and the information of the evicted page is saved in the ghost buffer so that its access history is preserved; this history records data whose access hotness is not high;
Step 44: when data in the first-level LRU list is hit a second time, it is promoted into the second-level list;
Step 45: when the second-level LRU list is full, its content is evicted to the SSD, yielding the data with the higher access hotness.
In the described SSD-based cache management method, the adaptive computation in said Step 3 comprises:
Step 51: a corresponding Shadowbuffer is added for each of the two LRU lists, each storing the access information of the pages most recently evicted from the list at its level; together the two Shadowbuffers record as many access entries as the DRAM cache holds;
Step 52: the two Shadowbuffers dynamically resize the two LRU lists through a target value TargetSize, which is the target size of the first-level LRU list; its initial value is set to half the DRAM cache size, and it then adapts to subsequent changes in the load.
In the described SSD-based cache management method, the adaptation process comprises:
Step 61: after a data page is evicted from the first-level LRU list, its history information is kept in the first-level list's ShadowBuffer; likewise, data evicted from the second-level LRU list is saved in the second-level list's Shadowbuffer;
Step 62: when data hits in the first-level list's Shadowbuffer, the first-level LRU list needs to grow: TargetSize++;
Step 63: when data hits in the second-level list's Shadowbuffer, the second-level LRU list needs to grow: TargetSize--.
The described SSD-based cache management method, wherein said step 3 further comprises:
Step 71, after the screening by the two-level LRU list in the cache DRAM, the second-level LRU list stores the relatively hot data, whose hotness decreases from the MRU end to the LRU end;
Step 72, when the second-level LRU list is full, the page at the LRU end of the second-level list is evicted to a buffer zone;
Step 73, each eviction from the second-level list accumulates one more page in the buffer zone; after a while the buffer zone reaches 64 pages, the size of one Cluster, and the Cluster is then in the ready state;
Step 74, when the next evicted page enters the buffer zone, the buffer zone is full, so the Cluster is flushed to the SSD and the buffer zone is emptied; steps 71-74 are then repeated.
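Steps 71-74 amount to the following aggregation buffer. This is an illustrative sketch (the callback `ssd_write` and the class name are assumptions); the 64-page Cluster size follows step 73:

```python
class AggregationBuffer:
    """Sketch of steps 71-74: pages evicted from the second-level LRU list
    accumulate in a buffer zone; once a full Cluster has gathered and the
    next evicted page arrives, the Cluster is flushed to the SSD as one
    coarse-grained write, avoiding small random writes on the SSD."""

    def __init__(self, ssd_write, cluster_pages=64):
        self.buf = []
        self.cluster_pages = cluster_pages   # step 73: one Cluster = 64 pages
        self.ssd_write = ssd_write           # caller-supplied SSD write routine

    def add(self, page):
        if len(self.buf) == self.cluster_pages:
            # step 74: the Cluster is ready and a new page arrives, so flush
            self.ssd_write(self.buf)         # one large sequential SSD write
            self.buf = []                    # buffer zone is emptied
        self.buf.append(page)
```

The SSD thus only ever sees writes of a whole Cluster, whose size is chosen as an integral multiple of the SSD Block page count.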
As shown in Figure 7, the present invention also discloses an SSD-based cache management system, comprising:
a cache-DRAM checking module, used to receive a read/write request and check whether the data hits in the cache DRAM by looking up the hash table, judging whether said data exists; if it exists, the data is read from the cache DRAM and returned for this request; if it does not exist in the cache DRAM, the data is read from the HDD into the cache DRAM;
a data screening module, used to screen data with a two-level LRU list and a Ghost buffer and to verify the hotness of the data;
an adaptive-change and aggregation module, used to adaptively compute the lengths of the two-level LRU lists; when the second-level LRU list in the cache DRAM is full, the granularity of a page cluster is adopted: the last C pages at the LRU end of the second-level list are aggregated into one large block and evicted from the cache DRAM together, and the coarse-grained block is then written to the SSD, wherein the size of a page cluster is assumed to be C pages and C is an integral multiple of the number of pages in an SSD Block.
In the described SSD-based cache management system, said cache-DRAM checking module comprises:
a data-hit module, used when the data exists, i.e. hits in the cache DRAM, to return the data in the cache DRAM directly, completing the request;
an SSD query module, used when the data does not exist in the hash table, to continue querying the hash table of the SSD, judging whether the data is stored in the SSD;
an SSD read module, used when the data hits in the SSD, to read the data out of the SSD, completing the request.
The described SSD-based cache management system, wherein:
data read from the HDD is copied directly into the DRAM, and only data that has passed the DRAM screening is evicted to the SSD cache; the contents of the cache DRAM and the SSD are disjoint, so the cache space is the sum of the two spaces; if the request also misses in the SSD, the request is sent to the HDD to be read.
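The exclusive read path just described (DRAM and SSD hold disjoint contents, HDD only on a double miss) can be sketched as follows; the function name and the dict-based stand-ins for the hash tables and devices are assumptions for illustration:

```python
def read(page, dram, ssd, hdd):
    """Sketch of the exclusive lookup path: check the DRAM hash table
    first, then the SSD hash table; only when both miss is the request
    sent to the HDD, and the HDD data is copied directly into DRAM."""
    if page in dram:          # first-level hit: return from cache DRAM
        return dram[page], "dram"
    if page in ssd:           # second-level hit: return from SSD cache
        return ssd[page], "ssd"
    data = hdd[page]          # miss in both: read from HDD into DRAM
    dram[page] = data
    return data, "hdd"
```

Because the two caches never duplicate a page, the effective cache capacity is the sum of the DRAM and SSD spaces, as stated above.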
In the described SSD-based cache management system, said data screening module comprises:
an MRU-placement module, used when data enters the cache DRAM for the first time to place it at the MRU end of the first-level LRU list, both levels of the LRU list being in the cache DRAM;
a ratio setting module, used to set the size of the first-level LRU list as a proportion p1 of the whole cache DRAM size, 0 < p1 < 1;
an eviction module, used when the first-level list is full to evict in LRU fashion and to save the information of the evicted page into the Ghost buffer, preserving its access history, such pages being data whose access hotness is not high;
a second-hit module, used when data in the first-level LRU list is hit for the second time to promote it into the second-level list;
a hotness module, used when the second-level LRU list is full to evict the content of the second-level LRU list to the SSD, thereby obtaining data with higher access hotness.
In the described SSD-based cache management system, said adaptive-change and aggregation module comprises:
an adaptive-change module, used to add a corresponding Shadow buffer for each level of the two-level LRU list, each storing the access information of pages recently evicted from the list of the corresponding level, the two Shadow buffers together storing a number of access-information records equal to the size of the cache DRAM; the two Shadow buffers dynamically change the sizes of the two-level LRU lists; a target value TargetSize is set, being the target size of the first-level LRU list, its initial value being half of the cache DRAM size and being adjusted as the load subsequently changes.
In the described SSD-based cache management system, the adaptive-change module further comprises:
a history-saving module, used, after a data page is evicted from the first-level LRU list, to save the history information in the first-level Shadow buffer, and likewise, after data in the second-level LRU list is evicted, to save it in the second-level Shadow buffer;
a first-level growth module, used when data hits in the first-level Shadow buffer, the first-level LRU list length needing to grow: TargetSize++;
a second-level growth module, used when data hits in the second-level Shadow buffer, the second-level LRU list length needing to grow: TargetSize--.
In the described SSD-based cache management system, said adaptive-change and aggregation module further comprises:
an aggregation module, used after the screening by the two-level LRU list in the cache DRAM, the second-level LRU list storing the relatively hot data, whose hotness decreases from the MRU end to the LRU end; when the second-level LRU list is full, the page at its LRU end is evicted to a buffer zone; each eviction from the second-level list accumulates one more page in the buffer zone, and after a while the buffer zone reaches 64 pages, the size of one Cluster, so the Cluster is in the ready state; when the next evicted page enters the buffer zone, the buffer zone is full, so the Cluster is flushed to the SSD and the buffer zone is emptied.
To sum up, we have invented a new caching system: a hybrid cache composed of DRAM and SSD that aims to exploit the performance of the SSD to the fullest. The main techniques are as follows:
1. An SSD-based caching system is designed with a framework that controls which data enters the SSD. To avoid cache pollution, we observe data hotness in the DRAM so that data enters the SSD selectively. This differs from traditional cache management strategies: it prevents data accessed only once from entering the cache and polluting it, and therefore uses the SSD cache space more efficiently.
2. An algorithm for screening data is designed. We use a two-level LRU list to screen the data that enters the SSD. The first-level LRU list stores data accessed only once, while the second-level list stores data accessed at least twice, i.e. the comparatively "hot" data. This makes the data stored in the SSD more effective; at the same time, the algorithm's overhead is very small and it dynamically adapts to changes in the load.
3. In view of the performance characteristics of the SSD, a scheme is designed that aggregates the data entering the SSD and evicts at coarse granularity. On the basis of the data screening in DRAM described herein, a buffer zone is added to aggregate the data entering the SSD. Small-granularity random writes perform very poorly on an SSD, and this scheme prevents them during the process of data entering the SSD, thereby improving the usability of the SSD. Meanwhile, when the buffer is full we evict at coarse granularity, replacing a number of consecutive pages at once; this matches the way data is written and thus improves the performance of the caching system.
Those skilled in the art may make various modifications to the above without departing from the spirit and scope of the present invention as defined by the claims. The scope of the present invention is therefore not limited to the above description but is determined by the scope of the claims.