CN107391398B - Management method and system for flash memory cache region - Google Patents


Info

Publication number
CN107391398B
CN107391398B (application CN201610324044.9A)
Authority
CN
China
Prior art keywords
cold
linked list
data
data page
dirty
Prior art date
Legal status
Active
Application number
CN201610324044.9A
Other languages
Chinese (zh)
Other versions
CN107391398A (en
Inventor
王力玉
陈岚
Current Assignee
Institute of Microelectronics of CAS
Original Assignee
Institute of Microelectronics of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Microelectronics of CAS filed Critical Institute of Microelectronics of CAS
Priority claimed from CN201610324044.9A
Publication of CN107391398A
Application granted
Publication of CN107391398B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0866: Addressing of such a memory level for peripheral storage systems, e.g. disk cache
    • G06F 12/0871: Allocation or management of cache space
    • G06F 12/12: Replacement control
    • G06F 12/121: Replacement control using replacement algorithms

Abstract

The invention provides a management method and system for a flash memory cache region. Three linked lists, namely a cold clean linked list, a cold dirty linked list and a hot linked list, are established in the cache region to manage cold clean data pages, cold dirty data pages and hot data respectively. When hot data is dispatched, whether a data page needs to be dispatched is judged in order from the head to the tail of the list according to its life value, a value that combines the access count, novelty and read-write cost. The frequency with which a data page is accessed, the probability of it being accessed again and the read-write delay of the flash memory are thus all taken into account, so the hit rate of data access and the operating performance of the cache region are improved.

Description

Management method and system for flash memory cache region
Technical Field
The present invention relates to the field of storage systems, and in particular to a method and a system for managing a flash memory cache region.
Background
With the continuous development of big data applications, the performance requirements on storage media are increasingly demanding. Flash memory, a representative novel nonvolatile storage medium, has the advantages of high read-write speed, low power consumption and shock resistance, and is widely used in consumer electronics and enterprise-level storage systems.
LRU (Least Recently Used) is the most basic caching algorithm: the least recently used cached data page is replaced first. In this algorithm, data pages are linked into a list according to their novelty; the head of the list holds the earliest-accessed pages and the tail holds the most recently accessed ones, and when cached data pages are evicted, eviction starts from the head.
However, the LRU algorithm struggles to identify the data pages that are genuinely least used, and flash memory has asymmetric read and write characteristics: an erase operation is required before a write, and a write takes much longer than a read. In cache management it is therefore desirable to reduce writes to the flash memory as much as possible in order to improve the overall performance of flash memory operation.
Based on the characteristics of flash memory, a series of improvements have been made to the LRU algorithm. In one existing improved LRU algorithm, a cold clean linked list, a cold dirty linked list and a hot linked list are established in the cache region: data pages that have undergone only one read operation are stored in the cold clean linked list and are called cold clean pages; data pages that have undergone only one write operation are stored in the cold dirty linked list and are called cold dirty pages; and data pages with more than one read or write operation are stored in the hot linked list, which contains both hot clean pages and hot dirty pages. When pages are dispatched from the hot linked list to the cold linked lists, the search starts from the head of the hot linked list and clean pages are dispatched preferentially, which reduces write operations to the flash memory; however, the frequency and novelty of data access are not really considered, which hurts the hit rate and the operating performance of the cache region. In addition, when data pages are replaced, replacement starts from the cold clean pages and then selects between the cold dirty pages and the hot clean pages by probability; because this selection probability is fixed, the data pages of one linked list may be evicted excessively, or pages that have just entered the cache may be evicted too early, degrading the efficiency and performance of cache operation.
Disclosure of Invention
The present invention is directed to solving at least one of the above problems, and provides a method and a system for managing a flash memory cache region that genuinely take into account the access frequency, novelty and read-write cost of data, improving the hit rate and the operating performance of the cache region.
In order to achieve this purpose, the invention adopts the following technical scheme:
a management method of a flash memory cache region comprises the following steps:
according to the access characteristics of the flash memory, respectively establishing a cold clean linked list, a cold dirty linked list and a hot linked list in the cache region, wherein data pages that have undergone only one read operation are stored in the cold clean linked list, data pages that have undergone only one write operation are stored in the cold dirty linked list, and data pages migrated after a data page in the cold clean linked list or the cold dirty linked list is accessed again are stored in the hot linked list; the data pages of the linked lists carry read-write marks and access counts;
when a data page needs to be dispatched from the hot linked list to the cold clean linked list or the cold dirty linked list, carrying out dispatch processing, which specifically comprises: judging in order from the head to the tail of the list whether the life value of each data page in the hot linked list is smaller than a predetermined value or its access count is 1; if so, dispatching the data page to the cold clean linked list or the cold dirty linked list according to its read-write mark, and if not, subtracting 1 from its access count; the weight factors of the life value comprise the access count, the novelty and the read-write cost, where the novelty is the probability of the data page being accessed again at the current moment, the closer the most recent access of the data page is to the current moment the higher that probability, and the cost value of a read operation in the read-write cost is smaller than that of a write operation;
when the usage of the cache region reaches a replacement threshold, replacing data pages out of the cache region.
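The dispatch step above can be sketched as follows. This is an illustrative Python sketch: the function name, the dict-based page representation and the life-value callback are assumptions; only the head-to-tail scan order and the demotion rules come from the method.

```python
def dispatch(hot_list, cold_clean, cold_dirty, life_value, threshold):
    """Scan the hot list from head to tail; demote pages whose life value is
    below `threshold` or whose access count has dropped to 1; otherwise cool
    the page by decrementing its access count."""
    survivors = []
    for page in hot_list:                            # head -> tail order
        if life_value(page) < threshold or page["count"] == 1:
            # demote to the matching cold list according to the R/W mark
            (cold_dirty if page["dirty"] else cold_clean).append(page)
        else:
            page["count"] -= 1                       # cool instead of demote
            survivors.append(page)
    hot_list[:] = survivors
```

Pages whose life value stays high are merely cooled by one access count per pass, so a page that is never re-accessed eventually reaches a count of 1 and is demoted.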
Optionally, the method further includes dynamically adjusting the size of the cold data area, specifically: when data pages are replaced out of the cold clean linked list and the cold dirty linked list, separately recording the number of times data pages are replaced out of each; when the replacement ratio is larger than the write-read cost ratio of the flash memory, expanding the cold clean linked list and shrinking the cold dirty linked list; and when the replacement ratio is smaller than the write-read cost ratio, shrinking the cold clean linked list and expanding the cold dirty linked list, wherein the replacement ratio is the ratio of the number of data pages replaced out of the cold clean linked list to the number replaced out of the cold dirty linked list, and the write-read cost ratio is the ratio of the sum of the read delay and the write delay of the flash memory to the read delay.
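A minimal sketch of this dynamic adjustment. The eviction counters passed as arguments and the fixed step size are assumptions; only the comparison of the replacement ratio against the write-read cost ratio (Cr + Cw)/Cr comes from the text.

```python
def adjust_cold_areas(n_clean_evicted, n_dirty_evicted,
                      clean_limit, dirty_limit,
                      read_delay, write_delay, step=1):
    """Re-balance cold-clean vs. cold-dirty capacity by comparing the
    replacement ratio with the flash write-read cost ratio (Cr + Cw) / Cr."""
    if n_dirty_evicted == 0:
        return clean_limit, dirty_limit          # nothing to compare yet
    replacement_ratio = n_clean_evicted / n_dirty_evicted
    cost_ratio = (read_delay + write_delay) / read_delay
    if replacement_ratio > cost_ratio:           # clean pages evicted too often
        clean_limit += step
        dirty_limit -= step
    elif replacement_ratio < cost_ratio:         # dirty pages evicted too often
        clean_limit -= step
        dirty_limit += step
    return clean_limit, dirty_limit
```

With a read delay of 25 us and a write delay of 200 us (typical orders of magnitude, assumed here), the cost ratio is 9, so clean-page evictions have to outnumber dirty-page evictions nine to one before the clean area is expanded.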
Optionally, when the usage of the cache region reaches the replacement threshold, the step of replacing data pages out of the cache region includes:
when the usage of the cache region reaches the replacement threshold, judging whether the size of the cold clean linked list is larger than a first threshold, and if so, evicting data pages from the head of the cold clean linked list;
if not, judging whether the size of the cold dirty linked list is larger than the first threshold, and if so, evicting data pages from the head of the cold dirty linked list;
if not, judging whether the size of the cold clean linked list is larger than a second threshold, and if so, evicting data pages from the head of the cold clean linked list;
if not, judging whether the size of the cold dirty linked list is larger than the second threshold, and if so, evicting data pages from the head of the cold dirty linked list, wherein the first threshold is larger than the second threshold;
if not, evicting data pages from the hot linked list.
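The five-step cascade above amounts to a simple selection function (illustrative names; the parameters t1 and t2 correspond to the first and second thresholds, with t1 > t2):

```python
def choose_eviction_source(clean_size, dirty_size, t1, t2):
    """Pick the list to evict from, per the two-threshold cascade (t1 > t2):
    try each cold list against the large threshold first, then against the
    small one, and fall back to the hot list only when both cold lists are
    small."""
    if clean_size > t1:
        return "cold_clean"
    if dirty_size > t1:
        return "cold_dirty"
    if clean_size > t2:
        return "cold_clean"
    if dirty_size > t2:
        return "cold_dirty"
    return "hot"
```

Using two thresholds instead of one keeps either cold list from being drained completely before the other, which is the over-eviction problem the background section attributes to fixed-probability replacement.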
Optionally, the step of evicting data pages from the hot linked list comprises:
judging in order from the head to the tail of the hot linked list whether the life value of each data page is smaller than the predetermined value or its access count is 1, and if so, evicting the data page; if not, subtracting 1 from its access count.
Optionally, the management method is implemented in a multi-threaded manner, with a main thread and a sub-thread: the sub-thread carries out the dispatch processing, replaces data pages out of the cache region and dynamically adjusts the size of the cold data area, while the main thread, after receiving an access request, judges whether the sub-thread needs to be called and activates it.
Optionally, the novelty R is calculated as follows:

R = (Tr - Tf) / (Tc - Tf)

where Tr is the time the data page was most recently accessed, Tf is the time it was first accessed, and Tc is the current time.
Optionally, the cost value of a write operation is Cw/Cr and the cost value of a read operation is 1, where Cw is the write delay of the flash memory and Cr is its read delay.
In addition, the invention also provides a management system of a flash memory cache region, comprising:
the data pages of the cold dry clean linked list, the cold dirty linked list and the hot linked list are respectively established in the cache region according to the access characteristics of the flash memory, the data pages are provided with read-write marks and access times, the data pages which are only subjected to once read operation are stored in the cold dry clean linked list, the data pages which are only subjected to once write operation are stored in the cold dirty linked list, the data pages which are migrated after the data pages in the cold dry clean linked list or the cold dirty linked list are accessed again are stored in the hot linked list, and the data pages of the linked lists are provided with the read-write marks and the access times;
a dispatch processing unit, configured to dispatch data pages from the hot linked list to the cold clean linked list or the cold dirty linked list, specifically: judging in order from the head to the tail of the list whether the life value of each data page in the hot linked list is smaller than a predetermined value or its access count is 1; if so, dispatching the data page to the cold clean linked list or the cold dirty linked list according to its read-write mark, and if not, subtracting 1 from its access count; the weight factors of the life value comprise the access count, the novelty and the read-write cost, where the novelty is the probability of the data page being accessed again at the current moment, the closer the most recent access of the data page is to the current moment the higher that probability, and the cost value of a read operation in the read-write cost is smaller than that of a write operation;
a replacement unit, configured to replace data pages out of the cache region when the usage of the cache region reaches a replacement threshold.
Optionally, the system further includes a dynamic adjustment unit, configured to dynamically adjust the size of the cold data area, specifically: when data pages are replaced out of the cold clean linked list and the cold dirty linked list, separately recording the number of times data pages are replaced out of each; when the replacement ratio is larger than the write-read cost ratio of the flash memory, expanding the cold clean linked list and shrinking the cold dirty linked list; and when the replacement ratio is smaller than the write-read cost ratio, shrinking the cold clean linked list and expanding the cold dirty linked list, wherein the replacement ratio is the ratio of the number of data pages replaced out of the cold clean linked list to the number replaced out of the cold dirty linked list, and the write-read cost ratio is the ratio of the sum of the read delay and the write delay of the flash memory to the read delay.
Optionally, the replacement unit includes:
a judging unit, configured to judge whether the usage of the cache region reaches the replacement threshold;
a first replacement unit, configured to evict data pages from the head of the cold clean linked list when the usage of the cache region reaches the replacement threshold and the size of the cold clean linked list is larger than a first threshold;
a second replacement unit, configured to evict data pages from the head of the cold dirty linked list when the usage of the cache region reaches the replacement threshold, the size of the cold clean linked list is smaller than the first threshold and the size of the cold dirty linked list is larger than the first threshold;
a third replacement unit, configured to evict data pages from the head of the cold clean linked list when the usage of the cache region reaches the replacement threshold, the sizes of the cold clean linked list and the cold dirty linked list are both smaller than the first threshold, and the size of the cold clean linked list is larger than a second threshold;
a fourth replacement unit, configured to evict data pages from the head of the cold dirty linked list when the usage of the cache region reaches the replacement threshold, the sizes of the cold clean linked list and the cold dirty linked list are both smaller than the first threshold, the size of the cold clean linked list is smaller than the second threshold and the size of the cold dirty linked list is larger than the second threshold, wherein the first threshold is larger than the second threshold;
and a fifth replacement unit, configured to evict data pages from the hot linked list when the usage of the cache region reaches the replacement threshold and the sizes of the cold clean linked list and the cold dirty linked list are both smaller than the second threshold.
Optionally, the fifth replacement unit judges in order from the head to the tail of the hot linked list whether the life value of each data page is smaller than the predetermined value or its access count is 1, and if so, evicts the data page; if not, it subtracts 1 from the access count of the data page.
Optionally, the system further includes a main thread and a sub-thread, wherein the sub-thread is used for the dispatch processing, replacing data pages out of the cache region and dynamically adjusting the size of the cold data area, and the main thread, after receiving an access request, judges whether the sub-thread needs to be called and activates it.
Optionally, the novelty R is calculated as follows:

R = (Tr - Tf) / (Tc - Tf)

where Tr is the time the data page was most recently accessed, Tf is the time it was first accessed, and Tc is the current time.
Optionally, the cost value of a write operation is Cw/Cr and the cost value of a read operation is 1, where Cw is the write delay of the flash memory and Cr is its read delay.
According to the management method and system for a flash memory cache region provided by the embodiments of the invention, three linked lists, namely a cold clean linked list, a cold dirty linked list and a hot linked list, are established in the cache region to manage cold clean data pages, cold dirty data pages and hot data respectively. When hot data is dispatched, whether a data page needs to be dispatched is judged in order from the head to the tail of the list according to its life value, a value that combines the access count, novelty and read-write cost. The frequency with which a data page is accessed, the probability of it being accessed again and the read-write delay cost are thus all taken into account, so the hit rate of data access and the operating performance of the cache region are improved.
Furthermore, when data pages are replaced, they are replaced from the cold area first; whether to evict from the cold clean linked list or the cold dirty linked list is judged against two thresholds of different sizes, and evicting hot data from the hot linked list is considered last. This prevents the pages of any single linked list from being evicted excessively, avoids evicting pages that have just been read into the cache region too early or retaining cold dirty pages in the cache too long, and improves the operating performance of the cache.
Furthermore, when data pages are replaced, the number of times pages are replaced out of the cold clean linked list and the cold dirty linked list is recorded separately, and the relative sizes of the two lists are dynamically adjusted by comparing the replacement ratio with the write-read cost ratio of the flash memory, which prevents clean or dirty data pages from being evicted excessively and keeps data that has just entered the cache from being replaced.
Furthermore, the dispatch processing, the replacement of data pages out of the cache region and the dynamic adjustment of the size of the cold area are implemented with multiple threads, which improves the parallelism of cache management, reduces running time and improves operating efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a diagram illustrating a flow direction of linked list data pages in a method for managing a flash cache according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a management system of a flash cache region according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention; however, the present invention may be practiced in other ways than those specifically described herein without departing from its spirit, and the invention is therefore not limited to the specific embodiments disclosed below.
The invention provides a management method of a flash memory cache region, which is an improvement on an LRU algorithm facing a flash memory and comprises the following steps:
according to the operating characteristics of the flash memory, a cold clean linked list, a cold dirty linked list and a hot linked list are respectively established in the cache region; data pages that have undergone only one read operation are stored in the cold clean linked list, data pages that have undergone only one write operation are stored in the cold dirty linked list, and data pages migrated after a data page in the cold clean linked list or the cold dirty linked list is accessed again are stored in the hot linked list, the data pages in the linked lists being provided with read-write marks and access counts;
when a data page needs to be dispatched from the hot linked list to the cold clean linked list or the cold dirty linked list, dispatch processing is carried out, specifically: judging in order from the head to the tail of the list whether the life value of each data page in the hot linked list is smaller than a predetermined value or its access count is 1; if so, dispatching the data page to the tail of the cold clean linked list or the cold dirty linked list according to its read-write mark, and if not, subtracting 1 from its access count; the weight factors of the life value comprise the access count, the novelty and the read-write cost, where the novelty is the probability of the data page being accessed again at the current moment, the closer the most recent access is to the current moment the higher that probability, and the cost value of a read operation is smaller than that of a write operation;
when the usage of the buffer reaches a replacement threshold, the data page is replaced from the buffer.
The cold dry clean linked list, the cold dirty linked list and the hot linked list are dynamic linked lists, data pages accessed earlier are stored at the head end of the linked lists, and data pages accessed recently are stored at the tail end of the linked lists.
In the invention, three linked lists, namely a cold clean linked list, a cold dirty linked list and a hot linked list, are established in the cache region to manage cold clean data pages, cold dirty data pages and hot data respectively. When hot data is dispatched, whether a data page needs to be dispatched is judged, from the head of the list to its tail, according to its life value, a value that combines the access count, novelty and read-write cost; the frequency with which a data page is accessed, the probability of it being accessed again and the read-write delay cost are thus fully considered, improving the hit rate of data access and the operating performance of the cache region.
In order to better understand the technical solution and the technical effect of the present invention, a detailed description will be given below of a specific embodiment with reference to fig. 1.
Referring to fig. 1, in the cache region of a flash memory, a cold clean linked list, a cold dirty linked list and a hot linked list are respectively established according to the operation requests made to the flash memory: data pages that have undergone only one read operation are stored in the cold clean linked list, data pages that have undergone only one write operation are stored in the cold dirty linked list, and data pages migrated after a data page in the cold clean linked list or the cold dirty linked list is accessed again are stored in the hot linked list, the data pages in the linked lists being provided with read-write marks and access counts.
When the upper layer makes an access request to the flash memory, the request is generally a read or a write, and the cold clean linked list, the cold dirty linked list and the hot linked list are established in the cache region according to these different access characteristics; the cold clean linked list and the cold dirty linked list together form the cold data area, so the cache region is logically divided into three linked lists, each node of which corresponds to a specific data page. If the access request is a read and only one read operation has been performed on a data page, the page is linked into the cold clean linked list; if the request is a write and only one write operation has been performed on a data page, the page is linked into the cold dirty linked list. If a data page in the cold clean linked list or the cold dirty linked list is accessed again, whether by a read or a write, its access count is incremented and the page is linked into the hot linked list; the hot linked list therefore holds both hot dirty and hot clean data, and the data pages in the linked lists carry at least read-write marks and access counts.
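The page-admission rules of this paragraph can be sketched as follows. This is illustrative Python: the linear `in`/`remove` list operations stand in for the linked-list pointer updates of a real implementation, and all names are assumptions.

```python
def on_access(page_id, is_write, cold_clean, cold_dirty, hot, pages):
    """Route a page on access: first read -> cold clean list, first write ->
    cold dirty list, and any re-access of a cold page -> tail of the hot list."""
    if page_id not in pages:                     # first access of this page
        page = {"id": page_id, "dirty": is_write, "count": 1}
        pages[page_id] = page                    # hash-table lookup structure
        (cold_dirty if is_write else cold_clean).append(page)
        return
    page = pages[page_id]
    page["count"] += 1
    page["dirty"] = page["dirty"] or is_write    # a write marks the page dirty
    if page in cold_clean:
        cold_clean.remove(page)
        hot.append(page)                         # promote to hot-list tail
    elif page in cold_dirty:
        cold_dirty.remove(page)
        hot.append(page)
    # pages already in the hot list simply stay there
```

A page thus needs at least two accesses before it can reach the hot list, which matches the cold/hot distinction drawn above.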
In a preferred embodiment of the present invention, to facilitate indexing, a hash table is used to manage the data pages in the cache region; it supports insertion, lookup and deletion of all data pages. Two data structures are defined: a global data structure that records parameters such as the usage of the cache region, the access time, and the head and tail pointers of each linked list; and a per-page data structure that records each data page's read-write mark, logical page number, buffer page number, time of first access, time of most recent access, access count, position information, and so on. Through the hash table, the linked list to which a requested page belongs can be found quickly; the data pages in the hot linked list are migrated from the cold clean linked list or the cold dirty linked list, and a migrated page is always placed at the tail of the hot linked list.
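The two data structures described above might look like this in Python, with an ordinary dict standing in for the hash table; all field names are illustrative paraphrases of the parameters listed, not names from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class CacheState:
    """Global record: usage, a logical clock, and the three lists (here plain
    Python lists rather than head/tail pointers)."""
    usage: int = 0
    clock: int = 0
    lists: dict = field(default_factory=lambda: {
        "cold_clean": [], "cold_dirty": [], "hot": []})

@dataclass
class PageEntry:
    """Per-page record: R/W mark, page numbers, access timestamps, access
    count and current list position."""
    logical_page: int
    buffer_page: int
    dirty: bool = False
    first_access: int = 0
    last_access: int = 0
    count: int = 1
    location: str = "cold_clean"

# the hash table: logical page number -> PageEntry, giving O(1) lookup
page_table: dict[int, PageEntry] = {}
```

Keeping the list membership (`location`) inside each entry lets the lookup step answer "which of the three linked lists does this request belong to" without scanning the lists.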
When the usage of the hot linked list is excessive or the usage of the cold linked lists is insufficient, dispatch processing is required, that is, hot data in the hot linked list is migrated into the cold data area. In general, when the size of the hot linked list exceeds a predetermined maximum threshold, it must be shrunk, so dispatch processing of the hot linked list is carried out; in addition, when the sizes of the cold clean linked list and the cold dirty linked list both fall below a predetermined minimum threshold, the data pages of the cold area need to be replenished, and dispatched data pages must therefore be obtained from the hot linked list.
During dispatch, it is desirable to demote to the cold data area those data pages that are genuinely inactive in the hot linked list. Based on this, in the embodiment of the invention, the dispatch processing comprises: judging in order from the head to the tail of the list whether the life value of each data page in the hot linked list is smaller than a predetermined value or its access count is 1; if so, the data page is dispatched to the tail of the cold clean linked list or the cold dirty linked list according to its read-write mark, where the weight factors of the life value comprise the access count, the novelty and the read-write cost, the novelty is the probability of the data page being accessed again at the current moment, the closer the most recent access is to the current moment the higher that probability, and the cost value of a read operation is smaller than that of a write operation; if not, the access count of the data page is reduced by 1.
Whether the hot linked list needs dispatch processing can be determined by judging whether its size exceeds the predetermined maximum threshold, or whether the sizes of the cold clean linked list and the cold dirty linked list have fallen below the predetermined minimum threshold; if either condition is met, dispatch is carried out. In the specific dispatch process, screening starts from the head of the hot linked list, since the pages accessed earliest are stored there, and whether the data page of each node is demoted to the cold data area is determined by judging whether its life value is smaller than the predetermined value; a page smaller than the predetermined value is dispatched, according to its read-write mark, to the tail of the corresponding cold clean linked list or cold dirty linked list.
When the life value is larger than the predetermined value, the data page is still likely to be accessed again, so instead of being dispatched directly its access count is decremented by 1, reducing the heat of the data. If a page keeps a high life value but is not accessed again, its access count eventually drops to 1 and the page becomes eligible for dispatch; accordingly, during screening, a page whose access count is 1 is dispatched to the cold clean linked list or the cold dirty linked list according to its read-write flag.
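Under stated assumptions (pages modelled as dicts with hypothetical `life`, `freq` and `dirty` fields, and each linked list as a Python list whose index 0 is the head), the dispatch scan described above can be sketched as:

```python
def dispatch(hot, cold_clean, cold_dirty, threshold=1):
    """Scan the hot list head-to-tail; demote stale pages, cool the rest.

    This is an illustrative sketch, not the patent's implementation.
    """
    remaining = []
    for page in hot:                       # head-to-tail order
        if page['life'] < threshold or page['freq'] == 1:
            # demote to the tail of the matching cold list by dirty flag
            (cold_dirty if page['dirty'] else cold_clean).append(page)
        else:
            page['freq'] -= 1              # reduce heat instead of demoting
            remaining.append(page)
    hot[:] = remaining                     # surviving pages stay hot, in order
```

A page is thus only demoted once its life value falls below the threshold or its access count has been worn down to 1 by repeated scans without re-access.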
The life value combines three weight factors: the access count, the novelty, and the read-write cost of the flash memory. Each is described in detail below.
The access count F is the number of times the data page has been accessed up to the current moment. It reflects how frequently the page is accessed, and a page with a high access count has a higher probability of being accessed again.
The novelty is the probability that the data page is accessed again at the current moment. It can be expressed by how long ago the page was last accessed: the closer the last access is to the current moment, the higher the probability of the page being accessed again. In a specific embodiment, the novelty R is defined as:
R = (Tr - Tf) / (Tc - Tf)
where Tr is the time the data page was last accessed, Tf is the time the data page was first accessed, and Tc is the current time.
The read-write cost assigns different cost values to read operations and write operations, such that the cost value of a read operation is smaller than that of a write operation. Because the characteristics of flash memory make the latency of a write operation larger than that of a read operation, write operations are given the larger value; data brought in by read operations is then preferentially eliminated, reducing the overall run time of the system. In a specific embodiment, the read-write cost θ is defined as:
θ = (Cr + Cw) / Cr, when the operation is a write (wr); θ = 1, when the operation is a read (rd)
where wr denotes a write operation, rd a read operation, Cw the write latency of the flash memory, and Cr its read latency.
The life value is then determined from the three factors above; in one embodiment, the life value L is defined as:
L=F*R*θ
In this embodiment the three factors, namely the access count F, the novelty R, and the read-write cost θ, are all given the same weight of 1, and their product is used as the life value. During dispatch, when pages are screened, the predetermined value may be set to 1; other predetermined values may of course be set according to the specific dispatch requirements. It is understood that in other embodiments the three factors may be given different weights as needed, for example a weight of 0.8 for the access count F, 1.2 for the novelty R, and 1 for the read-write cost θ, yielding different expressions for the life value L.
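As an illustration only, the life value L = F * R * θ can be sketched as follows. The concrete novelty formula (Tr - Tf)/(Tc - Tf) is an assumption reconstructed from the variables Tr, Tf and Tc named in the text, and the cost values follow the read/write definitions above.

```python
def novelty(tr: float, tf: float, tc: float) -> float:
    """Novelty R: the closer the last access Tr is to now Tc, the higher R.

    The exact formula is an assumption reconstructed from the text.
    """
    if tc == tf:
        return 1.0                      # page first touched at this instant
    return (tr - tf) / (tc - tf)

def rw_cost(is_write: bool, cr: float, cw: float) -> float:
    """Read-write cost theta: writes score (Cr + Cw)/Cr, reads score 1,
    so read-only pages are cheaper to evict."""
    return (cr + cw) / cr if is_write else 1.0

def life_value(freq: int, tr: float, tf: float, tc: float,
               is_write: bool, cr: float, cw: float) -> float:
    """L = F * R * theta, all three factors weighted equally (weight 1)."""
    return freq * novelty(tr, tf, tc) * rw_cost(is_write, cr, cw)
```

With example latencies Cr = 25 µs and Cw = 200 µs (illustrative numbers, not from the text), a dirty page accessed twice scores nine times higher than an otherwise identical clean page, which is exactly the bias toward keeping write data that the read-write cost factor is meant to produce.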
The dispatch processing in the cache management has now been described in detail. The dispatch processing of the invention fully considers the frequency with which a data page is accessed, the probability of the page being accessed again, and the read-write latency cost of the flash memory, ensuring that the pages in the hot linked list are genuinely relatively hot data and the pages in the cold data area genuinely relatively cold data. By weighing the read-write cost alongside the data heat, the hit rate of data accesses is improved and the operating performance of the cache is improved.
When the usage of the buffer reaches a replacement threshold and new data needs to be cached but no free space exists, an existing data page must be replaced, that is, the original data of a buffer block is written back to the flash memory so that the new data page can be stored.
Any suitable method may be adopted in the present invention to replace data pages from the buffer. In a preferred embodiment, cold clean pages are replaced preferentially, and cold dirty pages and hot pages are replaced selectively. To avoid evicting too many pages from a single linked list, or evicting pages that have only just entered, a preferred embodiment of the invention adopts a dual-threshold method to replace the data pages of the cold data area, specifically comprising the following steps:
when the usage of the buffer reaches the replacement threshold, judging whether the size of the cold clean linked list is larger than a first threshold, and if so, eliminating a data page from the head of the cold clean linked list;
if not, judging whether the size of the cold dirty linked list is larger than the first threshold, and if so, eliminating a data page from the head of the cold dirty linked list;
if not, judging whether the size of the cold clean linked list is larger than a second threshold, and if so, eliminating a data page from the head of the cold clean linked list;
if not, judging whether the size of the cold dirty linked list is larger than the second threshold, and if so, eliminating a data page from the head of the cold dirty linked list, wherein the first threshold is larger than the second threshold;
if not, eliminating a data page from the hot linked list.
The above steps are performed once the usage of the buffer reaches the replacement threshold, that is, once the usage is greater than or equal to the threshold, and replacement stops after the usage falls back below it. When replacing data pages, elimination starts with the cold clean pages, then the cold dirty pages, and only finally considers the hot pages. For the specific replacement, two thresholds are set for the data pages of the cold data area, a first threshold and a second threshold, with the first threshold larger than the second.
Thus, when the size of the cold clean linked list exceeds the first threshold, that is, when its capacity is at a large value, data pages are preferentially eliminated from the cold clean linked list. At this point the cold clean linked list is large and the data it stores is cold data that has been held for a long time; eliminating from here avoids evicting pages that have only just entered, and the pages need not be written back to flash, which improves replacement efficiency. If the size of the cold clean linked list does not exceed the first threshold, it is judged whether the size of the cold dirty linked list exceeds the first threshold; if it does, data pages are eliminated from the cold dirty linked list. The cold dirty linked list is then large and stores long-held cold data, so eliminating from it avoids evicting recently accessed pages and prevents cold dirty pages from residing in the cache indefinitely.
If the size of the cold dirty linked list is also below the first threshold, neither the cold clean data nor the cold dirty data occupies a particularly large share of the cache; even so, compared with hot data, replacing cold data remains the better choice for preserving the hit rate. It is therefore further judged whether the size of the cold clean linked list exceeds the second threshold, and if so, data pages are still eliminated from the cold clean linked list; otherwise it is judged whether the size of the cold dirty linked list exceeds the second threshold, and if so, data pages are eliminated from the cold dirty linked list.
Only when the sizes of both the cold clean linked list and the cold dirty linked list are smaller than the second threshold is elimination from the hot linked list considered. In that case the cold lists hold little data, so the dispatch processing of the hot linked list can be skipped and data pages eliminated from the hot linked list directly. Elimination of data pages proceeds from the head of a linked list and has two modes: if the page is a clean page it is simply deleted, and if it is a dirty page it must first be written back to the flash memory.
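The dual-threshold victim selection above can be sketched as a simple decision function. As a sketch only: list lengths stand in for linked-list sizes, and the two thresholds are parameters with `first_th > second_th`.

```python
def pick_victim_list(cold_clean, cold_dirty, first_th, second_th):
    """Return which list to evict from: 'clean', 'dirty' or 'hot'.

    Illustrative sketch of the dual-threshold policy, not the patent's code.
    """
    assert first_th > second_th        # the text requires this ordering
    if len(cold_clean) > first_th:     # large cold clean list: cheapest victim
        return 'clean'
    if len(cold_dirty) > first_th:     # large cold dirty list: evict, write back
        return 'dirty'
    if len(cold_clean) > second_th:    # moderate cold clean list
        return 'clean'
    if len(cold_dirty) > second_th:    # moderate cold dirty list
        return 'dirty'
    return 'hot'                       # both cold lists small: fall back to hot
```

The two-tier check is what keeps freshly admitted cold pages from being evicted the moment they arrive: a list must be visibly over-full (first threshold) before it is raided aggressively, and only under-filled lists defer to the hot list.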
In a preferred embodiment, the step of eliminating data pages from the hot linked list comprises:
sequentially judging, in order from the head to the tail of the hot linked list, whether the life value of each data page is smaller than the predetermined value or its access count is 1; if so, the data page is eliminated, and if not, the access count of the page is decremented by 1. That is, pages are eliminated directly according to the life value without performing the dispatch processing; the life value is calculated exactly as described for the dispatch processing above and is not repeated here.
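A sketch of this direct elimination from the hot linked list, under the same page representation assumed earlier (dicts with hypothetical `life`, `freq` and `dirty` fields); `write_back` is a hypothetical callback standing in for flushing a dirty page to flash before it is dropped:

```python
def evict_from_hot(hot, write_back, threshold=1):
    """Evict stale pages straight from the hot list, skipping dispatch.

    Illustrative sketch: clean victims are dropped, dirty victims are
    first handed to write_back (stand-in for a flash write-back).
    """
    remaining = []
    for page in hot:                          # head-to-tail scan
        if page['life'] < threshold or page['freq'] == 1:
            if page['dirty']:
                write_back(page)              # dirty pages go back to flash
            # clean pages are simply deleted
        else:
            page['freq'] -= 1                 # same cooling rule as dispatch
            remaining.append(page)
    hot[:] = remaining
```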
In addition, when the number of user read/write requests is relatively large, cache management further performs a step of dynamically adjusting the size of the cold data area, specifically as follows. Each time a data page is replaced from the cold clean linked list or the cold dirty linked list, the respective replacement counts are recorded, denoted evict_clean_cnt and evict_dirty_cnt. When the replacement ratio is larger than the write-read cost ratio of the flash memory, the cold clean linked list is expanded and the cold dirty linked list is reduced; when the replacement ratio is smaller than the write-read cost ratio, the cold clean linked list is reduced and the cold dirty linked list is expanded. Here the replacement ratio is the ratio of the number of pages replaced from the cold clean linked list to the number replaced from the cold dirty linked list, i.e. evict_clean_cnt/evict_dirty_cnt, and the write-read cost ratio is the ratio of the sum of the read and write latencies of the flash memory to its read latency, i.e. (Cr + Cw)/Cr, where Cw is the write latency and Cr the read latency of the flash memory.
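A sketch of this dynamic adjustment rule; the resize step size (`step`) is a hypothetical granularity not specified in the text:

```python
def adjust_cold_sizes(evict_clean_cnt, evict_dirty_cnt,
                      clean_limit, dirty_limit, cr, cw, step=1):
    """Rebalance the cold clean/dirty area limits, as a sketch only.

    Compares the replacement ratio against the write-read cost ratio
    (Cr + Cw)/Cr and shifts capacity toward the list that churns too fast.
    """
    if evict_dirty_cnt == 0:
        return clean_limit, dirty_limit           # no dirty evictions yet
    replace_ratio = evict_clean_cnt / evict_dirty_cnt
    cost_ratio = (cr + cw) / cr                   # write-read cost ratio
    if replace_ratio > cost_ratio:                # clean pages churn too fast
        return clean_limit + step, dirty_limit - step
    if replace_ratio < cost_ratio:                # dirty pages churn too fast
        return clean_limit - step, dirty_limit + step
    return clean_limit, dirty_limit
```

The comparison point (Cr + Cw)/Cr makes the balance latency-aware: the higher the write latency relative to the read latency, the more clean-page churn is tolerated before the clean area is grown at the dirty area's expense.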
Furthermore, in the prior art, data page replacement is performed only after the usage of the buffer reaches the replacement threshold, so a new data page cannot be handled in time, which may block user accesses. In a preferred embodiment the present invention therefore adopts a multithreaded implementation comprising a main thread and a sub-thread. The sub-thread performs the dispatch processing, replaces data pages from the buffer, and dynamically adjusts the size of the cold data area; after receiving an access request, the main thread determines whether the sub-thread needs to be called and activates it. The determination may include one or more of: judging whether the usage of the buffer has reached the replacement threshold, judging whether data pages need to be dispatched from the hot linked list to the cold clean or cold dirty linked list, and judging the relation between the replacement ratio and the write-read cost ratio. Once the main thread decides the sub-thread is needed, the sub-thread executes the corresponding tasks until the main thread suspends it. In execution, the data pages of the cache and some management data are shared between the main thread and the sub-thread; to guarantee the consistency of this shared data, a mutual-exclusion lock may be used to avoid the inconsistencies that multithreaded operation could otherwise cause.
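One possible shape of the main-thread/sub-thread split with a mutual-exclusion lock over shared state, as a sketch only: the trigger conditions and the maintenance work itself are simplified placeholders, not the patent's actual logic.

```python
import threading

class CacheMaintainer:
    """Main thread calls on_request(); a background sub-thread does the
    maintenance (dispatch / replacement / resizing) under a mutex."""

    def __init__(self):
        self.lock = threading.Lock()      # protects shared cache metadata
        self.wake = threading.Event()     # main thread -> sub-thread signal
        self.stop = False
        self.work_done = 0                # stand-in counter for completed work
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def _run(self):
        # Sub-thread: sleeps until activated, then runs maintenance tasks.
        while True:
            self.wake.wait()
            self.wake.clear()
            if self.stop:
                return
            with self.lock:               # mutex keeps shared data consistent
                self.work_done += 1       # placeholder for dispatch/replace/adjust

    def on_request(self, needs_maintenance: bool):
        # Main thread: decide whether to activate the sub-thread.
        if needs_maintenance:
            self.wake.set()

    def shutdown(self):
        self.stop = True
        self.wake.set()
        self.worker.join()
```

Moving the maintenance off the request path is what removes the blocking the text describes: the main thread only flips an event and returns, while eviction and write-back latency are absorbed by the sub-thread.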
In addition, the present invention also provides a management system for a flash memory buffer that implements the above method; as shown in Fig. 2, the system comprises:
a cold clean linked list 101, a cold dirty linked list 102 and a hot linked list 103, respectively established in the cache according to the operation requests to the flash memory, wherein the cold clean linked list 101 stores data pages that have undergone only a single read operation, the cold dirty linked list 102 stores data pages that have undergone only a single write operation, and the hot linked list 103 stores data pages migrated after a page in the cold clean or cold dirty linked list is accessed again, the data pages of each linked list carrying a read-write flag and an access count;
a dispatch processing unit 110, configured to dispatch data pages from the hot linked list to the cold clean linked list or the cold dirty linked list, the dispatch processing specifically comprising: sequentially judging, in order from the head to the tail of the list, whether the life value of each data page in the hot linked list is smaller than a predetermined value or its access count is 1; if so, dispatching the page to the cold clean linked list or the cold dirty linked list according to its read-write flag, and if not, decrementing the access count of the page by 1; wherein the weight factors of the life value comprise the access count, the novelty and the read-write cost, the novelty being the probability that the data page is accessed again at the current moment, with this probability higher the closer the page's latest access is to the current moment, and the cost value of a read operation in the read-write cost being smaller than that of a write operation;
a replacement unit 120, configured to replace data pages from the buffer when the usage of the buffer reaches a replacement threshold.
Further, the system comprises a dynamic adjustment unit configured to dynamically adjust the size of the cold data area, specifically: each time a data page is replaced from the cold clean linked list or the cold dirty linked list, the respective replacement counts are recorded; when the replacement ratio is larger than the write-read cost ratio of the flash memory, the cold clean linked list is expanded and the cold dirty linked list is reduced; and when the replacement ratio is smaller than the write-read cost ratio, the cold clean linked list is reduced and the cold dirty linked list is expanded, wherein the replacement ratio is the ratio of the number of pages replaced from the cold clean linked list to the number replaced from the cold dirty linked list, and the write-read cost ratio is the ratio of the sum of the read and write latencies of the flash memory to the read latency.
Further, the replacement unit 120 includes:
a judging unit for judging whether the usage amount of the buffer reaches a replacement threshold;
a first replacement unit, used for eliminating data pages from the head of the cold clean linked list when the usage of the buffer reaches the replacement threshold and the size of the cold clean linked list is larger than a first threshold;
a second replacement unit, used for eliminating data pages from the head of the cold dirty linked list when the usage of the buffer reaches the replacement threshold, the size of the cold clean linked list is smaller than the first threshold, and the size of the cold dirty linked list is larger than the first threshold;
a third replacement unit, used for eliminating data pages from the head of the cold clean linked list when the usage of the buffer reaches the replacement threshold, the sizes of the cold clean linked list and the cold dirty linked list are both smaller than the first threshold, and the size of the cold clean linked list is larger than a second threshold;
a fourth replacement unit, used for eliminating data pages from the head of the cold dirty linked list when the usage of the buffer reaches the replacement threshold, the sizes of the cold clean linked list and the cold dirty linked list are both smaller than the first threshold, the size of the cold clean linked list is smaller than the second threshold, and the size of the cold dirty linked list is larger than the second threshold, wherein the first threshold is larger than the second threshold;
and a fifth replacement unit, used for eliminating data pages from the hot linked list when the usage of the buffer reaches the replacement threshold and the sizes of the cold clean linked list and the cold dirty linked list are both smaller than the second threshold.
Further, in the fifth replacement unit, it is sequentially judged, in order from the head to the tail of the hot linked list, whether the life value of each data page is smaller than the predetermined value or its access count is 1; if so, the data page is eliminated, and if not, the access count of the page is decremented by 1.
Further, the system comprises a main thread and a sub-thread, the sub-thread being used for performing the dispatch processing, replacing data pages from the buffer, and dynamically adjusting the size of the cold data area; after receiving an access request, the main thread judges whether the sub-thread needs to be called and activates it.
Further, the calculation formula of the novelty R is as follows:
R = (Tr - Tf) / (Tc - Tf)
where Tr is the time the data page was last accessed, Tf is the time the data page was first accessed, and Tc is the current time.
Further, the cost value of the write operation is
(Cr + Cw) / Cr
and the cost value of a read operation is 1, where Cw is the write latency of the flash memory and Cr is the read latency of the flash memory.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, they are described in a relatively simple manner, and reference may be made to some descriptions of method embodiments for relevant points. The above-described system embodiments are merely illustrative, wherein the modules or units described as separate parts may or may not be physically separate, and the parts displayed as modules or units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The foregoing is only a preferred embodiment of the present invention, and although the invention has been disclosed by way of preferred embodiments, these are not intended to limit it. Using the methods and technical content disclosed above, those skilled in the art can make numerous possible variations and modifications to the technical solution, or revise it into equivalent embodiments with equivalent variations, without departing from the scope of the technical solution. Therefore, any simple modification, equivalent change or modification made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solution, still falls within the protection scope of the technical solution of the invention.

Claims (14)

1. A method for managing a flash memory buffer area is characterized by comprising the following steps:
according to the access characteristics of the flash memory, respectively establishing a cold clean linked list, a cold dirty linked list and a hot linked list in the buffer, wherein the cold clean linked list stores data pages that have undergone only a single read operation, the cold dirty linked list stores data pages that have undergone only a single write operation, and the hot linked list stores data pages migrated after a page in the cold clean or cold dirty linked list is accessed again, the data pages of each linked list carrying a read-write flag and an access count;
performing dispatch processing when data pages need to be dispatched from the hot linked list to the cold clean linked list or the cold dirty linked list, the dispatch processing specifically comprising: sequentially judging, in order from the head to the tail of the list, whether the life value of each data page in the hot linked list is smaller than a predetermined value or its access count is 1; if so, dispatching the page to the cold clean linked list or the cold dirty linked list according to its read-write flag, and if not, decrementing the access count of the page by 1; wherein the weight factors of the life value comprise the access count, the novelty and the read-write cost, the novelty being the probability that the data page is accessed again at the current moment, with this probability higher the closer the page's latest access is to the current moment, and the cost value of a read operation in the read-write cost being smaller than that of a write operation;
when the usage of the buffer reaches a replacement threshold, the data page is replaced from the buffer.
2. The management method according to claim 1, further comprising dynamically adjusting the size of the cold data area, specifically comprising: each time a data page is replaced from the cold clean linked list or the cold dirty linked list, recording the respective replacement counts; when the replacement ratio is larger than the write-read cost ratio of the flash memory, expanding the cold clean linked list and reducing the cold dirty linked list; and when the replacement ratio is smaller than the write-read cost ratio, reducing the cold clean linked list and expanding the cold dirty linked list, wherein the replacement ratio is the ratio of the number of pages replaced from the cold clean linked list to the number replaced from the cold dirty linked list, and the write-read cost ratio is the ratio of the sum of the read and write latencies of the flash memory to the read latency.
3. The management method of claim 1, wherein the step of replacing data pages from the buffer when the usage of the buffer reaches the replacement threshold comprises:
when the usage of the buffer reaches the replacement threshold, judging whether the size of the cold clean linked list is larger than a first threshold, and if so, eliminating a data page from the head of the cold clean linked list;
if not, judging whether the size of the cold dirty linked list is larger than the first threshold, and if so, eliminating a data page from the head of the cold dirty linked list;
if not, judging whether the size of the cold clean linked list is larger than a second threshold, and if so, eliminating a data page from the head of the cold clean linked list;
if not, judging whether the size of the cold dirty linked list is larger than the second threshold, and if so, eliminating a data page from the head of the cold dirty linked list, wherein the first threshold is larger than the second threshold;
if not, eliminating a data page from the hot linked list.
4. The method of claim 3, wherein the step of eliminating data pages from the hot linked list comprises:
sequentially judging, in order from the head to the tail of the hot linked list, whether the life value of each data page is smaller than the predetermined value or its access count is 1; if so, eliminating the data page, and if not, decrementing the access count of the page by 1.
5. The management method according to claim 4, wherein the management method is implemented by multiple threads comprising a main thread and a sub-thread, the sub-thread being used for performing the dispatch processing, replacing data pages from the buffer, and dynamically adjusting the size of the cold data area, and the main thread, after receiving an access request, determining whether the sub-thread needs to be called and activating it.
6. The management method according to any one of claims 1 to 5, characterized in that the novelty R is calculated as follows:
R = (Tr - Tf) / (Tc - Tf)
where Tr is the time the data page was last accessed, Tf is the time the data page was first accessed, and Tc is the current time.
7. The method of any of claims 1-5, wherein the cost value of a write operation is
(Cr + Cw) / Cr
The cost value of the read operation is 1, wherein Cw is the write delay of the flash memory, and Cr is the read delay of the flash memory.
8. A system for managing a flash cache, comprising:
a cold clean linked list, a cold dirty linked list and a hot linked list, respectively established in the buffer according to the access characteristics of the flash memory, wherein the cold clean linked list stores data pages that have undergone only a single read operation, the cold dirty linked list stores data pages that have undergone only a single write operation, and the hot linked list stores data pages migrated after a page in the cold clean or cold dirty linked list is accessed again, the data pages of each linked list carrying a read-write flag and an access count;
a dispatch processing unit, used for dispatching data pages from the hot linked list to the cold clean linked list or the cold dirty linked list, specifically comprising: sequentially judging, in order from the head to the tail of the list, whether the life value of each data page in the hot linked list is smaller than a predetermined value or its access count is 1; if so, dispatching the page to the cold clean linked list or the cold dirty linked list according to its read-write flag, and if not, decrementing the access count of the page by 1; wherein the weight factors of the life value comprise the access count, the novelty and the read-write cost, the novelty being the probability that the data page is accessed again at the current moment, with this probability higher the closer the page's latest access is to the current moment, and the cost value of a read operation in the read-write cost being smaller than that of a write operation;
a replacement unit for replacing the data page from the buffer when the usage amount of the buffer reaches a replacement threshold.
9. The management system according to claim 8, further comprising a dynamic adjustment unit, used for dynamically adjusting the size of the cold data area, specifically comprising: each time a data page is replaced from the cold clean linked list or the cold dirty linked list, recording the respective replacement counts; when the replacement ratio is larger than the write-read cost ratio of the flash memory, expanding the cold clean linked list and reducing the cold dirty linked list; and when the replacement ratio is smaller than the write-read cost ratio, reducing the cold clean linked list and expanding the cold dirty linked list, wherein the replacement ratio is the ratio of the number of pages replaced from the cold clean linked list to the number replaced from the cold dirty linked list, and the write-read cost ratio is the ratio of the sum of the read and write latencies of the flash memory to the read latency.
10. The management system according to claim 8, wherein the replacement unit includes:
a judging unit for judging whether the usage amount of the buffer reaches a replacement threshold;
a first replacement unit, used for eliminating data pages from the head of the cold clean linked list when the usage of the buffer reaches the replacement threshold and the size of the cold clean linked list is larger than a first threshold;
a second replacement unit, used for eliminating data pages from the head of the cold dirty linked list when the usage of the buffer reaches the replacement threshold, the size of the cold clean linked list is smaller than the first threshold, and the size of the cold dirty linked list is larger than the first threshold;
a third replacement unit, used for eliminating data pages from the head of the cold clean linked list when the usage of the buffer reaches the replacement threshold, the sizes of the cold clean linked list and the cold dirty linked list are both smaller than the first threshold, and the size of the cold clean linked list is larger than a second threshold;
a fourth replacement unit, used for eliminating data pages from the head of the cold dirty linked list when the usage of the buffer reaches the replacement threshold, the sizes of the cold clean linked list and the cold dirty linked list are both smaller than the first threshold, the size of the cold clean linked list is smaller than the second threshold, and the size of the cold dirty linked list is larger than the second threshold, wherein the first threshold is larger than the second threshold;
and a fifth replacement unit, used for eliminating data pages from the hot linked list when the usage of the buffer reaches the replacement threshold and the sizes of the cold clean linked list and the cold dirty linked list are both smaller than the second threshold.
11. The management system according to claim 10, wherein the fifth replacement unit sequentially determines, in order from the head to the tail of the hot linked list, whether the life value of each data page is smaller than a predetermined value or its access count is 1; if so, the data page is eliminated; if not, the access count of the data page is decremented by 1.
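The head-to-tail scan of the hot linked list in claim 11 can be sketched as follows (the field names `life` and `accesses` and the dict-based page records are hypothetical):

```python
from collections import deque

def evict_from_hot(hot, min_life):
    """Scan the hot linked list from head to tail. Evict and return the
    first page whose life value is below the predetermined value or whose
    access count is 1; otherwise decrement its access count and move on."""
    for page in list(hot):
        if page["life"] < min_life or page["accesses"] == 1:
            hot.remove(page)
            return page
        page["accesses"] -= 1  # page survives this pass at a lower count
    return None  # no page qualified for eviction on this pass
```

Decrementing the access count of surviving pages means repeatedly scanned pages eventually reach a count of 1 and become eligible, so the hot list cannot retain stale pages indefinitely.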
12. The management system according to claim 8, further comprising: a main thread for scheduling, replacing data pages out of the buffer area, and dynamically adjusting the size of the cold data area, wherein after receiving an access request the main thread judges whether a sub-thread needs to be called and, if so, activates the sub-thread.
13. The management system according to any one of claims 8 to 12, wherein the novelty R is calculated as follows:
(formula given as image FDA0002298942840000041 in the original publication)
wherein Tr is the time when the data page was last accessed, Tf is the time when the data page was first accessed, and Tc is the current time.
14. The management system according to any one of claims 8 to 12, wherein the cost value of a write operation is
(formula given as image FDA0002298942840000042 in the original publication)
and the cost value of a read operation is 1, wherein Cw is the write delay of the flash memory and Cr is the read delay of the flash memory.
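Since the read cost is normalized to 1, one natural reading of the image-only write-cost formula is the delay ratio Cw/Cr. The sketch below assumes that ratio form and should not be taken as the patent's exact formula:

```python
def op_cost(is_write, cw, cr):
    """Operation cost normalized to the flash read delay.
    Assumes write cost = Cw / Cr (an assumption: the original
    formula is given only as an image); a read costs 1."""
    return cw / cr if is_write else 1.0
```

Because NAND program (write) delay is typically several times the read delay, this weighting makes evicting a dirty page, which triggers a write-back, correspondingly more expensive than evicting a clean one.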
CN201610324044.9A 2016-05-16 2016-05-16 Management method and system for flash memory cache region Active CN107391398B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610324044.9A CN107391398B (en) 2016-05-16 2016-05-16 Management method and system for flash memory cache region


Publications (2)

Publication Number Publication Date
CN107391398A CN107391398A (en) 2017-11-24
CN107391398B true CN107391398B (en) 2020-04-14

Family

ID=60338602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610324044.9A Active CN107391398B (en) 2016-05-16 2016-05-16 Management method and system for flash memory cache region

Country Status (1)

Country Link
CN (1) CN107391398B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108062278A (en) * 2018-01-12 2018-05-22 江苏华存电子科技有限公司 A kind of cold and hot data-analyzing machine of flash memory and analysis method
CN108762664B (en) * 2018-02-05 2021-03-16 杭州电子科技大学 Solid state disk page-level cache region management method
CN109857680B (en) * 2018-11-21 2020-09-11 杭州电子科技大学 LRU flash memory cache management method based on dynamic page weight
CN111506524B (en) * 2019-01-31 2024-01-30 华为云计算技术有限公司 Method and device for eliminating and preloading data pages in database
CN111736758A (en) * 2019-03-25 2020-10-02 贵州白山云科技股份有限公司 Setting method, device, equipment and medium of persistent cache
CN110032671B (en) * 2019-04-12 2021-06-18 北京百度网讯科技有限公司 User track information processing method and device, computer equipment and storage medium
CN110502452B (en) * 2019-07-12 2022-03-29 华为技术有限公司 Method and device for accessing mixed cache in electronic equipment
CN110888600B (en) * 2019-11-13 2021-02-12 西安交通大学 Buffer area management method for NAND flash memory
CN111159066A (en) * 2020-01-07 2020-05-15 杭州电子科技大学 Dynamically-adjusted cache data management and elimination method
CN113485642A (en) * 2021-07-01 2021-10-08 维沃移动通信有限公司 Data caching method and device
CN114896177A (en) * 2022-05-05 2022-08-12 北京骏德时空科技有限公司 Data storage management method, apparatus, device, medium and product

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156753A (en) * 2011-04-29 2011-08-17 中国人民解放军国防科学技术大学 Data page caching method for file system of solid-state hard disc
CN103984736A (en) * 2014-05-21 2014-08-13 西安交通大学 Efficient buffer management method for NAND flash memory database system
CN104090852A (en) * 2014-07-03 2014-10-08 华为技术有限公司 Method and equipment for managing hybrid cache

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8161241B2 (en) * 2010-01-12 2012-04-17 International Business Machines Corporation Temperature-aware buffered caching for solid state storage



Similar Documents

Publication Publication Date Title
CN107391398B (en) Management method and system for flash memory cache region
US10241919B2 (en) Data caching method and computer system
CN107193646B (en) High-efficiency dynamic page scheduling method based on mixed main memory architecture
CN102760101B (en) SSD-based (Solid State Disk) cache management method and system
CN108762664B (en) Solid state disk page-level cache region management method
CN103984736B (en) Efficient buffer management method for NAND flash memory database system
CN110888600B (en) Buffer area management method for NAND flash memory
CN104503703B (en) The treating method and apparatus of caching
CN105095116A (en) Cache replacing method, cache controller and processor
CN106294197B (en) Page replacement method for NAND flash memory
CN108845957B (en) Replacement and write-back self-adaptive buffer area management method
CN105556485A (en) Neighbor based and dynamic hot threshold based hot data identification
CN108139872A (en) A kind of buffer memory management method, cache controller and computer system
CN103257935A (en) Cache management method and application thereof
CN107247675B (en) A kind of caching selection method and system based on classification prediction
US20090094391A1 (en) Storage device including write buffer and method for controlling the same
CN108762671A (en) Mixing memory system and its management method based on PCM and DRAM
CN110532200B (en) Memory system based on hybrid memory architecture
CN110795363B (en) Hot page prediction method and page scheduling method of storage medium
CN107423229B (en) Buffer area improvement method for page-level FTL
CN103150136A (en) Implementation method of least recently used (LRU) policy in solid state drive (SSD)-based high-capacity cache
CN111580754B (en) Write-friendly flash memory solid-state disk cache management method
CN108572799B (en) Data page migration method of heterogeneous memory system of bidirectional hash chain table
CN110262982A (en) A kind of method of solid state hard disk address of cache
CN105243030A (en) Data caching method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant