CN107391398A - Management method and system for flash memory cache region - Google Patents

Management method and system for flash memory cache region

Info

Publication number
CN107391398A
Authority
CN
China
Prior art keywords
linked list
cold
data page
dirty
clean
Prior art date
Legal status
Granted
Application number
CN201610324044.9A
Other languages
Chinese (zh)
Other versions
CN107391398B (en)
Inventor
王力玉
陈岚
Current Assignee
Institute of Microelectronics of CAS
Original Assignee
Institute of Microelectronics of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Microelectronics of CAS filed Critical Institute of Microelectronics of CAS
Priority to CN201610324044.9A
Publication of CN107391398A
Application granted
Publication of CN107391398B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0871 Allocation or management of cache space
    • G06F12/12 Replacement control
    • G06F12/121 Replacement control using replacement algorithms


Abstract

The invention provides a management method and system for a flash memory cache region. Three linked lists are established in the cache region: a cold clean linked list, a cold dirty linked list, and a hot linked list, which manage cold clean data pages, cold dirty data pages, and hot data respectively. When hot data is demoted, the pages are examined in order from the head of the list to its tail, and a life value decides whether each page needs to be demoted. The life value is a quantity combining access count, novelty, and read/write cost, so it fully accounts for how often a page is accessed, the probability that it will be accessed again, and the read and write latencies of the flash memory. This raises the hit rate of data access and improves the operating performance of the cache region.

Description

Management method and system for a flash memory cache region
Technical field
The present invention relates to the field of storage systems, and more particularly to a management method and system for a flash memory cache region.
Background technology
With the continuous development of big data applications, the performance requirements on storage media keep rising. Flash memory, a representative of the new non-volatile storage media, offers high read/write speed, low power consumption, and shock resistance, and is widely used in consumer electronics and enterprise storage systems.
LRU (Least Recently Used) is the most basic cache algorithm: the least recently used cached page is replaced first. In this algorithm, pages are linked into a list according to how recently they were accessed; the head end of the list holds the pages accessed earliest, the tail end holds the pages accessed most recently, and when cached pages are evicted, eviction starts from the head end.
However, the LRU algorithm cannot reliably evict the truly least recently used page. Moreover, flash memory has asymmetric read and write characteristics: an erase operation is required before writing, so a write operation takes much longer than a read operation. Cache management should therefore reduce write operations to the flash memory as much as possible, to improve the overall performance of flash disk operations.
Based on these characteristics of flash memory, a series of improvements to the LRU algorithm have been made. In one existing improved LRU algorithm, a cold clean linked list, a cold dirty linked list, and a hot linked list are established in the cache region. The cold clean list stores pages that have undergone only one read operation, called cold clean pages; the cold dirty list stores pages that have undergone only one write operation, called cold dirty pages; and the hot list stores pages read or written more than once, including both hot clean and hot dirty pages. When pages are demoted from the hot list to the cold lists, the search starts from the head of the hot list and clean pages are demoted preferentially. This reduces writes to flash, but it does not truly take the access frequency and recency of the data into account, which hurts the hit rate and the operating performance of the cache region. In addition, when a data page is replaced, cold clean pages are replaced first, after which cold dirty pages and hot clean pages are selected for replacement by probability. Because a fixed probability is used, pages in one list may be evicted excessively, or pages that have just entered the cache may be evicted too early, harming cache efficiency and performance.
Summary of the invention
The present invention aims to solve at least one of the above problems by providing a management method and system for a flash memory cache region that truly take the access frequency, recency, and read/write cost of the data into account, improving the hit rate and the operating performance of the cache region.
To achieve the above object, the present invention provides the following technical solution:
A management method for a flash memory cache region, comprising:
establishing, according to the access characteristics of the flash memory, a cold clean linked list, a cold dirty linked list, and a hot linked list in the cache region, wherein the cold clean list stores data pages that have undergone only one read operation, the cold dirty list stores data pages that have undergone only one write operation, and the hot list stores data pages migrated from the cold clean list or the cold dirty list after being accessed again, each data page in these lists carrying a read/write flag and an access count;
when data pages need to be demoted from the hot list to the cold clean list or the cold dirty list, performing demotion processing, which specifically comprises: examining the data pages of the hot list in order from head to tail, judging in turn whether each page's life value is below a predetermined value or its access count is 1; if so, demoting the page to the cold clean list or the cold dirty list according to its read/write flag; if not, decrementing the page's access count by 1; wherein the life value weighs access count, novelty, and read/write cost; the novelty is the probability that the page will be accessed again at the current time, a page whose last access is nearer to the current time having a higher probability of being accessed again; and in the read/write cost, the cost value of a read operation is smaller than that of a write operation; and
when the usage of the buffer area reaches a replacement threshold, replacing data pages from the buffer area.
Optionally, the method further comprises dynamic adjustment of the cold data area sizes, which specifically comprises: when data pages are replaced from the cold clean list and the cold dirty list, separately recording the number of pages replaced from each list; when the replacement ratio is greater than the write-read cost ratio of the flash memory, extending the cold clean list and shrinking the cold dirty list; when the replacement ratio is less than the write-read cost ratio, shrinking the cold clean list and extending the cold dirty list; wherein the replacement ratio is the ratio of the number of pages replaced from the cold clean list to the number replaced from the cold dirty list, and the write-read cost ratio is the ratio of the sum of the read latency and write latency of the flash memory to its read latency.
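The adjustment rule above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the fixed adjustment step, and the latency figures are assumptions.

```python
def adjust_cold_sizes(clean_evictions, dirty_evictions,
                      read_latency_us, write_latency_us,
                      clean_target, dirty_target, step=1):
    """Rebalance the target sizes of the cold clean and cold dirty lists.

    replacement ratio     = clean evictions / dirty evictions
    write-read cost ratio = (read latency + write latency) / read latency
    """
    if dirty_evictions == 0:
        return clean_target, dirty_target  # no signal yet
    replacement_ratio = clean_evictions / dirty_evictions
    cost_ratio = (read_latency_us + write_latency_us) / read_latency_us
    if replacement_ratio > cost_ratio:
        # clean pages are being evicted disproportionately: grow the clean list
        clean_target += step
        dirty_target = max(1, dirty_target - step)
    elif replacement_ratio < cost_ratio:
        # dirty pages are being evicted disproportionately: grow the dirty list
        clean_target = max(1, clean_target - step)
        dirty_target += step
    return clean_target, dirty_target
```

For example, with a 50 us read latency and 500 us write latency the cost ratio is 11, so a run of 20 clean evictions per dirty eviction grows the clean list by one step.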
Optionally, the step of replacing data pages from the buffer area when its usage reaches the replacement threshold comprises:

when the usage of the buffer area reaches the replacement threshold, judging whether the size of the cold clean list exceeds a first threshold, and if so, evicting a data page from the head of the cold clean list;

if not, judging whether the size of the cold dirty list exceeds the first threshold, and if so, evicting a data page from the head of the cold dirty list;

if not, judging whether the size of the cold clean list exceeds a second threshold, and if so, evicting a data page from the head of the cold clean list;

if not, judging whether the size of the cold dirty list exceeds the second threshold, and if so, evicting a data page from the head of the cold dirty list, wherein the first threshold is greater than the second threshold;

if not, evicting a data page from the hot list.
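The five-branch cascade above amounts to a simple decision function choosing which list gives up its head page. A minimal sketch in Python; the function and list names are mine, not from the patent:

```python
def choose_victim_list(clean_size, dirty_size, t1, t2):
    """Return which list to evict from, following the two-threshold cascade
    (t1 > t2): prefer a cold clean list above the large threshold, then a
    cold dirty list above it, retry both with the smaller threshold, and
    fall back to the hot list. Eviction then takes the head page of the
    chosen list."""
    assert t1 > t2
    if clean_size > t1:
        return "cold_clean"
    if dirty_size > t1:
        return "cold_dirty"
    if clean_size > t2:
        return "cold_clean"
    if dirty_size > t2:
        return "cold_dirty"
    return "hot"
```

Using two thresholds rather than one fixed probability means a list is only tapped when it is genuinely large, which is exactly the over-eviction problem the background section attributes to the fixed-probability scheme.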
Optionally, the step of evicting a data page from the hot list comprises:

examining the hot list in order from head to tail and judging in turn whether each page's life value is below the predetermined value or its access count is 1; if so, evicting the page; if not, decrementing the page's access count by 1.
Optionally, the management method is implemented with multiple threads, comprising a main thread and a sub-thread. The sub-thread performs the demotion processing, the replacement of data pages from the buffer area, and the dynamic adjustment of the cold data area sizes; after receiving an access request, the main thread judges whether the sub-thread needs to be invoked and activates it.
Optionally, the novelty R is calculated as follows:

R = (Tr - Tf) / (Tc - Tf)

wherein Tr is the time at which the data page was last accessed, Tf is the time at which the data page was first accessed, and Tc is the current time.
Optionally, the cost value of a write operation is Cw/Cr and the cost value of a read operation is 1, wherein Cw is the write latency of the flash memory and Cr is its read latency.
In addition, the present invention also provides a management system for a flash memory cache region, comprising:

a cold clean linked list, a cold dirty linked list, and a hot linked list established in the cache region according to the access characteristics of the flash memory, wherein the cold clean list stores data pages that have undergone only one read operation, the cold dirty list stores data pages that have undergone only one write operation, the hot list stores data pages migrated from the cold clean list or the cold dirty list after being accessed again, and each data page in these lists carries a read/write flag and an access count;

a demotion processing unit, configured to perform demotion processing when data pages need to be demoted from the hot list to the cold clean list or the cold dirty list, the demotion processing specifically comprising: examining the hot list in order from head to tail, judging in turn whether each page's life value is below a predetermined value or its access count is 1; if so, demoting the page to the cold clean list or the cold dirty list according to its read/write flag; if not, decrementing the page's access count by 1; wherein the life value weighs access count, novelty, and read/write cost, the novelty is the probability that the page will be accessed again at the current time, a page whose last access is nearer to the current time having a higher probability of being accessed again, and in the read/write cost the cost value of a read operation is smaller than that of a write operation; and

a replacement unit, configured to replace data pages from the buffer area when its usage reaches a replacement threshold.
Optionally, the system further comprises a dynamic adjustment unit, configured to dynamically adjust the cold data area sizes, specifically comprising: when data pages are replaced from the cold clean list and the cold dirty list, separately recording the number of pages replaced from each list; when the replacement ratio is greater than the write-read cost ratio of the flash memory, extending the cold clean list and shrinking the cold dirty list; when the replacement ratio is less than the write-read cost ratio, shrinking the cold clean list and extending the cold dirty list; wherein the replacement ratio is the ratio of the number of pages replaced from the cold clean list to the number replaced from the cold dirty list, and the write-read cost ratio is the ratio of the sum of the read latency and write latency of the flash memory to its read latency.
Optionally, the replacement unit comprises:

a judging unit, configured to judge whether the usage of the buffer area has reached the replacement threshold;

a first replacement unit, configured to evict a data page from the head of the cold clean list when the usage of the buffer area reaches the replacement threshold and the size of the cold clean list exceeds a first threshold;

a second replacement unit, configured to evict a data page from the head of the cold dirty list when the usage of the buffer area reaches the replacement threshold, the size of the cold clean list is below the first threshold, and the size of the cold dirty list exceeds the first threshold;

a third replacement unit, configured to evict a data page from the head of the cold clean list when the usage of the buffer area reaches the replacement threshold, the sizes of both the cold clean list and the cold dirty list are below the first threshold, and the size of the cold clean list exceeds a second threshold;

a fourth replacement unit, configured to evict a data page from the head of the cold dirty list when the usage of the buffer area reaches the replacement threshold, the sizes of both cold lists are below the first threshold, the size of the cold clean list is below the second threshold, and the size of the cold dirty list exceeds the second threshold, wherein the first threshold is greater than the second threshold; and

a fifth replacement unit, configured to evict a data page from the hot list when the usage of the buffer area reaches the replacement threshold and the sizes of both the cold clean list and the cold dirty list are below the second threshold.
Optionally, in the fifth replacement unit, the hot list is examined in order from head to tail, judging in turn whether each page's life value is below the predetermined value or its access count is 1; if so, the page is evicted; if not, its access count is decremented by 1.
Optionally, the system further comprises a main thread and a sub-thread. The sub-thread performs the demotion processing, the replacement of data pages from the buffer area, and the dynamic adjustment of the cold data area sizes; after receiving an access request, the main thread judges whether the sub-thread needs to be invoked and activates it.
Optionally, the novelty R is calculated as follows:

R = (Tr - Tf) / (Tc - Tf)

wherein Tr is the time at which the data page was last accessed, Tf is the time at which the data page was first accessed, and Tc is the current time.
Optionally, the cost value of a write operation is Cw/Cr and the cost value of a read operation is 1, wherein Cw is the write latency of the flash memory and Cr is its read latency.
With the management method and system for a flash memory cache region provided by embodiments of the present invention, three linked lists are established in the cache region: a cold clean list, a cold dirty list, and a hot list, which manage cold clean data pages, cold dirty data pages, and hot data respectively. When hot data is demoted, the pages are examined in order from head to tail, and a life value decides whether each page needs to be demoted. The life value is a quantity combining the three factors of access count, novelty, and read/write cost, fully accounting for how often a page is accessed, the probability that it will be accessed again, and the cost of read and write latency, thereby raising the hit rate of data access and improving the operating performance of the cache region.
Further, when data pages are replaced, replacement starts from the cold area: two thresholds of different sizes decide whether to evict from the cold clean list or the cold dirty list, and only afterwards is eviction of hot data from the hot list considered. This avoids evicting the pages of one list excessively, prevents pages that have just been read into the buffer from being evicted too early, and prevents cold dirty pages from residing in the cache too long, improving the operating performance of the cache.
Further, when data pages are replaced, the numbers of pages replaced from the cold clean list and from the cold dirty list are recorded separately, and comparing the replacement ratio with the write-read cost ratio of the flash memory dynamically adjusts the relative sizes of the cold clean list and the cold dirty list, avoiding excessive eviction of clean or dirty pages and preventing data that has just entered the cache from being replaced.
Further, the demotion processing, the replacement of data pages from the buffer area, and the dynamic adjustment of the cold area sizes are implemented with multiple threads, which increases the parallelism of cache management, shortens the running time, and improves operating efficiency.
Brief description of the drawings
To explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the data page flow between the linked lists in the management method for a flash memory cache region according to an embodiment of the present invention;

Fig. 2 is a schematic structural diagram of the management system for a flash memory cache region according to an embodiment of the present invention.
Embodiment
To make the above objects, features, and advantages of the present invention easier to understand, the embodiments of the present invention are described in detail below with reference to the drawings.
Many specific details are set forth in the following description to facilitate a thorough understanding of the present invention, but the present invention can also be implemented in other ways different from those described here, and those skilled in the art can make similar generalizations without departing from the spirit of the present invention; therefore, the present invention is not limited by the specific embodiments disclosed below.
The present invention proposes a management method for a flash memory cache region, which is an improvement of the flash-oriented LRU algorithm and comprises:
establishing, according to the operating characteristics of the flash memory, a cold clean linked list, a cold dirty linked list, and a hot linked list in the buffer area, wherein the cold clean list stores data pages that have undergone only one read operation, the cold dirty list stores data pages that have undergone only one write operation, and the hot list stores data pages migrated from the cold clean list or the cold dirty list after being accessed again, the data pages in these lists carrying a read/write flag and an access count;
when data pages need to be demoted from the hot list to the cold clean list or the cold dirty list, performing demotion processing, which specifically comprises: examining the data pages of the hot list in order from head to tail, judging in turn whether each page's life value is below a predetermined value or its access count is 1; if so, demoting the page to the tail of the cold clean list or the cold dirty list according to its read/write flag; if not, decrementing the page's access count by 1; wherein the life value weighs access count, novelty, and read/write cost, the novelty is the probability that the page will be accessed again at the current time, a page whose last access is nearer to the current time having a higher probability of being accessed again, and in the read/write cost the cost value of a read operation is smaller than that of a write operation; and
when the usage of the buffer area reaches the replacement threshold, replacing data pages from the buffer area.
The cold clean list, the cold dirty list, and the hot list are all dynamic linked lists: the head end of each list stores the pages accessed earliest, and the tail end stores the pages accessed most recently.
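The head-is-oldest, tail-is-newest behaviour of these dynamic lists can be sketched with an ordered dictionary. An illustrative sketch only; the class and method names are assumptions, not the patent's:

```python
from collections import OrderedDict

class LRUList:
    """Dynamic list: the head holds the earliest-accessed page,
    the tail holds the most recently accessed page."""
    def __init__(self):
        self.pages = OrderedDict()  # iteration order = head -> tail

    def touch(self, page_no, meta=None):
        # (re)insert at the tail on each access
        self.pages.pop(page_no, None)
        self.pages[page_no] = meta

    def evict_head(self):
        # remove and return the earliest-accessed page
        return self.pages.popitem(last=False)

lru = LRUList()
for p in (1, 2, 3):
    lru.touch(p)
lru.touch(1)                 # page 1 becomes the newest again
victim, _ = lru.evict_head() # page 2 is now the earliest-accessed page
```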
In the present invention, three linked lists are established in the buffer area: a cold clean list, a cold dirty list, and a hot list, managing cold clean data pages, cold dirty data pages, and hot data respectively. When hot data is demoted, the pages are examined in order from head to tail, and the life value decides whether a page needs to be demoted. The life value is a quantity combining the three factors of access count, novelty, and read/write cost, fully accounting for how often a page is accessed, the probability that it will be accessed again, and the cost of read and write latency, thereby raising the hit rate of data access and improving the operating performance of the buffer area.
To better understand the technical solution and technical effects of the present invention, specific embodiments are described in detail below with reference to Fig. 1.
As shown in Fig. 1, in the cache region of the flash memory, a cold clean linked list, a cold dirty linked list, and a hot linked list are established according to the operation requests to the flash memory. The cold clean list stores data pages that have undergone only one read operation, the cold dirty list stores data pages that have undergone only one write operation, and the hot list stores data pages migrated from the cold clean list or the cold dirty list after being accessed again; the data pages in these lists carry a read/write flag and an access count.
When the upper layer issues an access request to the flash memory, the request is usually a read operation or a write operation. According to these different access characteristics, the cold clean list, the cold dirty list, and the hot list are established in the buffer area, with the cold clean list and the cold dirty list forming the cold data area; the buffer area is thus logically divided into three lists, and each node in these lists corresponds to a specific data page. If the access request is a read operation and the page has been read only once, the page is linked into the cold clean list; if the request is a write operation and the page has been written only once, the page is linked into the cold dirty list; if a page in the cold clean list or the cold dirty list is accessed again, whether by a read or a write, the page is linked into the hot list and its access count is incremented. That is, both hot dirty data and hot clean data are linked into the hot list, and every data page in these lists carries at least a read/write flag and an access count.
In an embodiment of the present invention, each data page corresponds to a data structure used to manage it. In a preferred embodiment, for ease of indexing, a hash table is used to manage the data pages of the buffer area; the hash table supports insertion, lookup, deletion, and other operations on all data pages. Two data structures are defined. One is a global data structure that records parameters such as the usage of the buffer area, the access time, and the head and tail pointers of each list. The other is the data structure of each data page, which records the page's read/write flag, logical page number, buffer page number, the time of its first access, the time of its most recent access, its access count, its position information, and so on. Through the hash table, the list among the above three to which an access request belongs can be found quickly. The data pages in the hot list all migrate from the cold clean list or the cold dirty list, and a migrated page is always moved to the tail of the hot list.
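The two data structures described above might look as follows. This is a hedged sketch: every field name is hypothetical, chosen only to mirror the description.

```python
from dataclasses import dataclass, field

@dataclass
class PageEntry:
    """Per-page metadata (hypothetical field names following the description)."""
    logical_page_no: int
    buffer_page_no: int
    is_dirty: bool            # read/write flag: True means written, False means read
    first_access_time: float
    last_access_time: float
    access_count: int = 1
    list_name: str = ""       # which of the three lists currently holds the page

@dataclass
class CacheState:
    """Global bookkeeping for the buffer area."""
    usage: int = 0
    current_time: float = 0.0
    page_table: dict = field(default_factory=dict)  # hash table: logical page no -> PageEntry

    def lookup(self, logical_page_no):
        # O(1) check of which entry, if any, serves the request
        return self.page_table.get(logical_page_no)

state = CacheState()
entry = PageEntry(logical_page_no=7, buffer_page_no=0, is_dirty=False,
                  first_access_time=0.0, last_access_time=0.0)
state.page_table[entry.logical_page_no] = entry
state.usage += 1
```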
Demotion processing is needed when the usage of the hot list is too large or the usage of the cold lists is insufficient, that is, when hot data in the hot list must be moved to the cold data area. When the size of the hot list exceeds a predetermined maximum threshold range, the hot list must be shrunk, so demotion processing must be performed on it; in addition, when the sizes of the cold clean list and the cold dirty list are both below the predetermined maximum threshold range, the cold area must be replenished with data pages, so pages demoted from the hot list are needed.
During demotion, the data pages in the hot list that are truly inactive should be evicted to the cold data area. Based on this, in an embodiment of the present invention, the demotion processing comprises: examining the hot list in order from head to tail, judging in turn whether each page's life value is below a predetermined value or its access count is 1; if so, demoting the page to the tail of the cold clean list or the cold dirty list according to its read/write flag; wherein the life value weighs access count, novelty, and read/write cost, the novelty is the probability that the page will be accessed again at the current time, a page whose last access is nearer to the current time having a higher probability of being accessed again, and the cost value of a read operation is smaller than that of a write operation; if not, decrementing the page's access count by 1.
Whether demotion processing is needed is determined by judging whether the size of the hot list exceeds the predetermined maximum threshold range, or whether the sizes of the cold clean list and the cold dirty list are both below the predetermined maximum threshold range; if either condition holds, demotion processing is performed. In the demotion process, screening starts from the head of the hot list, and each node's page is judged by whether its life value is below the predetermined value to decide whether to evict it to the cold data area. The head stores the data accessed earliest, so starting the screening from the head improves its efficiency. A page whose life value is below the predetermined value is demoted, according to its read/write flag, to the tail of the corresponding cold clean list or cold dirty list.
A page whose life value exceeds the predetermined value has a relatively high probability of being accessed again, so it is not demoted immediately; instead, its access count is decremented by 1, lowering its heat. If such a page keeps a high life value but is never accessed again, its access count gradually drops to 1, indicating that the page can be demoted. Therefore, during screening, a page whose access count is 1 is also demoted to the cold clean list or the cold dirty list according to its read/write flag.
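Putting the screening rules of the last three paragraphs together, the demotion pass could be sketched as follows. The names and the life-value callback are assumptions; the logic is the head-to-tail scan described above.

```python
from types import SimpleNamespace

def demote_from_hot(hot_list, cold_clean, cold_dirty, life_value, threshold):
    """Scan the hot list from head to tail; demote pages whose life value
    falls below `threshold` or whose access count is 1, otherwise decrement
    the access count. `hot_list` is ordered head -> tail; demoted pages are
    appended to the tail of the target cold list per their read/write flag."""
    survivors = []
    for page in hot_list:
        if life_value(page) < threshold or page.access_count == 1:
            target = cold_dirty if page.is_dirty else cold_clean
            target.append(page)        # tail of the chosen cold list
        else:
            page.access_count -= 1     # cool the page down instead
            survivors.append(page)
    hot_list[:] = survivors

# minimal demonstration with a trivial life value (just the access count)
hot = [SimpleNamespace(no=1, access_count=1, is_dirty=False),
       SimpleNamespace(no=2, access_count=3, is_dirty=True)]
cold_clean, cold_dirty = [], []
demote_from_hot(hot, cold_clean, cold_dirty,
                life_value=lambda p: p.access_count, threshold=2)
```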
The life value involves three weighting factors: the access count, the recency, and the read-write cost of the flash memory. These three factors are described in detail below.
The access count is the number of times F that a data page has been accessed up to the current time. F reflects how frequently the page is accessed, and a higher access frequency implies a higher probability that the page will be accessed again.
The recency is the probability that the data page will be accessed again at the current time. It can be expressed by the distance between the most recent access to the page and the current time: the closer the last access is to the current time, the higher the probability of the page being accessed again. In a specific embodiment, the recency R is defined in terms of Tr, the time at which the data page was last accessed, Tf, the time at which it was first accessed, and Tc, the current time.
The read-write cost assigns different cost values to read and write operations, such that the cost value of a read operation is less than that of a write operation, because by the characteristics of flash memory the latency of a write operation is greater than that of a read operation. Assigning write operations the larger value lets read-accessed data be evicted preferentially, reducing the running time of the system. In a specific embodiment, the read-write cost θ is defined piecewise from the read and write operation latencies, where wr denotes a write operation, rd a read operation, Cw the write latency of the flash memory, and Cr its read latency.
Thus, the three factors above are combined to determine the life value. In one embodiment, the life value L is defined as:
L=F*R* θ
In this embodiment, the weights of the three factors, the access count F, the recency R, and the read-write cost θ, are all set to the same value 1, and their product is taken as the life value. When screening data pages during demotion processing, the predetermined value can be set to 1, or other predetermined values can be set according to the specific demotion requirements. It is understood that in other embodiments the three factors can be given different weights as needed, for example a weight of 0.8 for the access count F, 1.2 for the recency R, and 1 for the read-write cost θ, which yields a different expression for the life value L.
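A minimal sketch of this weighting, under two stated assumptions: the weights act as exponents on the three factors (the patent says only that the factors are weighted), and the read cost is normalized to 1 with the write cost expressed through the latencies Cw and Cr (the excerpt states the read cost is 1 but the write-cost formula itself is not reproduced here):

```python
def life_value(access_count, recency, rw_cost, w_f=1.0, w_r=1.0, w_c=1.0):
    """Life value combining access count F, recency R, and read-write cost
    theta. With all weights set to 1 this reduces to the embodiment's
    L = F * R * theta; treating the weights (e.g. 0.8, 1.2, 1.0) as
    exponents is an assumption of this sketch."""
    return (access_count ** w_f) * (recency ** w_r) * (rw_cost ** w_c)

def rw_cost(is_write, cw, cr):
    """Read cost normalized to 1; the write cost Cw/Cr > 1 reflects the
    higher write latency of flash. The Cw/Cr normalization is an assumed
    choice consistent with a read cost of 1, not a formula quoted from
    the excerpt."""
    return cw / cr if is_write else 1.0
```

With equal weights, a page read three times with recency 0.5 has L = 3 * 0.5 * 1 = 1.5, while the same page written three times (Cw/Cr = 9) has L = 13.5, so dirty pages survive longer in the hot list, which is the intended bias toward evicting cheap-to-reload read data first.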
The demotion processing in buffer management has been described in detail above. The demotion processing of the present invention fully considers the access frequency of data pages, the probability of their being accessed again, and the cost of the read and write latencies of the flash memory, ensuring that the data pages in the hot list are genuinely relatively hot and those in the cold data area genuinely relatively cold. Since the read-write cost is weighed together with data heat, the hit rate of data accesses is raised and the running performance of the buffer is improved.
The space of the buffer is limited, so replacing buffer data pages is a necessary step in buffer management. When the usage of the buffer reaches the replacement threshold and new data is to be cached but no free space remains, existing data pages must be replaced, that is, the data originally in the cache blocks is swapped out to the flash memory so that the new data pages can be stored.
In the present invention, any suitable method may be used to replace data pages from the buffer. In a preferred embodiment, cold clean pages are replaced preferentially, then cold dirty pages, and then hot clean pages. To avoid excessively evicting the data pages of one list or evicting newly admitted data pages too early, a preferred embodiment of the present invention replaces the data pages of the cold data area using a dual-threshold method, specifically including the following steps:
when the usage of the buffer reaches the replacement threshold, judging whether the size of the cold clean list exceeds a first threshold, and if so, evicting data pages from the head of the cold clean list;
if not, judging whether the size of the cold dirty list exceeds the first threshold, and if so, evicting data pages from the head of the cold dirty list;
if not, judging whether the size of the cold clean list exceeds a second threshold, and if so, evicting data pages from the head of the cold clean list;
if not, judging whether the size of the cold dirty list exceeds the second threshold, and if so, evicting data pages from the head of the cold dirty list, wherein the first threshold is greater than the second threshold;
if not, evicting data pages from the hot list.
The above steps are carried out when the usage of the buffer reaches the replacement threshold, that is, when the usage is greater than or equal to the replacement threshold; once the usage falls below the replacement threshold, replacement stops. When replacing data pages, eviction starts with the cold clean data pages, then the cold dirty data pages, and only then are the hot data pages considered. In the concrete replacement, two thresholds, a first threshold and a second threshold, are provided for the data pages of the cold data area, the first threshold being greater than the second threshold.
Thus, when the size of the cold clean list exceeds the first threshold, that is, when the capacity of the cold clean list is large, data pages are evicted from the cold clean list preferentially. The cold clean list is then large and stores long-resident cold data; evicting from here avoids evicting newly admitted data pages and, since no pages need to be written back to flash, improves replacement efficiency. If the size of the cold clean list does not exceed the first threshold, it is then judged whether the size of the cold dirty list exceeds the first threshold; if so, data pages are evicted from the cold dirty list. The cold dirty list is then large and stores long-resident cold data, so evicting data pages from here avoids evicting pages that have just been read in and prevents cold dirty pages from residing in the cache for a long time.
If the size of the cold dirty list is also below the first threshold, the amounts of cold clean and cold dirty data stored in the cache are not especially large; still, compared with hot data, it remains more appropriate to replace cold data first so as to maintain the hit rate. Therefore it is next judged whether the size of the cold clean list exceeds the second threshold; if so, data pages are still evicted from the cold clean list. If the size of the cold clean list is below the second threshold, it is then judged whether the size of the cold dirty list exceeds the second threshold; if so, data pages are evicted from the cold dirty list.
Only when the sizes of both the cold clean list and the cold dirty list are below the second threshold is evicting data from the hot list considered. The storage amounts of the cold clean list and the cold dirty list in the buffer are then both small, so the demotion processing of the hot list can be skipped and data pages evicted directly from the hot list. All of the above evictions are carried out from the head of the respective list, and an evicted page is handled in one of two ways: a clean page is deleted directly, while a dirty page must be written back to the flash memory.
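The dual-threshold victim selection described above can be sketched as a small decision function (illustrative only; list sizes are passed as plain integers and the function and parameter names are hypothetical):

```python
def choose_victim_list(clean_len, dirty_len, t1, t2):
    """Dual-threshold victim selection with first threshold t1 > second
    threshold t2. Returns which list the next victim is taken from:
    'clean', 'dirty', or 'hot'. Eviction itself pops from the head of
    the chosen list; clean victims are simply dropped, dirty victims
    are written back to flash first."""
    assert t1 > t2
    if clean_len > t1:        # cold clean list very large: cheapest victims
        return "clean"
    if dirty_len > t1:        # cold dirty list very large: long-resident dirty data
        return "dirty"
    if clean_len > t2:        # moderate amounts: still prefer cold clean
        return "clean"
    if dirty_len > t2:        # then cold dirty
        return "dirty"
    return "hot"              # both cold lists small: evict from the hot list
```

Because clean pages are checked before dirty pages at each threshold level, write-backs are deferred as long as possible, and the two-level cutoff keeps either cold list from being drained completely before the other is considered.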
In a preferred embodiment, the step of evicting data pages from the hot list includes:
in order from the head of the hot list to its tail, judging in turn whether the life value of each data page in the hot list is less than the predetermined value or whether its access count is 1; if so, evicting the data page; if not, decrementing the access count of the data page by 1. That is, data pages are evicted directly by life value, without demotion processing; the calculation of the life value is as in the above description of demotion processing and is not repeated here.
In addition, in cache management, when the numbers of read and write requests from users differ greatly, a further step of dynamically adjusting the cold data area sizes is needed, specifically including: when data pages are replaced from the cold clean list and the cold dirty list, recording separately the numbers of data pages replaced from the cold clean list and from the cold dirty list, denoted evict_clean_cnt and evict_dirty_cnt respectively; when the replacement ratio is greater than the write-read cost ratio of the flash memory, extending the size of the cold clean list and reducing the size of the cold dirty list; when the replacement ratio is less than the write-read cost ratio of the flash memory, reducing the size of the cold clean list and extending the size of the cold dirty list. Here the replacement ratio is the ratio of the number of pages replaced from the cold clean list to the number replaced from the cold dirty list, i.e. evict_clean_cnt/evict_dirty_cnt, and the write-read cost ratio is the ratio of the sum of the read latency and write latency of the flash memory to its read latency, i.e. (Cr+Cw)/Cr, where Cw is the write latency of the flash memory and Cr is its read latency.
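A sketch of this dynamic adjustment, assuming a hypothetical one-page adjustment step (the text says only "extend" and "reduce" without fixing a granularity) and guarding against a zero dirty-replacement count:

```python
def adjust_cold_sizes(evict_clean_cnt, evict_dirty_cnt,
                      clean_size, dirty_size, cw, cr, step=1):
    """Shift capacity between the cold clean and cold dirty lists.

    replacement ratio     = evict_clean_cnt / evict_dirty_cnt
    write-read cost ratio = (Cr + Cw) / Cr

    Growing one list by `step` pages at the other's expense is an
    assumed granularity for illustration."""
    if evict_dirty_cnt == 0:
        return clean_size, dirty_size        # ratio undefined; leave sizes alone
    ratio = evict_clean_cnt / evict_dirty_cnt
    cost_ratio = (cr + cw) / cr
    if ratio > cost_ratio:
        # clean pages are being replaced disproportionately often:
        # give the clean list more room
        return clean_size + step, dirty_size - step
    if ratio < cost_ratio:
        return clean_size - step, dirty_size + step
    return clean_size, dirty_size
```

Comparing against (Cr+Cw)/Cr rather than 1 biases the balance point by the write-back penalty: a dirty eviction costs an extra flash write, so the clean list is allowed to turn over proportionally faster before it earns more capacity.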
In addition, in the prior art data pages are replaced only after the usage of the buffer has reached the replacement threshold, so new data pages cannot be handled in time, which can block user accesses. Therefore, in a preferred embodiment of the present invention, a multithreaded implementation is used, including a main thread and one sub-thread. The sub-thread performs the demotion processing, the replacement of data pages from the buffer, and the dynamic adjustment of the cold data area sizes; after receiving an access request, the main thread judges whether the sub-thread needs to be called and activates it. The judgment specifically includes one or more of: judging whether the usage of the buffer has reached the replacement threshold, judging whether data pages need to be demoted from the hot list to the cold clean list or the cold dirty list, and judging the relation between the replacement ratio and the write-read cost ratio. After the main thread judges that the sub-thread needs to be called, the sub-thread performs the corresponding tasks until the main thread suspends it. In the concrete execution, the data pages of the buffer and some management data are shared between the main thread and the sub-thread; to ensure the consistency of this shared data, a mutex lock can be used, avoiding the inconsistencies that multithreaded operation might cause.
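The main-thread/sub-thread pattern with a mutex over shared cache state might be sketched as follows (illustrative; the class name and trigger predicate are hypothetical, and `threading.Event` stands in for the call/activate/suspend mechanism described above):

```python
import threading

class CacheMaintainer:
    """Background sub-thread pattern: the main thread wakes the sub-thread
    when maintenance (demotion / replacement / size adjustment) is needed;
    a mutex protects the shared list state."""

    def __init__(self, do_maintenance):
        self.lock = threading.Lock()        # guards shared cache state
        self.wake = threading.Event()       # main thread -> sub-thread signal
        self.stop = False
        self.do_maintenance = do_maintenance
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def _run(self):
        while True:
            self.wake.wait()                # sleep until activated
            self.wake.clear()
            if self.stop:
                return
            with self.lock:                 # keep shared data consistent
                self.do_maintenance()

    def on_access(self, needs_maintenance):
        # Main thread: after serving a request, activate the sub-thread
        # if any trigger condition (threshold reached, demotion needed,
        # ratio out of balance) holds.
        if needs_maintenance:
            self.wake.set()

    def shutdown(self):
        self.stop = True
        self.wake.set()
        self.worker.join()
```

The main thread only sets a flag, so request handling never waits for a full maintenance pass; the same lock would be taken briefly by request-path code that touches the lists.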
The management method for the flash memory cache area of the embodiment of the present invention has been described in detail above. In addition, the present invention also provides a management system for the flash memory cache area implementing the above method, which, with reference to FIG. 2, includes:
a cold clean list 101, a cold dirty list 102, and a hot list 103, established separately in the buffer according to the operation requests to the flash memory; the cold clean list 101 stores data pages that have undergone only one read operation, the cold dirty list 102 stores data pages that have undergone only one write operation, and the hot list 103 stores data pages migrated from the cold clean list or the cold dirty list after being accessed again; the data pages of the above lists are provided with a read-write flag and an access count;
a demotion processing unit 110, for performing demotion processing when data pages need to be demoted from the hot list to the cold clean list or the cold dirty list, the demotion processing specifically including: in order from the head of the list to its tail, judging in turn whether the life value of each data page in the hot list is less than a predetermined value or whether its access count is 1; if so, demoting the data page to the cold clean list or the cold dirty list according to its read-write flag; if not, decrementing the access count of the data page by 1; wherein the weighting factors of the life value include the access count, the recency, and the read-write cost; the recency is the probability that the data page will be accessed again at the current time, the closer the page's last access is to the current time the higher the probability of its being accessed again, and in the read-write cost the cost value of a read operation is less than that of a write operation; and
a replacement unit 120, for replacing data pages from the buffer when the usage of the buffer reaches the replacement threshold.
Further, the system also includes a dynamic adjustment unit for dynamically adjusting the cold data area sizes, specifically including: when data pages are replaced from the cold clean list and the cold dirty list, recording separately the numbers of data pages replaced from the cold clean list and from the cold dirty list; when the replacement ratio is greater than the write-read cost ratio of the flash memory, extending the size of the cold clean list and reducing the size of the cold dirty list; when the replacement ratio is less than the write-read cost ratio of the flash memory, reducing the size of the cold clean list and extending the size of the cold dirty list; wherein the replacement ratio is the ratio of the number of data pages replaced from the cold clean list to the number replaced from the cold dirty list, and the write-read cost ratio is the ratio of the sum of the read latency and write latency of the flash memory to its read latency.
Further, the replacement unit 120 includes:
a judging unit, for judging whether the usage of the buffer reaches the replacement threshold;
a first replacement unit, for evicting data pages from the head of the cold clean list when the usage of the buffer reaches the replacement threshold and the size of the cold clean list exceeds a first threshold;
a second replacement unit, for evicting data pages from the head of the cold dirty list when the usage of the buffer reaches the replacement threshold, the size of the cold clean list is below the first threshold, and the size of the cold dirty list exceeds the first threshold;
a third replacement unit, for evicting data pages from the head of the cold clean list when the usage of the buffer reaches the replacement threshold, the sizes of the cold clean list and the cold dirty list are both below the first threshold, and the size of the cold clean list exceeds a second threshold;
a fourth replacement unit, for evicting data pages from the head of the cold dirty list when the usage of the buffer reaches the replacement threshold, the sizes of the cold clean list and the cold dirty list are both below the first threshold, the size of the cold clean list is below the second threshold, and the size of the cold dirty list exceeds the second threshold, wherein the first threshold is greater than the second threshold; and
a fifth replacement unit, for evicting data pages from the hot list when the usage of the buffer reaches the replacement threshold and the sizes of the cold clean list and the cold dirty list are both below the second threshold.
Further, in the fifth replacement unit, in order from the head of the hot list to its tail, it is judged in turn whether the life value of each data page in the hot list is less than the predetermined value or whether its access count is 1; if so, the data page is evicted; if not, the access count of the data page is decremented by 1.
Further, the system also includes a main thread and a sub-thread; the sub-thread performs the demotion processing, the replacement of data pages from the buffer, and the dynamic adjustment of the cold data area sizes, and the main thread, after receiving an access request, judges whether the sub-thread needs to be called and activates the sub-thread.
Further, the recency R is calculated from Tr, the time at which the data page was last accessed, Tf, the time at which the data page was first accessed, and Tc, the current time.
Further, the cost value of a write operation is defined in terms of Cw, the write latency of the flash memory, and Cr, its read latency, and the cost value during a read operation is 1.
Each embodiment in this specification is described in a progressive manner; identical or similar parts among the embodiments can refer to one another, and each embodiment focuses on its differences from the others. Since the system embodiment is substantially similar to the method embodiment, its description is relatively simple, and the relevant parts can refer to the corresponding description of the method embodiment. The system embodiment described above is only schematic: the modules or units illustrated as separate components may or may not be physically separate, and may or may not be physical units, that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the scheme of this embodiment, which those of ordinary skill in the art can understand and implement without creative effort.
The above are only preferred embodiments of the present invention. Although the present invention is disclosed above with preferred embodiments, they do not limit the present invention. Any person skilled in the art may, without departing from the scope of the technical scheme of the present invention, use the methods and technical content disclosed above to make many possible changes and modifications to the technical scheme of the present invention, or revise it into equivalent embodiments of equivalent variation. Therefore, any simple amendment, equivalent change, or modification made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical scheme of the present invention, still falls within the scope of protection of the technical scheme of the present invention.

Claims (14)

  1. A management method for a flash memory cache area, characterized by comprising:
    establishing, according to the access characteristics of the flash memory, a cold clean list, a cold dirty list, and a hot list separately in the buffer, the cold clean list storing data pages that have undergone only one read operation, the cold dirty list storing data pages that have undergone only one write operation, the hot list storing data pages migrated from the cold clean list or the cold dirty list after being accessed again, and the data pages of the above lists being provided with a read-write flag and an access count;
    performing demotion processing when data pages need to be demoted from the hot list to the cold clean list or the cold dirty list, the demotion processing specifically comprising: in order from the head of the list to its tail, judging in turn whether the life value of each data page in the hot list is less than a predetermined value or whether its access count is 1; if so, demoting the data page to the cold clean list or the cold dirty list according to its read-write flag; if not, decrementing the access count of the data page by 1; wherein the weighting factors of the life value comprise the access count, the recency, and the read-write cost, the recency being the probability that the data page will be accessed again at the current time, the closer the page's last access is to the current time the higher the probability of its being accessed again, and the cost value of a read operation in the read-write cost being less than the cost value of a write operation; and
    replacing data pages from the buffer when the usage of the buffer reaches a replacement threshold.
  2. The management method according to claim 1, characterized by further comprising dynamically adjusting the cold data area sizes, specifically comprising: when data pages are replaced from the cold clean list and the cold dirty list, recording separately the numbers of data pages replaced from the cold clean list and from the cold dirty list; when the replacement ratio is greater than the write-read cost ratio of the flash memory, extending the size of the cold clean list and reducing the size of the cold dirty list; and when the replacement ratio is less than the write-read cost ratio of the flash memory, reducing the size of the cold clean list and extending the size of the cold dirty list; wherein the replacement ratio is the ratio of the number of data pages replaced from the cold clean list to the number replaced from the cold dirty list, and the write-read cost ratio is the ratio of the sum of the read latency and write latency of the flash memory to its read latency.
  3. The management method according to claim 1, characterized in that the step of replacing data pages from the buffer when the usage of the buffer reaches the replacement threshold comprises:
    when the usage of the buffer reaches the replacement threshold, judging whether the size of the cold clean list exceeds a first threshold, and if so, evicting data pages from the head of the cold clean list;
    if not, judging whether the size of the cold dirty list exceeds the first threshold, and if so, evicting data pages from the head of the cold dirty list;
    if not, judging whether the size of the cold clean list exceeds a second threshold, and if so, evicting data pages from the head of the cold clean list;
    if not, judging whether the size of the cold dirty list exceeds the second threshold, and if so, evicting data pages from the head of the cold dirty list, wherein the first threshold is greater than the second threshold;
    if not, evicting data pages from the hot list.
  4. The management method according to claim 3, characterized in that the step of evicting data pages from the hot list comprises:
    in order from the head of the hot list to its tail, judging in turn whether the life value of each data page in the hot list is less than the predetermined value or whether its access count is 1; if so, evicting the data page; if not, decrementing the access count of the data page by 1.
  5. The management method according to claim 4, characterized in that the management method is implemented with multiple threads, including a main thread and one sub-thread; the sub-thread performs the demotion processing, the replacement of data pages from the buffer, and the dynamic adjustment of the cold data area sizes, and the main thread, after receiving an access request, judges whether the sub-thread needs to be called and activates the sub-thread.
  6. The management method according to any one of claims 1-5, characterized in that the recency R is calculated from Tr, the time at which the data page was last accessed, Tf, the time at which the data page was first accessed, and Tc, the current time.
  7. The management method according to any one of claims 1-5, characterized in that the cost value of a write operation is defined in terms of Cw, the write latency of the flash memory, and Cr, its read latency, and the cost value during a read operation is 1.
  8. A management system for a flash memory cache area, characterized by comprising:
    a cold clean list, a cold dirty list, and a hot list, established separately in the buffer according to the access characteristics of the flash memory, the cold clean list storing data pages that have undergone only one read operation, the cold dirty list storing data pages that have undergone only one write operation, the hot list storing data pages migrated from the cold clean list or the cold dirty list after being accessed again, and the data pages of the above lists being provided with a read-write flag and an access count;
    a demotion processing unit, for performing demotion processing when data pages need to be demoted from the hot list to the cold clean list or the cold dirty list, the demotion processing specifically comprising: in order from the head of the list to its tail, judging in turn whether the life value of each data page in the hot list is less than a predetermined value or whether its access count is 1; if so, demoting the data page to the cold clean list or the cold dirty list according to its read-write flag; if not, decrementing the access count of the data page by 1; wherein the weighting factors of the life value comprise the access count, the recency, and the read-write cost, the recency being the probability that the data page will be accessed again at the current time, the closer the page's last access is to the current time the higher the probability of its being accessed again, and the cost value of a read operation in the read-write cost being less than the cost value of a write operation; and
    a replacement unit, for replacing data pages from the buffer when the usage of the buffer reaches a replacement threshold.
  9. The management system according to claim 8, characterized by further comprising: a dynamic adjustment unit for dynamically adjusting the cold data area sizes, specifically comprising: when data pages are replaced from the cold clean list and the cold dirty list, recording separately the numbers of data pages replaced from the cold clean list and from the cold dirty list; when the replacement ratio is greater than the write-read cost ratio of the flash memory, extending the size of the cold clean list and reducing the size of the cold dirty list; and when the replacement ratio is less than the write-read cost ratio of the flash memory, reducing the size of the cold clean list and extending the size of the cold dirty list; wherein the replacement ratio is the ratio of the number of data pages replaced from the cold clean list to the number replaced from the cold dirty list, and the write-read cost ratio is the ratio of the sum of the read latency and write latency of the flash memory to its read latency.
  10. The management system according to claim 8, characterized in that the replacement unit comprises:
    a judging unit, for judging whether the usage of the buffer reaches the replacement threshold;
    a first replacement unit, for evicting data pages from the head of the cold clean list when the usage of the buffer reaches the replacement threshold and the size of the cold clean list exceeds a first threshold;
    a second replacement unit, for evicting data pages from the head of the cold dirty list when the usage of the buffer reaches the replacement threshold, the size of the cold clean list is below the first threshold, and the size of the cold dirty list exceeds the first threshold;
    a third replacement unit, for evicting data pages from the head of the cold clean list when the usage of the buffer reaches the replacement threshold, the sizes of the cold clean list and the cold dirty list are both below the first threshold, and the size of the cold clean list exceeds a second threshold;
    a fourth replacement unit, for evicting data pages from the head of the cold dirty list when the usage of the buffer reaches the replacement threshold, the sizes of the cold clean list and the cold dirty list are both below the first threshold, the size of the cold clean list is below the second threshold, and the size of the cold dirty list exceeds the second threshold, wherein the first threshold is greater than the second threshold; and
    a fifth replacement unit, for evicting data pages from the hot list when the usage of the buffer reaches the replacement threshold and the sizes of the cold clean list and the cold dirty list are both below the second threshold.
  11. The management system according to claim 10, characterized in that, in the fifth replacement unit, in order from the head of the hot list to its tail, it is judged in turn whether the life value of each data page in the hot list is less than the predetermined value or whether its access count is 1; if so, the data page is evicted; if not, the access count of the data page is decremented by 1.
  12. The management system according to claim 8, characterized by further comprising: a main thread and one sub-thread, the sub-thread performing the demotion processing, the replacement of data pages from the buffer, and the dynamic adjustment of the cold data area sizes, and the main thread, after receiving an access request, judging whether the sub-thread needs to be called and activating the sub-thread.
  13. The management system according to any one of claims 8-12, characterized in that the recency R is calculated from Tr, the time at which the data page was last accessed, Tf, the time at which the data page was first accessed, and Tc, the current time.
  14. The management system according to any one of claims 8-12, characterized in that the cost value of a write operation is defined in terms of Cw, the write latency of the flash memory, and Cr, its read latency, and the cost value during a read operation is 1.
CN201610324044.9A 2016-05-16 2016-05-16 Management method and system for flash memory cache region Active CN107391398B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610324044.9A CN107391398B (en) 2016-05-16 2016-05-16 Management method and system for flash memory cache region

Publications (2)

Publication Number Publication Date
CN107391398A true CN107391398A (en) 2017-11-24
CN107391398B CN107391398B (en) 2020-04-14

Family

ID=60338602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610324044.9A Active CN107391398B (en) 2016-05-16 2016-05-16 Management method and system for flash memory cache region

Country Status (1)

Country Link
CN (1) CN107391398B (en)

CN114896177A (en) * 2022-05-05 2022-08-12 北京骏德时空科技有限公司 Data storage management method, apparatus, device, medium and product

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110173395A1 (en) * 2010-01-12 2011-07-14 International Business Machines Corporation Temperature-aware buffered caching for solid state storage
CN102156753A (en) * 2011-04-29 2011-08-17 中国人民解放军国防科学技术大学 Data page caching method for file system of solid-state hard disc
CN103984736A (en) * 2014-05-21 2014-08-13 西安交通大学 Efficient buffer management method for NAND flash memory database system
CN104090852A (en) * 2014-07-03 2014-10-08 华为技术有限公司 Method and equipment for managing hybrid cache

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019136982A1 (en) * 2018-01-12 2019-07-18 江苏华存电子科技有限公司 Analyzer for cold and hot data of flash memory and analysis method
CN108062278A (en) * 2018-01-12 2018-05-22 江苏华存电子科技有限公司 A kind of cold and hot data-analyzing machine of flash memory and analysis method
CN108762664A (en) * 2018-02-05 2018-11-06 杭州电子科技大学 A kind of solid state disk page grade buffer queue management method
CN108762664B (en) * 2018-02-05 2021-03-16 杭州电子科技大学 Solid state disk page-level cache region management method
CN109857680A (en) * 2018-11-21 2019-06-07 杭州电子科技大学 A kind of LRU flash cache management method based on dynamic page weight
CN111506524A (en) * 2019-01-31 2020-08-07 华为技术有限公司 Method and device for eliminating and preloading data pages in database
CN111506524B (en) * 2019-01-31 2024-01-30 华为云计算技术有限公司 Method and device for eliminating and preloading data pages in database
CN111736753A (en) * 2019-03-25 2020-10-02 贵州白山云科技股份有限公司 Persistent cache method and device and computer equipment
CN110032671A (en) * 2019-04-12 2019-07-19 北京百度网讯科技有限公司 Processing method, device, computer equipment and the storage medium of user trajectory information
CN110502452B (en) * 2019-07-12 2022-03-29 华为技术有限公司 Method and device for accessing mixed cache in electronic equipment
CN110502452A (en) * 2019-07-12 2019-11-26 华为技术有限公司 Access the method and device of the hybrid cache in electronic equipment
CN110888600A (en) * 2019-11-13 2020-03-17 西安交通大学 Buffer area management method for NAND flash memory
CN111159066A (en) * 2020-01-07 2020-05-15 杭州电子科技大学 Dynamically-adjusted cache data management and elimination method
CN113485642A (en) * 2021-07-01 2021-10-08 维沃移动通信有限公司 Data caching method and device
CN113688160A (en) * 2021-09-08 2021-11-23 北京沃东天骏信息技术有限公司 Data processing method, processing device, electronic device and storage medium
CN114896177A (en) * 2022-05-05 2022-08-12 北京骏德时空科技有限公司 Data storage management method, apparatus, device, medium and product

Also Published As

Publication number Publication date
CN107391398B (en) 2020-04-14

Similar Documents

Publication Publication Date Title
CN107391398A (en) Management method and system for flash memory cache region
CN107193646B (en) High-efficiency dynamic page scheduling method based on mixed main memory architecture
CN103777905B (en) Software-defined fusion storage method for solid-state disc
CN102760101B (en) SSD-based (Solid State Disk) cache management method and system
CN103257935B (en) A kind of buffer memory management method and application thereof
CN110795363B (en) Hot page prediction method and page scheduling method of storage medium
CN106528454B (en) A kind of memory system caching method based on flash memory
US20120254523A1 (en) Storage system which utilizes two kinds of memory devices as its cache memory and method of controlling the storage system
CN110888600B (en) Buffer area management method for NAND flash memory
CN110413537B (en) Flash translation layer facing hybrid solid state disk and conversion method
CN108845957B (en) Replacement and write-back self-adaptive buffer area management method
CN107247675B (en) A kind of caching selection method and system based on classification prediction
CN106527988A (en) SSD (Solid State Drive) data migration method and device
CN105389135B (en) A kind of solid-state disk inner buffer management method
CN108762664A (en) A kind of solid state disk page grade buffer queue management method
CN106201348A (en) The buffer memory management method of non-volatile memory device and device
CN111580754B (en) Write-friendly flash memory solid-state disk cache management method
KR101166803B1 (en) System including volatile memory and non-volatile memory and processing mehthod thereof
CN109739780A (en) Dynamic secondary based on the mapping of page grade caches flash translation layer (FTL) address mapping method
CN112783448B (en) Data storage method and system
CN110347338A (en) Mix internal storage data exchange and processing method, system and readable storage medium storing program for executing
Debnath et al. Large block CLOCK (LB-CLOCK): A write caching algorithm for solid state disks
CN106909323B (en) Page caching method suitable for DRAM/PRAM mixed main memory architecture and mixed main memory architecture system
CN109478164A (en) For storing the system and method for being used for the requested information of cache entries transmission
CN102097128B (en) Self-adaptive buffer area replacement method based on flash memory

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant