CN109857680A - LRU flash cache management method based on dynamic page weight - Google Patents

LRU flash cache management method based on dynamic page weight

Info

Publication number
CN109857680A
CN109857680A (application number CN201811394983.6A)
Authority
CN
China
Prior art keywords
page
request
weight
workspace
exchange area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811394983.6A
Other languages
Chinese (zh)
Other versions
CN109857680B (en)
Inventor
袁友伟
陶文鹏
张锦涛
贾刚勇
鄢腊梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dragon Totem Technology Hefei Co ltd
Original Assignee
Hangzhou Electronic Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Electronic Science and Technology University
Priority to CN201811394983.6A priority Critical patent/CN109857680B/en
Publication of CN109857680A publication Critical patent/CN109857680A/en
Application granted granted Critical
Publication of CN109857680B publication Critical patent/CN109857680B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses an LRU flash cache management method based on dynamic page weight. Step S1: read a page request from the request queue and identify and classify its request type and the region it resides in. Step S2: judge the buffer state on insertion, select the victim page using the dynamic-page-weight LRU method, adjust the buffer state, and execute the page request. In this technical solution the buffer is divided into a workspace and an exchange area; cold, hot, dirty, and clean pages are distinguished; the page request type and buffer region are determined; and the victim page is selected with the dynamic-page-weight LRU method to complete the request. The solution effectively reduces the write cost and latency of flash cache reads and writes, while greatly improving the flash cache hit rate.

Description

An LRU flash cache management method based on dynamic page weight
Technical field
The present invention relates to the field of storage systems, and in particular to an LRU flash cache management method based on dynamic page weight.
Background technique
NAND flash memory is widely used in enterprise applications thanks to its high performance, small size, and low energy consumption. However, with the continuous development of big data technology in recent years, the processing and analysis of massive data place stricter requirements on the data throughput and I/O latency of storage systems, while NAND flash suffers from high write cost, asymmetric I/O latency, and block-erase constraints that prevent it from completely replacing hard-disk storage.
Combining caching technology with the storage device can effectively reduce I/O latency and the asymmetry between storage layers. However, most existing buffer management methods are optimized for hard-disk storage, and buffer management methods designed for flash are lacking. Applied to flash, traditional buffer management suffers from low hit rates, and the high latency and high write cost described above remain unsolved.
Therefore, in view of the above problems, the present invention proposes a technical solution that overcomes the deficiencies of the prior art.
Summary of the invention
To remedy the shortcomings of current flash buffer management methods, the present invention proposes an LRU flash management method based on dynamic page weight that improves the buffer hit rate, reduces flash write operations, and exploits the spatial and temporal access locality of the storage device to maintain an efficient buffer.
To solve the problems of the prior art, the technical scheme of the invention is as follows:
An LRU flash management method based on dynamic page weight, carried out in two stages, with the following specific steps:
Step S1: read a page request from the request queue and identify and classify its request type and the region it resides in;
Step S2: judge the buffer state on insertion, select the victim page using the dynamic-page-weight LRU method, adjust the buffer state, and execute the page request. Step S1 further comprises:
Step S11: preprocess the page request queue and judge whether the page corresponding to the request is stored in the buffer; on a buffer hit execute step S12, on a buffer miss execute step S2. Step S11 further comprises:
Step S111: preprocess the page request queue. Let a page request be R, containing a request number R_pid and a page request mode R_am ∈ {read, write}. The request queue S can be expressed as:
S = {R_1, R_2, ..., R_j, ..., R_n}, 1 ≤ j ≤ n
The request queue S is the set of page requests R_pid, where n is the total number of requests in S and j is the request index.
Step S112: let a page be P and the page set U = {P_1, P_2, ..., P_i, ..., P_m}, 1 ≤ i ≤ m ≤ n, where m is the total number of pages in U and i is the page index. Judge whether the page P_i corresponding to request R_j in queue S is held in the buffer; on a buffer hit execute step S12, on a buffer miss execute step S2.
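As an illustration only, the queue pre-check of steps S111-S112 can be sketched in Python. The names (`Request`, `dispatch`) and the sample data are assumptions for the sketch, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class Request:
    pid: int          # R_pid: number of the requested page
    mode: str         # R_am in {"read", "write"}

# Request queue S = {R_1, ..., R_n}
S = [Request(3, "read"), Request(7, "write"), Request(3, "write")]

# Page set U: pages currently held in the buffer, keyed by page number
U = {3: "page-3 data", 5: "page-5 data"}

def dispatch(request, buffer_pages):
    """Step S112: buffer hit -> go to S12, buffer miss -> go to S2."""
    return "S12" if request.pid in buffer_pages else "S2"

print([dispatch(r, U) for r in S])   # ['S12', 'S2', 'S12']
```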
Step S12: let the request set of the hit buffer page P_i be AS_i = {AS_1, ..., AS_k, ..., AS_o},
where o is the total number of requests and k the request index. Judge whether the hit page P_i is in the workspace or the exchange area; if P_i is in the workspace execute step S13, if in the exchange area execute step S2.
Step S13: move the page to the most-recently-used (MRU) region of the workspace, adjust the workspace contents according to the most-recently-used page algorithm, and complete the page request.
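A minimal sketch of the workspace hit handling in step S13, assuming an `OrderedDict` stands in for the workspace (the patent does not specify the concrete data structure):

```python
from collections import OrderedDict

# Workspace kept in LRU -> MRU order; keys are page numbers.
workspace = OrderedDict([(1, "a"), (2, "b"), (3, "c")])

def on_workspace_hit(ws, pid):
    """Step S13: transfer the hit page to the MRU end of the workspace."""
    ws.move_to_end(pid)
    return list(ws)              # current LRU -> MRU order

print(on_workspace_hit(workspace, 1))   # [2, 3, 1]
```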
Step S2: judge the buffer state on insertion, select the victim page using the dynamic-page-weight LRU method, adjust the buffer state, and execute the page request. This step further comprises:
Step S21: judge whether the page has been read. If not, execute step S22; if it has, read the data from flash and execute step S23.
Step S22: judge whether the workspace is full. If full, execute step S23; if not, add the page directly to the workspace and execute step S25.
Step S23: invoke the least-recently-updated page algorithm in the exchange area to select and remove the victim page. If the page comes from the exchange area, execute step S24; if it comes from flash, add it directly to the exchange area and execute step S25. Step S23 further comprises:
Step S231: judge whether the page comes from the exchange area. If so, return it directly to the LRU region of the workspace and execute step S234; otherwise proceed to step S232.
Step S232: compute the weight P_weight of every page in the exchange-area page range [0, w] with the dynamic-page-weight update algorithm, and generate Map_weight to store the updated weight of every page, where w is the total page count, P_weight is the weight of a page under the dynamic page weight algorithm, and Map_weight is the dynamic page weight map.
Step S233: traverse every page in Map_weight, find the page with the smallest weight, mark it as the victim page, and return the victim page address.
Step S234: judge the source of the requested page. If it comes from the exchange area, execute step S24; if it comes from flash, evict the victim page from the exchange area, insert the requested page into the exchange area, and execute step S25.
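Steps S232-S233 can be sketched as follows. `weight_of` here is a stand-in for the dynamic-page-weight formula, which the text above does not reproduce, so the weights in the example are illustrative:

```python
def select_victim(exchange_pages, weight_of):
    """Steps S232-S233: build Map_weight over the exchange area, then
    return the minimum-weight page as the victim."""
    map_weight = {pid: weight_of(pid) for pid in exchange_pages}  # S232
    victim = min(map_weight, key=map_weight.get)                  # S233
    return victim, map_weight

pages = [10, 11, 12]
# Illustrative fixed weights standing in for the real update algorithm.
victim, _ = select_victim(pages, weight_of=lambda pid: {10: 0.7, 11: 0.2, 12: 0.9}[pid])
print(victim)   # 11
```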
Step S24: use the dynamic page weight to select the victim page in the workspace, evict the selected victim to the exchange area, and add the incoming page to the workspace.
Step S241: initialization. Let the target page be P_i, the requested page address R_pid, and the k-th request on the page AS_k; S is the set of all current requests R_i. Compute the time-interval limit P_i^TLI of the current page P_i.
P_i^TLI is the average arrival interval of requests to P_i; it estimates the temporal locality of P_i, and based on P_i^TLI the arrival times of the other pages in page set U can be predicted,
where m is the total number of buffer pages and o is the current page request number.
Step S242: the read-operation cost L_r and the write-operation cost L_w are defined over the accesses,
where g is the current operation number and q is the number of the last accessed operation.
For clean pages (CC, HC) and dirty pages (CD, HD), the page delay degree P_i^EC is computed with different formulas.
Let AS_latest be the last request to arrive at P_i and AS_o the currently arriving request; the page freshness P_i^RE is derived from them.
From the page freshness P_i^RE, the page delay degree P_i^EC, and the time-interval limit P_i^TLI, the dynamic page weight P_i^W of workspace page P_i is computed,
where α is a configured weight coefficient, P_am is the page read/write state, o is the current page request number, m is the total page count, and q is the most recent access operation.
Compute the dynamic page weights of all pages in the workspace and take the minimum.
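The weight formulas appear only as figures in the original patent and are not reproduced in this text; the sketch below therefore only illustrates the *shape* described above — P_i^W combines the freshness (RE), delay degree (EC), and time-interval limit (TLI), mixed by a coefficient α. The linear combination used here is an assumption, not the patent's actual equation:

```python
def dynamic_weight(re, ec, tli, alpha=0.5):
    """Assumed linear combination of freshness, delay degree, and
    time-interval limit; the patent's real formula is not given here."""
    return alpha * re + (1 - alpha) * ec + tli

# Workspace pages with illustrative (freshness, delay degree, interval limit)
pages = {"P1": (0.9, 0.4, 0.1), "P2": (0.2, 0.3, 0.05)}
weights = {pid: dynamic_weight(*v) for pid, v in pages.items()}
victim = min(weights, key=weights.get)   # minimum-weight page is evicted
print(victim)   # P2
```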
Step S243: mark the minimum-weight page as the victim, evict it to the exchange area, and execute step S25.
Step S25: the current page request completes; execute the next request in the page request queue.
By adopting the above technical scheme, the present invention has the following technical features:
1. This method uses two page-replacement regions, a workspace and an exchange area, and distinguishes pages by access mode and access frequency. Pages are divided into four types: cold-clean (cold-clean), cold-dirty (cold-dirty), hot-clean (hot-clean), and hot-dirty (hot-dirty).
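The four-way classification above can be sketched as follows; the access counter and hotness threshold are illustrative assumptions, since the text does not define how hot/cold is measured:

```python
def classify(access_count, dirty, hot_threshold=2):
    """Classify a page as cold/hot x clean/dirty (CC, CD, HC, HD).
    The threshold on access_count is an assumed hotness criterion."""
    temp = "hot" if access_count >= hot_threshold else "cold"
    state = "dirty" if dirty else "clean"
    return f"{temp}-{state}"

print(classify(1, False))  # prints "cold-clean"
print(classify(5, True))   # prints "hot-dirty"
```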
2. Unlike conventional buffer-queue management methods, this method uses a new page-replacement scheme. During page replacement the buffer is not polluted by sequential page scans, so the buffer retains enough clean pages and is never filled with dirty pages.
3. This method proposes a novel dynamic page weight algorithm that considers temporal locality, operation cost, and page timeliness. During buffer-queue management the operation cost can be adjusted according to the read/write latencies of different flash devices.
Compared with the prior art, the present invention has the following technical effects:
(1) Low latency: the invention distinguishes cold pages from hot pages. When the buffer is full, hot pages are evicted first and cold pages receive delayed eviction, effectively improving system execution efficiency and reducing latency. Compared with traditional LRU methods, the invention reduces latency by about 18.8%.
(2) High hit ratio: the invention distinguishes dirty pages from clean pages. Clean pages are preferentially retained in the buffer, avoiding the situation where dirty pages fill the buffer, which improves the hit ratio. Compared with traditional LRU methods, the invention improves the hit ratio by about 8.3%.
(3) Low write cost: the invention improves the page migration method. When the workspace is full, victim pages in the workspace are evicted to the exchange area rather than evicted directly, reducing write cost in the system. Compared with traditional LRU methods, the invention reduces write cost by about 22.6%.
Detailed description of the invention
Fig. 1 is the flow chart of the LRU flash cache management method based on dynamic page weight;
Fig. 2 is the page transfer diagram of the method;
Fig. 3 is a schematic diagram of data exchange between the workspace, the exchange area, and flash memory;
Fig. 4 shows the read/write overhead for synthetic trace 1;
Fig. 5 shows the read/write overhead for synthetic trace 2;
Fig. 6 shows the read/write overhead for synthetic trace 3;
Fig. 7 shows the read/write overhead for synthetic trace 4;
Fig. 8 compares the hit rate of the invention with other methods;
Fig. 9 compares the write cost of the invention with other methods;
Fig. 10 compares the latency of the invention with other methods.
Specific embodiment
The flash cache management method based on dynamic page weight provided by the invention is further described below with reference to the accompanying drawings.
Referring to Fig. 1, which shows the flow chart of the invention, a specific embodiment comprises the following steps:
Step S1: read a page request from the request queue and identify and classify its request type and the region it resides in, specifically comprising the following steps:
Step S11: preprocess the page request queue. Let a page request be R, containing a request number R_pid and a page request mode R_am ∈ {read, write}. The request queue S can be expressed as:
S = {R_1, R_2, ..., R_j, ..., R_n}, 1 ≤ j ≤ n
The request queue S is the set of page requests R_pid, where n is the total number of requests in S and j is the request index. Let a page be P; the page set U is expressed as:
U = {P_1, P_2, ..., P_i, ..., P_m}, 1 ≤ i ≤ m ≤ n
A page request targets a page P, where m is the total number of pages in U and i is the page index. Judge whether the page P_i corresponding to request R_j in queue S is held in the buffer; on a buffer hit execute step S12, on a buffer miss execute step S2.
Step S12: let the request set of the hit buffer page P_i be AS_i, expressed as:
AS_i = {AS_1, AS_2, ..., AS_k, ..., AS_o}, 1 ≤ k ≤ o ≤ n
AS_i is the set of requests to page P_i, where o is the total number of requests and k the request index. Judge whether the hit page P_i is in the workspace or the exchange area; if P_i is in the workspace execute step S13, if in the exchange area execute step S2. Page hits and transfers are shown in Fig. 2. An upper-layer request contains the logical address and the read/write mode; a hash function over hash buckets determines whether the requested page is in the buffer, and the buffer is updated accordingly. The buffer is divided into a workspace and an exchange area, and buffered pages are divided into four kinds: hot-clean (HC), hot-dirty (HD), cold-clean (CC), and cold-dirty (CD); pages are read or evicted according to their type and location.
Step S13: move the page to the MRU region of the workspace, update the page state in the workspace, and complete the page request.
Step S2: judge the buffer state on insertion, select the victim page using the dynamic-page-weight LRU method, adjust the buffer state, and execute the page request. This step further comprises:
Step S21: judge whether page request R is a read request. If not, execute step S22; if it is, read the data from flash and execute step S25.
Step S22: judge whether the workspace is full. If full, execute step S23; if not, add the page directly to the workspace and execute step S24.
Step S23: move the victim page in the exchange area to the LRU region. If the page comes from the exchange area, execute step S24; if it comes from flash, add it directly to the exchange area and execute step S25. The data exchange between the workspace, the exchange area, and flash memory is shown in Fig. 3: the flash translation layer (FTL) keeps page data by page address; victim pages of the workspace are moved to the MRU region before being evicted to flash, and victim pages of the exchange area are moved to the LRU region before being evicted to flash.
The victim-page selection process comprises the following steps:
Step S231: judge whether the page comes from the exchange area. If so, return it directly to the LRU region of the workspace and execute step S234; otherwise proceed to the next step.
Step S232: compute the weight P_weight of every page in the exchange-area page range [0, w] with the dynamic-page-weight update algorithm, and generate Map_weight to store the updated weight of every page, where w is the total page count, P_weight is the weight of a page under the dynamic page weight algorithm, and Map_weight is the dynamic page weight map.
Step S233: traverse every page in Map_weight, find the page with the smallest weight, mark it as the victim page, and return the victim page address.
Step S234: judge the source of the requested page. If it comes from the exchange area, execute step S24; if it comes from flash, evict the victim page from the exchange area, insert the requested page into the exchange area, and execute step S25.
Step S24: use the dynamic page weight to select the victim page in the workspace, evict the selected victim to the exchange area, and add the incoming page to the workspace.
The dynamic page weight algorithm comprises the following steps:
Step S241: initialization. Let the target page be P_i, the requested page address R_pid, and the k-th request on the page AS_k; S is the set of all current requests R_i. Compute the time-interval limit P_i^TLI of the current page P_i.
P_i^TLI is the average arrival interval of requests to P_i; it estimates the temporal locality of P_i, and based on P_i^TLI the arrival times of the other pages in page set U can be predicted,
where m is the total number of buffer pages and o is the current page request number.
Step S242: the read-operation cost L_r and the write-operation cost L_w are defined over the accesses,
where g is the current operation number and q is the number of the last accessed operation.
For clean pages (CC, HC) and dirty pages (CD, HD), the page delay degree P_i^EC is computed with different formulas.
Let AS_latest be the last request to arrive at P_i and AS_o the currently arriving request; the page freshness P_i^RE is derived from them.
From the page freshness P_i^RE, the page delay degree P_i^EC, and the time-interval limit P_i^TLI, the dynamic page weight P_i^W of workspace page P_i is computed,
where α is a configured weight coefficient, P_am is the page read/write state, o is the current page request number, m is the total page count, and q is the most recent access operation.
Compute the dynamic page weights of all pages in the workspace and take the minimum.
Step S243: mark the minimum-weight page as the victim, evict it to the exchange area, and execute step S25.
Step S25: the current page request R_j completes; execute the next request in page request queue S.
The performance of this method is compared with other traditional LRU methods below:
The invention is evaluated with synthetic workloads. The experimental platform is an Intel Core i7-8700K 4.3 GHz processor with 32 GB 3200 MHz RAM running 64-bit Windows 10 Pro. A 128 MB NAND flash device is simulated on Visual Studio 2015; the simulated flash block size is 128 KB with 64 pages per block, erase latency is set to 1.5 ms, endurance to 100,000 P/E cycles, and the read/write latency ratio to 1/8. Table 1 gives the detailed NAND flash configuration, compared with a physical device configuration.
Table 1. NAND flash configuration
To simulate the access patterns of real flash memory, the experiments generate four synthetic traces based on a Zipf distribution. Table 2 describes the detailed features of the four traces, where the read/write ratio is the proportion of reads and writes among all requests and the read/write weight is the proportion of read and write operations executed on a given page. Figures 4-7 show the results for the four traces; the experiments show that after 25,000 operations the read/write behavior of all traces becomes stable.
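For illustration, a synthetic trace with Zipf-distributed page popularity like those in Table 2 can be generated as follows; the skew parameter `s` and the 80/20 read ratio are assumptions, since the text does not give the trace parameters:

```python
import random

def zipf_trace(num_pages, num_requests, s=1.0, read_ratio=0.8, seed=42):
    """Generate (page_id, mode) pairs with Zipf-distributed page
    popularity; s and read_ratio are illustrative defaults."""
    rng = random.Random(seed)
    # Zipf weight of page at popularity rank r is proportional to 1/r^s.
    weights = [1.0 / (rank ** s) for rank in range(1, num_pages + 1)]
    trace = []
    for _ in range(num_requests):
        pid = rng.choices(range(num_pages), weights=weights)[0]
        mode = "read" if rng.random() < read_ratio else "write"
        trace.append((pid, mode))
    return trace

trace = zipf_trace(num_pages=100, num_requests=1000)
reads = sum(1 for _, m in trace if m == "read")
print(len(trace), reads / len(trace))   # 1000 requests, read share near 0.8
```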
Table 2. Synthetic trace features
In the experimental evaluation we compare against current state-of-the-art algorithms such as LRU, CF-LRU, and AD-LRU, and analyze the dynamic page weight comprehensively. The results in Fig. 8 show that, by distinguishing clean and dirty pages, the invention improves the hit rate by about 8.3% over traditional LRU methods. The results in Fig. 9 show that, with the workspace/exchange-area page migration method, the invention reduces write cost by about 22.6%. The results in Fig. 10 show that, by distinguishing cold and hot pages, the invention reduces latency by about 18.8%.
In this patent we propose an LRU buffer replacement method based on dynamic page weight that maintains an effective buffer and improves the overall performance of flash memory. The dynamic-page-weight LRU algorithm exploits the access locality of the flash device and distinguishes pages by access mode and frequency. The buffer is divided into two regions, a workspace and an exchange area, and pages are divided into four types. Unlike common LRU-based buffer replacement methods, the dynamic-page-weight LRU method has a dedicated page conversion scheme. A new dynamic page weight algorithm is proposed that considers temporal locality, operation cost, and page timeliness, effectively improving the buffer hit rate while reducing latency and write cost.
The above description of the embodiments is only intended to help understand the method of the present invention and its core ideas. It should be pointed out that, for those skilled in the art, improvements and modifications can be made to the invention without departing from its principles, and such improvements and modifications also fall within the protection scope of the claims of the present invention.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein can be realized in other embodiments without departing from the spirit or scope of the invention. Therefore, the invention is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (1)

1. An LRU flash cache management method based on dynamic page weight, characterized by comprising the following steps:
Step S1: read a page request from the request queue and identify and classify its request type and the region it resides in;
Step S2: judge the buffer state on insertion, select the victim page using the dynamic-page-weight LRU method, adjust the buffer state, and execute the page request;
wherein step S1 further comprises:
Step S11: preprocess the page request queue and judge whether the page corresponding to the request is stored in the buffer; on a buffer hit execute step S12, on a buffer miss execute step S2; step S11 further comprises:
Step S111: preprocess the page request queue; let a page request be R, containing a request number R_pid and a page request mode R_am ∈ {read, write}; the request queue S can be expressed as:
S = {R_1, R_2, ..., R_j, ..., R_n}, 1 ≤ j ≤ n
wherein the request queue S is the set of page requests R_pid, n is the total number of requests in S, and j is the request index;
Step S112: let a page be P and the page set U = {P_1, P_2, ..., P_i, ..., P_m}, 1 ≤ i ≤ m ≤ n, where m is the total number of pages in U and i is the page index; judge whether the page P_i corresponding to request R_j in queue S is held in the buffer; on a buffer hit execute step S12, on a buffer miss execute step S2;
Step S12: let the request set of the hit buffer page P_i be AS_i = {AS_1, ..., AS_k, ..., AS_o}, where o is the total number of requests and k the request index;
judge whether the hit page P_i is in the workspace or the exchange area; if P_i is in the workspace execute step S13, if in the exchange area execute step S2;
Step S13: move the page to the most-recently-used region of the workspace, adjust the workspace contents according to the most-recently-used page algorithm, and complete the page request;
step S2 further comprises:
Step S21: judge whether the page has been read; if not, execute step S22; if it has, read the data from flash and execute step S23;
Step S22: judge whether the workspace is full; if full, execute step S23; if not, add the page directly to the workspace and execute step S25;
Step S23: invoke the least-recently-updated page algorithm in the exchange area to select and remove the victim page; if the page comes from the exchange area execute step S24; if it comes from flash, add it directly to the exchange area and execute step S25; step S23 further comprises:
Step S231: judge whether the page comes from the exchange area; if so, return it directly to the LRU region of the workspace and execute step S234; otherwise proceed to step S232;
Step S232: compute the weight P_weight of every page in the exchange-area page range [0, w] with the dynamic-page-weight update algorithm, and generate Map_weight to store the updated weight of every page; where w is the total page count, P_weight is the weight of a page under the dynamic page weight algorithm, and Map_weight is the dynamic page weight map;
Step S233: traverse every page in Map_weight, find the page with the smallest weight, mark it as the victim page, and return the victim page address;
Step S234: judge the source of the requested page; if it comes from the exchange area, execute step S24; if it comes from flash, evict the victim page from the exchange area, insert the requested page into the exchange area, and execute step S25;
Step S24: use the dynamic page weight to select the victim page in the workspace, evict the selected victim to the exchange area, and add the incoming page to the workspace, wherein step S24 further comprises:
Step S241: initialization; let the target page be P_i, the requested page address R_pid, and the k-th request on the page AS_k; S is the set of all current requests R_i; compute the time-interval limit P_i^TLI of the current page P_i;
P_i^TLI is the average arrival interval of requests to P_i; it estimates the temporal locality of P_i, and based on P_i^TLI the arrival times of the other pages in page set U can be predicted;
where m is the total number of buffer pages and o is the current page request number;
Step S242: the read-operation cost L_r and the write-operation cost L_w are defined over the accesses, where g is the current operation number and q is the number of the last accessed operation;
for clean pages (CC, HC) and dirty pages (CD, HD), the page delay degree P_i^EC is computed with different formulas;
let AS_latest be the last request to arrive at P_i and AS_o the currently arriving request, AS_o ≤ AS_latest; the page freshness P_i^RE is derived from them;
from the page freshness P_i^RE, the page delay degree P_i^EC, and the time-interval limit P_i^TLI, the dynamic page weight P_i^W of workspace page P_i is computed,
where α is a configured weight coefficient, P_am is the page read/write state, o is the current page request number, m is the total page count, and q is the most recent access operation;
compute the dynamic page weights of all pages in the workspace and take the minimum;
Step S243: mark the minimum-weight page as the victim, evict it to the exchange area, and execute step S25;
Step S25: the current page request completes; execute the next request in the page request queue.
CN201811394983.6A 2018-11-21 2018-11-21 LRU flash memory cache management method based on dynamic page weight Active CN109857680B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811394983.6A CN109857680B (en) 2018-11-21 2018-11-21 LRU flash memory cache management method based on dynamic page weight


Publications (2)

Publication Number Publication Date
CN109857680A (en) 2019-06-07
CN109857680B (en) 2020-09-11

Family

ID=66890140

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811394983.6A Active CN109857680B (en) 2018-11-21 2018-11-21 LRU flash memory cache management method based on dynamic page weight

Country Status (1)

Country Link
CN (1) CN109857680B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103257935A (en) * 2013-04-19 2013-08-21 华中科技大学 Cache management method and application thereof
CN104145252A (en) * 2012-03-05 2014-11-12 国际商业机器公司 Adaptive cache promotions in a two level caching system
CN104699422A (en) * 2015-03-11 2015-06-10 华为技术有限公司 Determination method and determination device of cache data
CN106527966A (en) * 2015-09-10 2017-03-22 蜂巢数据有限公司 Garbage collection in ssd drives
CN107391398A (en) * 2016-05-16 2017-11-24 中国科学院微电子研究所 A kind of management method and system in flash cache area
CN107590084A (en) * 2017-08-22 2018-01-16 浙江万里学院 A kind of page level buffering area improved method based on classification policy
US20180095896A1 (en) * 2013-04-03 2018-04-05 International Business Machines Corporation Maintaining cache consistency in a cache for cache eviction policies supporting dependencies
US20180300257A1 (en) * 2016-05-31 2018-10-18 International Business Machines Corporation Using an access increment number to control a duration during which tracks remain in cache
US20180314432A1 (en) * 2016-06-06 2018-11-01 International Business Machines Corporation Invoking input/output (i/o) threads on processors to demote tracks from a cache
CN108762664A (en) * 2018-02-05 2018-11-06 杭州电子科技大学 Solid-state disk page-level buffer queue management method
CN108845957A (en) * 2018-03-30 2018-11-20 杭州电子科技大学 Adaptive buffer management method for replacement and write-back

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU HENG: "A New RDD Partition Weight Cache Replacement Algorithm in the Spark Parallel Computing Framework", Journal of Chinese Computer Systems *

Also Published As

Publication number Publication date
CN109857680B (en) 2020-09-11

Similar Documents

Publication Publication Date Title
CN107193646B (en) High-efficiency dynamic page scheduling method based on mixed main memory architecture
Jin et al. AD-LRU: An efficient buffer replacement algorithm for flash-based databases
CN104572491B (en) Read buffer management method and device based on a solid-state drive
Wang et al. Adaptive placement and migration policy for an STT-RAM-based hybrid cache
EP3414665B1 (en) Profiling cache replacement
Jiang et al. S-FTL: An efficient address translation for flash memory by exploiting spatial locality
US8601216B2 (en) Method and system for removing cache blocks
CN106528454B (en) Memory system caching method based on flash memory
CN108762664B (en) Solid state disk page-level cache region management method
Lv et al. Operation-aware buffer management in flash-based systems
CN103795781B (en) Distributed caching method based on file prediction
CN104794064A (en) Cache management method based on region heat degree
CN103136121A (en) Cache management method for solid-state disc
CN110888600B (en) Buffer area management method for NAND flash memory
CN106569732B (en) Data migration method and device
CN108845957B (en) Adaptive buffer management method for replacement and write-back
CN106775466A (en) FTL read buffer management method and device without DRAM
Wu et al. APP-LRU: A new page replacement method for PCM/DRAM-based hybrid memory systems
CN100428193C (en) Data prefetching method for use in a data storage system
CN110297787A (en) Method, device and apparatus for I/O device memory access
Sun et al. Co-active: A workload-aware collaborative cache management scheme for NVMe SSDs
Wang et al. A novel buffer management scheme based on particle swarm optimization for SSD
CN110795363A (en) Hot page prediction method and page scheduling method for storage medium
CN109478164A (en) System and method for storing requested information for cache entry transfer
CN102521161B (en) Data caching method, device and server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240112

Address after: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Dragon totem Technology (Hefei) Co.,Ltd.

Address before: 310018 Xiasha Higher Education Zone, Hangzhou, Zhejiang

Patentee before: HANGZHOU DIANZI University
