CN107451071A - Cache replacement method and system - Google Patents
Cache replacement method and system
- Publication number
- CN107451071A CN107451071A CN201710661703.2A CN201710661703A CN107451071A CN 107451071 A CN107451071 A CN 107451071A CN 201710661703 A CN201710661703 A CN 201710661703A CN 107451071 A CN107451071 A CN 107451071A
- Authority
- CN
- China
- Prior art keywords
- data
- item
- cached data
- cached item
- heap
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
- G06F12/121—Replacement control using replacement algorithms
- G06F12/122—Replacement control using replacement algorithms of the least frequently used [LFU] type, e.g. with individual count value
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0888—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using selective caching, e.g. bypass
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/45—Caching of specific data in cache memory
- G06F2212/451—Stack data
Abstract
The invention discloses a cache replacement method and system, belonging to the field of computer caching. The cache replacement method of the present invention stores cached data items in an array, uses a hash table to record each cached data item's position in the array, and assigns each cached data item an access-frequency counter. The cached data items are organised in a min-heap and sorted according to the counter values, so that the heap top is always the cached data item with the fewest accesses. When a data access request arrives, if it hits a cached data item, that item's access-frequency counter is incremented by one and the cached data items are re-sorted; if it misses, the heap-top cached data item is evicted and a new cached data item is created. The cache replacement method of the invention can quickly identify the least-frequently accessed data item, reduces the complexity of the cache structure, improves the execution efficiency of the algorithm, and has good application value.
Description
Technical field
The present invention relates to the field of computer caching, and in particular provides a cache replacement method and system.
Background technology
Caches are widely used today, both between the CPU and main memory and inside storage software. Wherever they are applied, the underlying cache replacement methods are similar. The most common cache replacement algorithms at present are FIFO (First In First Out), LRU (Least Recently Used, based on access time), and LFU (Least Frequently Used, based on access count). However, these algorithms mainly provide a guiding framework for cache replacement; the choice of implementation greatly affects execution efficiency. For example, there are many ways to implement the access counting and least-accessed eviction of the LFU algorithm, and the resulting implementations differ greatly in runtime efficiency.
The content of the invention
The technical task of the present invention is to address the above problem by providing a cache replacement method that can quickly determine the least-frequently accessed data item, reduce the complexity of the cache structure, and improve the execution efficiency of the algorithm.
A further technical task of the invention is to provide a cache replacement system.
To achieve the above object, the present invention provides the following technical scheme:
A cache replacement method based on LFU: cached data items are stored in an array, a hash table records each cached data item's position in the array, and each cached data item is assigned an access-frequency counter that records how many times the item has been hit. The cached data items are organised in a min-heap and sorted by the recorded counts, so that the heap top is always the cached data item with the fewest accesses. When a data access request arrives, if it hits a cached data item, the access-frequency counter corresponding to that cached data item is incremented by one and the cached data items are re-sorted; if it misses, the heap-top cached data item is evicted and a new cached data item is created and inserted at the heap-top position.
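The storage layout described above can be sketched in Python (the names `CacheItem`, `heap`, and `pos` are illustrative, not from the patent): the array that holds the cached data items doubles as the min-heap storage, and the hash table maps each key to the item's current index in that array, so locating an item takes O(1) time.

```python
class CacheItem:
    """One cached data item together with its access-frequency counter."""
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.count = 0  # number of times this item has been hit

# The array of cached data items doubles as the min-heap storage;
# the hash table maps each key to the item's index in the array.
heap = [CacheItem("a", "A"), CacheItem("b", "B")]
pos = {item.key: i for i, item in enumerate(heap)}

item = heap[pos["b"]]  # O(1) position lookup via the hash table
```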
Storing each cached data item's position in the array in a hash table improves lookup efficiency: when a data access request arrives, a hash lookup quickly determines whether the requested data hits a cached data item.
The min-heap data structure implements insertion, deletion, sorting, and updating of cached data items. On a cache hit, the corresponding cached data item is updated and the cached data items are re-sorted; on a miss, the heap-top cached data item is deleted directly and a new cached data item, containing the relevant information of the current data access request, is inserted at the heap top.
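As a sketch of how a min-heap can support these operations while keeping the hash table consistent (the `[count, key]` entry layout and function names are assumptions for illustration): every swap inside the heap array must also update the position table, and an entry whose counter has grown is sifted down until the heap property holds again.

```python
def swap(heap, pos, i, j):
    """Swap two heap entries and keep the position hash table in sync."""
    heap[i], heap[j] = heap[j], heap[i]
    pos[heap[i][1]] = i
    pos[heap[j][1]] = j

def sift_down(heap, pos, i):
    """Move an entry whose counter just grew down to its proper place."""
    while True:
        smallest = i
        for child in (2 * i + 1, 2 * i + 2):
            if child < len(heap) and heap[child][0] < heap[smallest][0]:
                smallest = child
        if smallest == i:
            return
        swap(heap, pos, i, smallest)
        i = smallest

# Entries are [access_count, key]; "a" is hit several times and sinks,
# so "b" (now the least-frequently used entry) becomes the new heap top.
heap = [[1, "a"], [2, "b"], [3, "c"]]
pos = {"a": 0, "b": 1, "c": 2}
heap[0][0] = 5
sift_down(heap, pos, 0)
```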
Preferably, the cache replacement method specifically includes the following steps:
S1: Store cached data items in an array, and use a hash table to record each item's position in the array;
S2: Organise the cached data items in a min-heap according to their access counts;
S3: Assign each cached data item an access-frequency counter that records the number of times the item is accessed;
S4: Sort the cached data items by access count so that the heap top is the cached data item with the fewest accesses;
S5: When a data access request arrives, re-sort the items in the min-heap according to the number of times the cached data items have been accessed.
Preferably, the operating procedure of step S5 is:
1) If the data access request hits a cached data item, increment the access-frequency counter corresponding to that cached data item by one;
2) Re-sort the data in the min-heap by access count, producing a new min-heap, and go to 5);
3) If the data access request misses, evict the heap-top cached data item, create a new cached data item, and insert it at the heap-top position;
4) Replace the cached data item in the cache;
5) End.
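Steps 1) to 5) can be sketched end to end as a single class (a minimal illustration under assumed names such as `LFUCache` and `access`, not the patent's reference implementation). Note that counters start at one and only grow, so a newly created item placed at the heap-top position already satisfies the min-heap property.

```python
class LFUCache:
    """Array-backed min-heap of [count, key, value] plus a position hash table."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []  # cached data items, organised as a min-heap by count
        self.pos = {}   # hash table: key -> index of the item in self.heap

    def _swap(self, i, j):
        self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
        self.pos[self.heap[i][1]] = i
        self.pos[self.heap[j][1]] = j

    def _sift_down(self, i):
        while True:
            smallest = i
            for c in (2 * i + 1, 2 * i + 2):
                if c < len(self.heap) and self.heap[c][0] < self.heap[smallest][0]:
                    smallest = c
            if smallest == i:
                return
            self._swap(i, smallest)
            i = smallest

    def _sift_up(self, i):
        while i > 0 and self.heap[i][0] < self.heap[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def access(self, key, value=None):
        if key in self.pos:                    # 1) hit: counter plus one
            i = self.pos[key]
            self.heap[i][0] += 1
            self._sift_down(i)                 # 2) re-sort the min-heap
            return self.heap[self.pos[key]][2]
        item = [1, key, value]                 # 3) miss: new cached data item
        if len(self.heap) < self.capacity:
            self.heap.append(item)
            self.pos[key] = len(self.heap) - 1
            self._sift_up(len(self.heap) - 1)
        else:                                  # evict the heap-top item and
            del self.pos[self.heap[0][1]]      # 4) replace it at the heap top;
            self.heap[0] = item                # count 1 is minimal, so the
            self.pos[key] = 0                  # heap property still holds
        return value

cache = LFUCache(2)
cache.access("a", "A")  # miss: insert "a"
cache.access("b", "B")  # miss: insert "b"
cache.access("a")       # hit: counter of "a" rises to 2
cache.access("c", "C")  # miss: heap top "b" (fewest hits) is evicted
```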
A cache replacement system, comprising the following modules:
Array module: stores the cached data items;
Hash table storage module: stores each cached data item's position in the array;
Counting module: records the access counts of the cached data items;
Cached data item sorting module: sorts the cached data items according to their access counts.
Preferably, the counting module is an access-frequency counter, and each cached data item is assigned one access-frequency counter.
Preferably, the cached data item sorting module uses a min-heap data structure.
Compared with the prior art, the cache replacement method of the present invention has the following prominent beneficial effects:
(1) The cache replacement method of the present invention uses a hash table to record each cached data item's position in the array, which improves lookup efficiency: when a data access request arrives, a hash lookup quickly determines whether the requested data hits a cached data item;
(2) A min-heap data structure implements insertion, deletion, sorting, and updating of cached data items. On a cache hit, the corresponding cached data item is updated and the cached data items are re-sorted; on a miss, the heap-top cached data item is deleted directly and a new cached data item, containing the relevant information of the current data access request, is inserted at the heap top. The method therefore has good application value.
Brief description of the drawings
Fig. 1 is a flow chart of the cache replacement method of the present invention;
Fig. 2 is a logical diagram of the min-heap organisation used by the cache replacement method of the present invention.
Detailed description of embodiments
The cache replacement method and system of the present invention are described in further detail below with reference to the drawings and embodiments.
Embodiment
As shown in Fig. 1, the cache replacement method of the present invention is based on LFU: cached data items are stored in an array, a hash table records each cached data item's position in the array, and each cached data item is assigned an access-frequency counter that records the number of times the item is hit. As shown in Fig. 2, the cached data items are organised in a min-heap and sorted according to the counters' recorded counts, so that the heap top is the cached data item with the fewest accesses. When a data access request arrives, if it hits a cached data item, the corresponding access-frequency counter is incremented by one and the cached data items are re-sorted; if it misses, the heap-top cached data item is evicted and a new cached data item is created and inserted at the heap-top position. The method specifically includes the following steps:
S1: Store cached data items in an array, and use a hash table to record each item's position in the array.
S2: Organise the cached data items in a min-heap according to their access counts.
S3: Assign each cached data item an access-frequency counter that records the number of times the item is accessed.
S4: Sort the cached data items by access count so that the heap top is the cached data item with the fewest accesses.
S5: When a data access request arrives, re-sort the items in the min-heap according to the number of times the cached data items have been accessed. The operating procedure is:
1) If the data access request hits a cached data item, increment the access-frequency counter corresponding to that cached data item by one;
2) Re-sort the data in the min-heap by access count, producing a new min-heap, and go to 5);
3) If the data access request misses, evict the heap-top cached data item, create a new cached data item, and insert it at the heap-top position;
4) Replace the cached data item in the cache;
5) End.
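As a point of comparison for the embodiment above, the same LFU policy is sometimes sketched with Python's standard `heapq` module and lazy deletion: instead of keeping a position hash table and re-sorting in place, every hit pushes a fresh `(count, key)` entry, and stale entries are discarded only when the least-frequently used key is popped. This is an assumed alternative for illustration, not the patent's method; it trades the in-place re-sorting for extra heap entries.

```python
import heapq

counts = {}  # key -> current access count
heap = []    # (count, key) entries; older entries may be stale

def touch(key):
    """Record one access to `key` (hit or first insertion)."""
    counts[key] = counts.get(key, 0) + 1
    heapq.heappush(heap, (counts[key], key))

def pop_least_frequent():
    """Evict and return the least-frequently accessed key, skipping stale entries."""
    while heap:
        count, key = heapq.heappop(heap)
        if counts.get(key) == count:  # entry still reflects the live counter?
            del counts[key]
            return key
    return None

touch("a")
touch("a")
touch("b")
victim = pop_least_frequent()  # "b" has the fewest accesses
```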
A cache replacement system includes the following modules:
Array module: stores the cached data items.
Hash table storage module: stores each cached data item's position in the array.
Counting module: records the access counts of the cached data items. The counting module is an access-frequency counter, with one counter assigned to each cached data item; if a data access request hits a cached data item, the counter corresponding to that item is incremented by one.
Cached data item sorting module: sorts the cached data items according to the access counts recorded by the access-frequency counters. The sorting module uses a min-heap data structure.
The embodiment described above is a preferred embodiment of the present invention. Usual variations and substitutions made by those skilled in the art within the scope of the technical scheme of the present invention shall all fall within the protection scope of the present invention.
Claims (6)
- 1. A cache replacement method, characterised in that: the cache replacement method is based on LFU; cached data items are stored in an array; a hash table records each cached data item's position in the array; each cached data item is assigned an access-frequency counter that records the number of times the cached data item is hit; the cached data items are organised in a min-heap and sorted according to the counts recorded by the access-frequency counters, so that the heap top is the cached data item with the fewest accesses; when a data access request arrives, if the data hits a cached data item, the access-frequency counter corresponding to that cached data item is incremented by one and the cached data items are re-sorted; if the data misses, the heap-top cached data item is evicted and a new cached data item is created and inserted at the heap-top position.
- 2. The cache replacement method according to claim 1, characterised in that the cache replacement method specifically includes the following steps: S1: store cached data items in an array, and use a hash table to record each item's position in the array; S2: organise the cached data items in a min-heap according to their access counts; S3: assign each cached data item an access-frequency counter that records the number of times each cached data item is accessed; S4: sort the cached data items by access count so that the heap top is the cached data item with the fewest accesses; S5: when a data access request arrives, re-sort the items in the min-heap according to the number of times the cached data items have been accessed.
- 3. The cache replacement method according to claim 2, characterised in that the operating procedure of step S5 is: 1) if the data access request hits a cached data item, increment the access-frequency counter corresponding to that cached data item by one; 2) re-sort the data in the min-heap by access count, producing a new min-heap, and go to 5); 3) if the data access request misses, evict the heap-top cached data item, create a new cached data item, and insert it at the heap-top position; 4) replace the cached data item in the cache; 5) end.
- 4. A cache replacement system, characterised in that it comprises the following modules: an array module for storing cached data items; a hash table storage module for storing each cached data item's position in the array; a counting module for recording the access counts of cached data items; and a cached data item sorting module for sorting the cached data items according to their access counts.
- 5. The cache replacement system according to claim 4, characterised in that the counting module is an access-frequency counter, and each cached data item is assigned one access-frequency counter.
- 6. The cache replacement system according to claim 4 or 5, characterised in that the cached data item sorting module uses a min-heap data structure.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710661703.2A CN107451071A (en) | 2017-08-04 | 2017-08-04 | A kind of caching replacement method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107451071A true CN107451071A (en) | 2017-12-08 |
Family
ID=60490865
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710661703.2A Pending CN107451071A (en) | 2017-08-04 | 2017-08-04 | A kind of caching replacement method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107451071A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6763440B1 (en) * | 2000-06-02 | 2004-07-13 | Sun Microsystems, Inc. | Garbage collection using nursery regions for new objects in a virtual heap |
CN1996268A (en) * | 2006-12-28 | 2007-07-11 | 北京时代民芯科技有限公司 | Method for implementing on-chip command cache |
CN102231139A (en) * | 2011-06-29 | 2011-11-02 | 内蒙古大学 | Subgroup-based self-adapting cache memory block replacement policy |
CN103049399A (en) * | 2012-12-31 | 2013-04-17 | 北京北大众志微系统科技有限责任公司 | Substitution method for inclusive final stage cache |
CN103106153A (en) * | 2013-02-20 | 2013-05-15 | 哈尔滨工业大学 | Web cache replacement method based on access density |
- 2017-08-04: CN application CN201710661703.2A published as CN107451071A, status: Pending
Non-Patent Citations (2)
Title |
---|
Developer Knowledge Base (开发者知识库): "An LFU implementation based on HashHeap", 《HTTP://WWW.ITDAAN.COM/BLOG/2015/10/27/16BB5FE37F8D704D1F142314378E764E.HTML》 * |
Man Xiaomao (满小茂): "Page replacement algorithms LRU & LFU", 《HTTPS://MY.OSCHINA.NET/MANMAO/BLOG/603253》 * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108153890A (en) * | 2017-12-28 | 2018-06-12 | 泰康保险集团股份有限公司 | Buffer memory management method and device |
CN109857934A (en) * | 2019-01-21 | 2019-06-07 | 广州大学 | Software module cache prefetching method, apparatus and medium based on user behavior analysis |
CN110569261A (en) * | 2019-08-09 | 2019-12-13 | 苏州浪潮智能科技有限公司 | method and device for updating resources stored in cache region |
CN110569261B (en) * | 2019-08-09 | 2022-07-12 | 苏州浪潮智能科技有限公司 | Method and device for updating resources stored in cache region |
CN111221749A (en) * | 2019-11-15 | 2020-06-02 | 新华三半导体技术有限公司 | Data block writing method and device, processor chip and Cache |
CN111176560A (en) * | 2019-12-17 | 2020-05-19 | 腾讯科技(深圳)有限公司 | Cache management method and device, computer equipment and storage medium |
CN111176560B (en) * | 2019-12-17 | 2022-02-18 | 腾讯科技(深圳)有限公司 | Cache management method and device, computer equipment and storage medium |
CN112015679A (en) * | 2020-08-07 | 2020-12-01 | 苏州浪潮智能科技有限公司 | Cache optimization method and system based on access frequency |
CN113434091A (en) * | 2021-07-07 | 2021-09-24 | 中国人民解放军国防科技大学 | Cold and hot key value identification method based on hybrid DRAM-NVM |
CN113760782A (en) * | 2021-08-23 | 2021-12-07 | 南京森根科技股份有限公司 | Dynamically adjustable annular cache system and control method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107451071A (en) | A kind of caching replacement method and system | |
CN103885728B (en) | A kind of disk buffering system based on solid-state disk | |
Tang et al. | {RIPQ}: Advanced photo caching on flash for facebook | |
Wu et al. | {AC-Key}: Adaptive caching for {LSM-based}{Key-Value} stores | |
CN102760101B (en) | SSD-based (Solid State Disk) cache management method and system | |
CN104794064B (en) | A kind of buffer memory management method based on region temperature | |
CN103856567B (en) | Small file storage method based on Hadoop distributed file system | |
US7962700B2 (en) | Systems and methods for reducing latency for accessing compressed memory using stratified compressed memory architectures and organization | |
US10564880B2 (en) | Data deduplication method and apparatus | |
CN102542034B (en) | A kind of result set cache method of database interface | |
US20080059728A1 (en) | Systems and methods for masking latency of memory reorganization work in a compressed memory system | |
US9507705B2 (en) | Write cache sorting | |
CN109446117B (en) | Design method for page-level flash translation layer of solid state disk | |
CN111930316B (en) | Cache read-write system and method for content distribution network | |
US11163691B2 (en) | Apparatus and method for performing address translation using buffered address translation data | |
JP2004133933A (en) | Method and profiling cache for management of virtual memory | |
US10061517B2 (en) | Apparatus and method for data arrangement | |
US7716424B2 (en) | Victim prefetching in a cache hierarchy | |
US8402198B1 (en) | Mapping engine for a storage device | |
US11630779B2 (en) | Hybrid storage device with three-level memory mapping | |
CN101404649B (en) | Data processing system based on CACHE and its method | |
Chang et al. | Stable greedy: Adaptive garbage collection for durable page-mapping multichannel SSDs | |
WO2012021847A2 (en) | Apparatus, system and method for caching data | |
US20170262485A1 (en) | Non-transitory computer-readable recording medium, data management device, and data management method | |
CN102354301A (en) | Cache partitioning method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20171208 |