CN114238417A - Data caching method - Google Patents


Info

Publication number
CN114238417A
CN114238417A (Application CN202111611719.5A)
Authority
CN
China
Prior art keywords
cache
data
space
function
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111611719.5A
Other languages
Chinese (zh)
Inventor
谢彰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Qiruike Technology Co Ltd
Original Assignee
Sichuan Qiruike Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Qiruike Technology Co Ltd filed Critical Sichuan Qiruike Technology Co Ltd
Priority to CN202111611719.5A
Publication of CN114238417A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/24552 — Database cache management (under G06F 16/00 Information retrieval; G06F 16/24 Querying; G06F 16/2455 Query execution)
    • G06F 3/0608 — Saving storage space on storage systems
    • G06F 3/061 — Improving I/O performance
    • G06F 3/0643 — Management of files
    • G06F 3/0652 — Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a data caching method based on the LruCache storage mechanism. The stored content and the eviction-and-deletion operations are encapsulated, and evicted data is transferred to a file cache space after being deleted from the memory cache space. On the basis of the lightweight-storage and fast-reading characteristics of LruCache, the invention achieves long-term retention of cached data.

Description

Data caching method
Technical Field
The invention relates to the technical field of data caching, and in particular to a data caching method.
Background
Data caching is an essential and frequently used function in Android application development, and in the era of the Internet of Things the handling of large amounts of network data is an important part of daily development. Requests to cloud interfaces and the loading of cached data determine the user's experience of the whole app, so the reasonable use of caching is very important for processing network-request data.
At present, the common caching solution in the native Android system uses LruCache and DiskLruCache to provide a two-level cache for network-request data; most implementations are ad hoc, written and packaged per project, and require frequent re-coding. This two-level approach makes the system cache the same data in both LruCache and DiskLruCache at once, wasting memory space and duplicating processor work. Moreover, the LruCache mechanism imposes a hard limit on memory capacity: when the cached data exceeds it, entries are forcibly evicted and deleted, so that when the system later reads the cache again it cannot access the deleted data and throws an exception.
Disclosure of Invention
In order to solve the above problems in the prior art, the invention aims to provide a data caching method that achieves long-term retention of cached data while preserving the lightweight-storage and fast-reading characteristics of LruCache.
In order to achieve this purpose, the invention adopts the following technical scheme. A data caching method, based on the LruCache storage mechanism, encapsulates the stored content and the eviction-and-deletion operations; evicted data is stored in a file cache space after being deleted from the memory cache space. The method specifically comprises the following steps:
step 1, based on a memory cache and a file cache, providing a path and a value for each cached datum, wherein the path is stored in the memory cache, namely the LruCache mechanism, and the value is uniformly stored in the file cache; setting the size of the memory cache, the size of the path cache space in the file cache, the size of the value cache space in the file cache, and the name of the storage folder of the file cache; and presetting the cache space size, namely the size to be handled by the sizeOf() function;
step 2, writing to the cache by calling the put() function: the key of the data to be cached is stored as a path in the memory space, the corresponding value is stored in the file cache space in the form of a file, a cache thread is started, and the validity period of the cache entry is set;
step 3, when network data or network pictures are requested, caching the corresponding data; when a large number of cached network pictures fill the configured memory space, calling the trimSize() function to transfer the paths of cache entries that have not been used for a long time, thereby freeing the cache space;
and step 4, reading the cached data by calling the get() function: before the page issues a network request when it is opened again, the value in the corresponding file store is loaded from the locally cached path, so that the page is rendered promptly.
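The scheme of steps 1 to 4 can be sketched in plain Java. This is only an illustrative model, not the patent's implementation: the class name `TwoLevelCacheSketch`, the `cache/<key>` path scheme, and the in-memory `Map` standing in for file storage are all assumptions.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the two-level scheme: the memory cache (an access-ordered
// LinkedHashMap standing in for LruCache) holds only lightweight paths keyed
// by the cache key, while the heavy values live in the "file cache"
// (modelled here as a map from path to bytes).
class TwoLevelCacheSketch {
    private final int maxEntries;
    private final LinkedHashMap<String, String> memoryPaths; // key -> file path
    private final Map<String, byte[]> fileCache = new HashMap<>(); // path -> value

    TwoLevelCacheSketch(int maxEntries) {
        this.maxEntries = maxEntries;
        // accessOrder = true yields LRU iteration order, as LruCache does internally
        this.memoryPaths = new LinkedHashMap<>(16, 0.75f, true);
    }

    void put(String key, byte[] value) {
        String path = "cache/" + key;  // illustrative path scheme
        fileCache.put(path, value);    // the value itself always goes to file storage
        memoryPaths.put(key, path);    // only the path is kept in memory
        trimToSize();
    }

    byte[] get(String key) {
        String path = memoryPaths.get(key);
        if (path == null) path = "cache/" + key; // fall back to the file-side path
        return fileCache.get(path);
    }

    // Eviction removes paths from memory but never deletes the file-side value,
    // so a later get() still succeeds instead of throwing.
    private void trimToSize() {
        while (memoryPaths.size() > maxEntries) {
            String eldest = memoryPaths.keySet().iterator().next();
            memoryPaths.remove(eldest);
        }
    }

    int memorySize() { return memoryPaths.size(); }
}
```

Note how the design choice plays out: overflowing the memory budget only drops the path from memory, while the value remains readable through the file cache.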
As a further improvement of the invention, the method further comprises the following step:
step 5, clearing the cached data by calling the clearCache() function: during cache-read operations, entries whose lookup fails are deleted automatically; alternatively, a path can be supplied to delete the specified cache entry, thereby releasing the storage space.
As a further improvement of the present invention, in step 1, when the sizeOf() function is actually invoked, the call is wrapped in one layer of encapsulation and the computed size is returned, so that the storage space actually occupied by the cache is tracked.
As a further improvement of the invention, the data cache adopts the doubly linked LinkedHashMap data structure; when the LinkedHashMap is constructed, an int flag specifies its ordering mode. When the put() function of step 2 is called, the corresponding LinkedHashMap node is updated or accessed, its ordering flag is applied, and the node is moved to the tail of the linked list. As a result, the most recently stored or updated node sits at the tail of the list and the least recently used node at its head; when the storage space computed by the sizeOf() function is insufficient, cached data is transferred out starting from the head of the list.
As a further improvement of the present invention, in step 3 the trimSize() function is called to dump the paths of cache entries that have not been used for a long time, cleaning the cache space as follows:
whenever the put() function writes a new cache entry, the trimSize() function cleans the cache space to ensure that the size currently occupied by the cache does not exceed the total memory-cache size computed by the sizeOf() function. Cleaning does not remove the cached data but transfers it: when the current cache size exceeds the cache space, the least recently used data is transferred out until the size meets the limit. Because trimSize() is invoked on every put() and get() call, normal access to the memory cache is preserved and memory overflow cannot occur. The data transferred out of the memory cache space, namely the path values of the stored data, is moved into a path storage area reserved in advance within the file storage space.
The invention has the beneficial effects that:
1. The method addresses the problems that the cache space of the LruCache mechanism is limited and that data is automatically deleted to free capacity when space runs out. Through the optimization described above, the capacity for cached data is increased without changing the space, the speed at which the system reads cached data is improved, and LruCache entries are no longer deleted outright, so accessing cached data no longer throws an exception by mistake. In essence the invention is a two-level cache optimization built on the LruCache mechanism; compared with the prior art, it achieves long-term retention of cached data while keeping the lightweight-storage and fast-reading characteristics.
2. When the method is used in Android app development to cache the data associated with network-requested pages, a re-entered page can be rendered from the cache before the network request completes. Even after a large amount of network data has accumulated during long-term use of the app, page data still loads smoothly, the probability of crashes when reading cached data is reduced, user experience is greatly improved, and later maintenance costs of the app are lowered.
Drawings
FIG. 1 is a block flow diagram of an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Examples
As shown in FIG. 1, a data caching method includes:
1) When the Android application is developed, the method is integrated into the framework and each parameter is initialized. This includes, but is not limited to, providing a context for the method, enabling the debug log, setting the conversion mode of the stored data (the method supports caching common data types including plain strings, JSONObject, JSONArray, Bitmap, Drawable, serialized Java objects, byte arrays, and so on), setting the size of the memory cache, the size of the path cache space in the file cache, the size of the value cache space in the file cache, and the name of the storage folder of the file cache. The preset cache space size is the size to be handled by the sizeOf() function.
2) Write to the cache by calling the method's put() function: the key of the data to be cached is stored as a path in the memory space, the corresponding value is stored in the file cache space in the form of a file, a cache thread is started, and the validity period of the entry is set.
3) When a large number of cached network pictures fill the configured memory space, the method calls the trimSize() function to dump the paths of cache entries that have not been used for a long time.
4) Read the cached data by calling the method's get() function. Before the page issues a network request when it is opened again, the value in the corresponding file store is loaded from the locally cached path, which keeps page loading prompt, optimizes loading efficiency, and improves the user experience of the app.
5) Clear the cached data by calling the method's clearCache() function. During cache-read operations, entries whose lookup fails are deleted automatically; a path can also be supplied to delete the specified cache entry and release its storage space.
This embodiment is further illustrated below:
Android caching is divided into memory caching, file caching, and two-level caching that uses both at once. The memory cache has the characteristics of fast reads but limited storage space; the LruCache mechanism is a memory cache in which the objects are strongly referenced. LRU stands for Least Recently Used, a replacement algorithm that preferentially evicts the objects unused for the longest time. When a new object is added and the cache capacity is insufficient, the existing cached data is evicted or deleted according to this algorithm. The present method, based on the LruCache storage mechanism, encapsulates the stored content and the eviction-and-deletion operations, and transfers evicted data to the file cache (disk storage) space after it is deleted from memory, thereby avoiding the exception thrown when the system accesses the path of deleted cache data.
First, a cache path and a cache value are provided on the basis of the Android memory cache and file cache: the path is stored in the memory cache, namely the LruCache mechanism, and the value is uniformly stored in the file cache. For picture data this effectively saves memory space and reduces the probability of access errors caused by memory overflow or by the system deleting cached data. When the system reads cached data, the method first reads the path of the datum from the LruCache mechanism and then reads the real data from the file cache.
Second, the key of the LruCache mechanism lies in the sizeOf() and safeSizeOf() methods. Because the units of measurement differ across data types, the method relies on polymorphism to override sizeOf() for the specific call, wraps the call in one layer of encapsulation, and returns the computed size, thereby tracking the storage space actually occupied by the cache.
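The sizeOf()/safeSizeOf() pattern can be sketched as follows. The method names follow android.util.LruCache, but this plain-Java class and its byte-array measurement are only an assumed illustration, not the patent's code.

```java
// Sketch: subclasses override sizeOf() per data type, and the cache always
// calls the wrapping safeSizeOf(), which validates the result before adding
// it to the running total of the actual cached size.
class SizedCache {
    private int totalSize = 0;

    // Override point: measure one entry. Here we assume byte[] values.
    protected int sizeOf(String key, byte[] value) {
        return value.length;
    }

    // One layer of encapsulation around sizeOf(): reject inconsistent sizes
    // so the bookkeeping of the real cache footprint stays trustworthy.
    final int safeSizeOf(String key, byte[] value) {
        int size = sizeOf(key, value);
        if (size < 0) {
            throw new IllegalStateException("Negative size for key " + key);
        }
        return size;
    }

    void account(String key, byte[] value) {
        totalSize += safeSizeOf(key, value);
    }

    int totalSize() { return totalSize; }
}
```

A Bitmap-backed subclass would override sizeOf() to return the bitmap's byte count instead, which is what makes the polymorphic measurement useful.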
Third, the method adopts the LinkedHashMap data structure. On the basis of the LruCache mechanism it stores cached objects in access order: the paths of the data are stored in the memory cache, LinkedHashMap is a doubly linked circular list, and when the LinkedHashMap is constructed the method specifies its ordering mode through an int flag.
Fourth, the method's put() function caches data. When it is called, the corresponding LinkedHashMap node is updated or accessed, its ordering flag is applied, and the node is moved to the tail of the linked list; thus the most recently stored or updated node sits at the tail and the least recently used node at the head. When the storage space computed by sizeOf() is insufficient, cached data is dumped starting from the head of the list.
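The LinkedHashMap behaviour the text relies on can be demonstrated directly; the third constructor argument is the access-order flag the text refers to as an int value (in the standard Java API it is a boolean).

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;

// With accessOrder set in the three-argument constructor, both put() and
// get() move the touched node to the tail, so the head of the list is always
// the least recently used entry.
class AccessOrderDemo {
    static List<String> orderAfterAccess() {
        // third constructor argument = accessOrder (true -> LRU order)
        LinkedHashMap<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.get("a"); // touching "a" moves it to the tail
        return new ArrayList<>(map.keySet()); // head-to-tail order
    }
}
```

After the get("a"), the head of the map is "b", the least recently used key, which is exactly the entry an eviction pass would dump first.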
Fifth, trimSize() cleans the cache space. Whenever put() writes new cache data, the method calls trimSize() to ensure that the size currently occupied by the cache does not exceed the total memory-cache size computed by sizeOf(). Cleaning does not remove the cached data but dumps it: when the current cache size exceeds the cache space, the least recently used data is dumped until the size meets the limit. Because trimSize() is invoked on every put() and get() call, access to the memory cache stays normal and memory overflow (OOM) cannot occur. The data dumped from the memory cache space, namely the path values of the stored data, is transferred by the method's algorithm into a path storage area reserved in advance within the file storage space.
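The trimSize() behaviour can be sketched as below. The class and field names are illustrative, and an in-memory map stands in for the pre-allocated path storage area on the file side.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: when the tracked size exceeds the memory budget, the least recently
// used entries are not deleted but dumped into the file-side path storage area.
class TrimToFileSketch {
    private final int maxBytes;
    private int usedBytes = 0;
    private final LinkedHashMap<String, byte[]> memory =
            new LinkedHashMap<>(16, 0.75f, true); // access order = LRU
    private final Map<String, byte[]> filePathArea = new HashMap<>(); // disk stand-in

    TrimToFileSketch(int maxBytes) { this.maxBytes = maxBytes; }

    void put(String key, byte[] value) {
        memory.put(key, value);
        usedBytes += value.length;
        trimSize(); // invoked on every write, as the text describes
    }

    // Dump (not delete) least-recently-used entries until the budget holds.
    private void trimSize() {
        Iterator<Map.Entry<String, byte[]>> it = memory.entrySet().iterator();
        while (usedBytes > maxBytes && it.hasNext()) {
            Map.Entry<String, byte[]> eldest = it.next(); // head = LRU entry
            filePathArea.put(eldest.getKey(), eldest.getValue());
            usedBytes -= eldest.getValue().length;
            it.remove();
        }
    }

    boolean inMemory(String key) { return memory.containsKey(key); }
    boolean onDisk(String key)   { return filePathArea.containsKey(key); }
}
```

Because trimSize() moves entries rather than discarding them, no key ever becomes unreachable, which is the property that prevents the exception described in the background section.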
Sixth, get() obtains cached data. According to the provided key, the method preferentially searches the memory cache of the LruCache mechanism for the cached path and, once the corresponding entry is found, accesses the concrete value in the file cache directly via that path. If the path value is not found in the memory cache, the path storage area carved out of the file storage area is queried, and after the path value is found there, the corresponding value in the file storage area is accessed.
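The two-stage lookup can be sketched compactly. All structures here are in-memory stand-ins and the names are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the two-stage get(): the path is searched first in the memory
// cache; on a miss, the path storage area carved out of the file cache is
// consulted, and only then is the value read from file storage.
class LookupSketch {
    final Map<String, String> memoryPaths = new HashMap<>();  // LruCache stand-in
    final Map<String, String> filePathArea = new HashMap<>(); // dumped paths
    final Map<String, String> fileValues = new HashMap<>();   // path -> value

    String get(String key) {
        String path = memoryPaths.get(key);
        if (path == null) {
            path = filePathArea.get(key); // fallback after memory eviction
        }
        return path == null ? null : fileValues.get(path);
    }
}
```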
Seventh, clearCache() clears the cache. The method automatically clears cache entries whose paths, when queried in the memory cache and the file cache, prove abnormal for whatever reason, and returns null, thereby releasing the cache, reclaiming resources, and improving system performance.
The above embodiments only express specific implementations of the present invention, and although their description is relatively specific and detailed, they are not to be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, all of which fall within the protection scope of the present invention.

Claims (5)

1. A data caching method, characterized in that, based on the LruCache storage mechanism, the stored content and the eviction-and-deletion operations are encapsulated, and evicted data is transferred to a file cache space after being deleted from the memory cache space, the method specifically comprising the following steps:
step 1, based on a memory cache and a file cache, providing a path and a value for each cached datum, wherein the path is stored in the memory cache, namely the LruCache mechanism, and the value is uniformly stored in the file cache; setting the size of the memory cache, the size of the path cache space in the file cache, the size of the value cache space in the file cache, and the name of the storage folder of the file cache; and presetting the cache space size, namely the size to be handled by the sizeOf() function;
step 2, writing to the cache by calling the put() function: the key of the data to be cached is stored as a path in the memory space, the corresponding value is stored in the file cache space in the form of a file, a cache thread is started, and the validity period of the cache entry is set;
step 3, when network data or network pictures are requested, caching the corresponding data; when a large number of cached network pictures fill the configured memory space, calling the trimSize() function to transfer the paths of cache entries that have not been used for a long time, thereby freeing the cache space;
and step 4, reading the cached data by calling the get() function: before the page issues a network request when it is opened again, the value in the corresponding file store is loaded from the locally cached path, so that the page is rendered promptly.
2. The data caching method of claim 1, further comprising the following step:
step 5, clearing the cached data by calling the clearCache() function, wherein during cache-read operations entries whose lookup fails are deleted automatically; or a path is provided to delete the specified cache entry, thereby releasing the storage space.
3. The data caching method according to claim 1, wherein in step 1, when the sizeOf() function is actually invoked, the call is wrapped in one layer of encapsulation and the computed size is returned, so that the storage space actually occupied by the cache is tracked.
4. The data caching method according to claim 3, wherein the data cache adopts the doubly linked LinkedHashMap data structure, and when the LinkedHashMap is constructed, an int flag specifies its ordering mode; when the put() function of step 2 is called, the corresponding LinkedHashMap node is updated or accessed, its ordering flag is applied, and the node is moved to the tail of the linked list, so that the most recently stored or updated node sits at the tail of the list and the least recently used node at its head; and when the storage space computed by the sizeOf() function is insufficient, cached data is transferred out starting from the head of the list.
5. The data caching method of claim 4, wherein in step 3 the trimSize() function is called to dump the paths of cache entries that have not been used for a long time, cleaning the cache space as follows:
whenever the put() function writes a new cache entry, the trimSize() function cleans the cache space to ensure that the size currently occupied by the cache does not exceed the total memory-cache size computed by the sizeOf() function; cleaning does not remove the cached data but transfers it, and when the current cache size exceeds the cache space, the least recently used data is transferred out until the size meets the limit; because the trimSize() function is invoked on every put() and get() call, normal access to the memory cache is preserved and memory overflow cannot occur; and the data transferred out of the memory cache space, namely the path values of the stored data, is moved into a path storage area reserved in advance within the file storage space.
CN202111611719.5A 2021-12-27 2021-12-27 Data caching method Pending CN114238417A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111611719.5A CN114238417A (en) 2021-12-27 2021-12-27 Data caching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111611719.5A CN114238417A (en) 2021-12-27 2021-12-27 Data caching method

Publications (1)

Publication Number Publication Date
CN114238417A true CN114238417A (en) 2022-03-25

Family

ID=80763354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111611719.5A Pending CN114238417A (en) 2021-12-27 2021-12-27 Data caching method

Country Status (1)

Country Link
CN (1) CN114238417A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115225623A (en) * 2022-07-20 2022-10-21 贵阳语玩科技有限公司 Network picture loading method, device and medium based on Unity engine

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2009765A1 (en) * 1989-05-17 1990-11-17 Ernest Dysart Baker Fault tolerant data processing system
CN1836268A (en) * 2003-06-20 2006-09-20 汤姆森普罗梅特里克公司 System and method for computer based testing using cache and cacheable objects to expand functionality of a test driver application
US20080147974A1 (en) * 2006-12-18 2008-06-19 Yahoo! Inc. Multi-level caching system
CN102508638A (en) * 2011-09-27 2012-06-20 华为技术有限公司 Data pre-fetching method and device for non-uniform memory access
DE102013210839A1 (en) * 2012-06-15 2013-12-19 International Business Machines Corp. Method for facilitating processing within multiprocessor-data processing environment, involves performing action based on determination that transaction is cancelled for number of times, and repeating transaction once or multiple times
CN104778270A (en) * 2015-04-24 2015-07-15 成都汇智远景科技有限公司 Storage method for multiple files
CN106557396A (en) * 2015-09-25 2017-04-05 北京计算机技术及应用研究所 Virtual machine program running state monitoring method based on qemu
CN106802955A (en) * 2017-01-19 2017-06-06 济南浪潮高新科技投资发展有限公司 A kind of image data caching method
CN107203555A (en) * 2016-03-17 2017-09-26 阿里巴巴集团控股有限公司 Page loading processing method and device
CN111159066A (en) * 2020-01-07 2020-05-15 杭州电子科技大学 Dynamically-adjusted cache data management and elimination method
CN111694547A (en) * 2019-03-12 2020-09-22 湛江市霞山区新软佳科技有限公司 Automatic coding data processing application design tool based on data state change

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2009765A1 (en) * 1989-05-17 1990-11-17 Ernest Dysart Baker Fault tolerant data processing system
EP0398694A2 (en) * 1989-05-17 1990-11-22 International Business Machines Corporation Fault tolerant data processing system
CN1836268A (en) * 2003-06-20 2006-09-20 汤姆森普罗梅特里克公司 System and method for computer based testing using cache and cacheable objects to expand functionality of a test driver application
US20080147974A1 (en) * 2006-12-18 2008-06-19 Yahoo! Inc. Multi-level caching system
CN102508638A (en) * 2011-09-27 2012-06-20 华为技术有限公司 Data pre-fetching method and device for non-uniform memory access
DE102013210839A1 (en) * 2012-06-15 2013-12-19 International Business Machines Corp. Method for facilitating processing within multiprocessor-data processing environment, involves performing action based on determination that transaction is cancelled for number of times, and repeating transaction once or multiple times
CN104778270A (en) * 2015-04-24 2015-07-15 成都汇智远景科技有限公司 Storage method for multiple files
CN106557396A (en) * 2015-09-25 2017-04-05 北京计算机技术及应用研究所 Virtual machine program running state monitoring method based on qemu
CN107203555A (en) * 2016-03-17 2017-09-26 阿里巴巴集团控股有限公司 Page loading processing method and device
CN106802955A (en) * 2017-01-19 2017-06-06 济南浪潮高新科技投资发展有限公司 A kind of image data caching method
CN111694547A (en) * 2019-03-12 2020-09-22 湛江市霞山区新软佳科技有限公司 Automatic coding data processing application design tool based on data state change
CN111159066A (en) * 2020-01-07 2020-05-15 杭州电子科技大学 Dynamically-adjusted cache data management and elimination method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DANIEL SANCHEZ et al.: "The ZCache: Decoupling Ways and Associativity", 2010 43rd Annual IEEE/ACM International Symposium on Microarchitecture, 8 December 2010 (2010-12-08), pages 1-10 *
LIU Zhipeng: "MCDS: a single-machine implementation for large-scale mobile communication data computation", Journal of University of Science and Technology of China, vol. 46, no. 01, 15 January 2016 (2016-01-15), pages 36-46 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115225623A (en) * 2022-07-20 2022-10-21 贵阳语玩科技有限公司 Network picture loading method, device and medium based on Unity engine
CN115225623B (en) * 2022-07-20 2023-08-29 贵阳语玩科技有限公司 Method, device and medium for loading network picture based on Unity engine

Similar Documents

Publication Publication Date Title
US11307765B2 (en) System and methods for storage data deduplication
US6658533B1 (en) Method and apparatus for write cache flush and fill mechanisms
US5577227A (en) Method for decreasing penalty resulting from a cache miss in multi-level cache system
US8949544B2 (en) Bypassing a cache when handling memory requests
US5778430A (en) Method and apparatus for computer disk cache management
US5687368A (en) CPU-controlled garbage-collecting memory module
US6738875B1 (en) Efficient write-watch mechanism useful for garbage collection in a computer system
CN109800185B (en) Data caching method in data storage system
US6078992A (en) Dirty line cache
JP5142995B2 (en) Memory page management
JP3864256B2 (en) Method and profiling cache for managing virtual memory
US20090327621A1 (en) Virtual memory compaction and compression using collaboration between a virtual memory manager and a memory manager
US20120265924A1 (en) Elastic data techniques for managing cache storage using ram and flash-based memory
US8990159B2 (en) Systems and methods for durable database operations in a memory-mapped environment
US11449430B2 (en) Key-value store architecture for key-value devices
JP2022050016A (en) Memory system
US11341042B2 (en) Storage apparatus configured to manage a conversion table according to a request from a host
TW200417857A (en) Allocating cache lines
US20080301372A1 (en) Memory access control apparatus and memory access control method
CN114238417A (en) Data caching method
US6256711B1 (en) Method for purging unused data from a cache memory
US20230297257A1 (en) Resiliency and performance for cluster memory
US20230273751A1 (en) Resiliency and performance for cluster memory
US11907065B2 (en) Resiliency and performance for cluster memory
CN110162268A (en) It is calculated using real-time for the method and system by block data tissue and placement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination