CN111858556A - Distributed cache management method based on version control and manager - Google Patents

Distributed cache management method based on version control and manager

Info

Publication number
CN111858556A
CN111858556A (application number CN202010708455.4A)
Authority
CN
China
Prior art keywords
cache
data
version
version number
distributed
Prior art date
Legal status
Pending
Application number
CN202010708455.4A
Other languages
Chinese (zh)
Inventor
刘津
赵山
许晓笛
刘金伟
马少博
张哲铭
王亚楠
刘步云
Current Assignee
Inspur Cloud Information Technology Co Ltd
Original Assignee
Inspur Cloud Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Inspur Cloud Information Technology Co Ltd filed Critical Inspur Cloud Information Technology Co Ltd
Priority to CN202010708455.4A priority Critical patent/CN111858556A/en
Publication of CN111858556A publication Critical patent/CN111858556A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval of structured data, e.g. relational data
    • G06F 16/21: Design, administration or maintenance of databases
    • G06F 16/219: Managing data history or versioning
    • G06F 16/24: Querying
    • G06F 16/245: Query processing
    • G06F 16/2455: Query execution
    • G06F 16/24552: Database cache management
    • G06F 16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a distributed cache management method and manager based on version control. The method implements the cache service of a distributed system with a key-value in-memory database and manages cache writes and deletions by introducing version control. On the basis of managing the data cache in bypass (cache-aside) mode, the distributed cache manager implemented with the version control technique introduces cache version control and thereby effectively guarantees the double-write consistency of the distributed cache system, avoiding dirty reads of cached data that can occur in bypass mode, at the cost of very little extra cache space and almost no added performance overhead. The method is simple to implement, efficient and reliable, supports multiple programming languages, and supports both integrated and standalone deployment. In addition, by analyzing the cache version number, how frequently the cached data is written and updated can be determined, which improves the targeted handling of write-hot data.

Description

Distributed cache management method based on version control and manager
Technical Field
The invention relates to the technical field of cloud computing/cloud storage systems, and in particular provides a distributed cache management method and manager based on version control.
Background
Caching is a common technique for improving the performance and scalability of a system by temporarily copying frequently accessed data into fast storage close to the application. It is mainly used in business scenarios with massive concurrent read requests and has been widely adopted by internet companies in recent years.
Caching generally involves the following concepts:
Hit: a data request is answered from the cache and the required data is returned (otherwise it is a miss);
Timeout: the cached data has existed for the maximum Time To Live (TTL) it is allowed to remain in the system;
Eviction: the cache system discards some cached data (it is overwritten by new data) according to a certain policy;
Invalidation: data in the cache is removed because its preset timeout has been reached, or because it was evicted according to a preset eviction policy after the cache storage space reached its upper limit;
Penetration: the phenomenon in which the data requested by a query is not in the cache, so the query goes directly to the persistence layer;
Breakdown: the phenomenon in which a large number of query requests hit the database directly at the moment the cache becomes invalid.
There are three common cache usage patterns:
1. Bypass mode (Cache Aside Pattern): in the bypass mode, a read request that misses the cache reads the data directly from the persistence layer and then writes it back to the cache; a write request writes the data directly to the persistence layer and then evicts the cache entry (a sketch of this pattern is given after the list of modes below).
The bypass mode is simple to implement; its disadvantage is that the application must maintain both the cache and the persistence layer, which introduces the double-write consistency problem.
2. Read-through/write-through mode (Read/Write Through Pattern): in this mode, a read request that misses the cache allocates a cache block, reads the data from the persistence layer into the cache, and finally returns the data; when a write request hits the cache, the cache block is updated directly, and when it misses, the data is written directly to the persistence layer.
The read-through/write-through mode has the advantages that only the cache needs to be maintained and cache usability is high; its disadvantages are that a write may be lost once the cached data is lost, and that updating the cache is less efficient than simply evicting it.
3. Background write mode (Write Behind Pattern): in the background write mode, when a read request hits the cache, the data is returned directly after confirming that the cache block holds a value; on a miss, a cache block is first allocated and then the same operation is performed. When a write request hits the cache, the new data is written directly into the cache block and the block is then confirmed to hold a value; on a miss, a cache block is first allocated and then the same operation is performed. When a cache block is confirmed: if it already holds a value, the previous value of the block is written back to the persistence layer and the data is then read from the persistence layer into the block; if it holds no value, the data is read directly from the persistence layer into the block.
The background write mode has the advantages that reads and writes operate only on memory, the confirmation step can be performed asynchronously, multiple cache writes can be merged before being persisted, and the read/write speed is high. Its disadvantages are that the data is not strongly consistent, data in the cache may be lost, the implementation logic is complex, and it must be decided which cached data needs to be flushed to the persistence layer and which persisted data needs to be loaded into the cache.
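For illustration, the following is a minimal sketch of the bypass (cache-aside) pattern from item 1 above. The CacheStore and Database interfaces and the class name CacheAside are assumptions introduced for the example and are not defined by the patent.

```java
// Minimal sketch of the cache-aside (bypass) pattern from item 1 above.
// CacheStore and Database are hypothetical interfaces, not defined by the patent.
interface CacheStore {
    String get(String key);
    void put(String key, String value);
    void delete(String key);
}

interface Database {
    String query(String key);
    void update(String key, String value);
}

class CacheAside {
    private final CacheStore cache;
    private final Database db;

    CacheAside(CacheStore cache, Database db) {
        this.cache = cache;
        this.db = db;
    }

    // Read path: return from the cache on a hit; on a miss, read from the
    // persistence layer and write the result back to the cache.
    String read(String key) {
        String value = cache.get(key);
        if (value != null) {
            return value;              // cache hit
        }
        value = db.query(key);         // cache miss: go to the persistence layer
        if (value != null) {
            cache.put(key, value);     // write back to the cache
        }
        return value;
    }

    // Write path: update the persistence layer first, then evict the cache entry.
    void write(String key, String value) {
        db.update(key, value);
        cache.delete(key);
    }
}
```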
The core idea of caching is to trade strong consistency for performance. Among the three patterns, the bypass mode performs better than the read-through/write-through mode and is simpler to implement than the background write mode, so it is the most widely used of the three. However, because the cache and the persistence layer are maintained at the same time, the "double write consistency" problem arises in extreme cases: with the bypass-mode cache model, after a write request has updated the persistence layer and evicted the cache, an earlier read request may, because of network delay, still return old data from the persistence layer and then write it back into the cache, so that the data in the cache and the data in the persistence layer become inconsistent.
As shown in FIG. 5, a read request and a write request are two concurrent threads accessing the same persistence system. The read request first queries the cache (r1.request), misses (r2.miss), and therefore queries the persistence layer directly (r3.request). Normally this query would return immediately, but for some reason (such as network jitter) it does not. Meanwhile, the write request first updates the persistence layer (w1.update); after the update returns (w2.update) it clears the cached data (w3.delete). Only then does the read request receive the persistence layer's response, which still carries the old data (r4.delayed response with old data), and after the write request's deletion has completed (w4.deleted) it overwrites the cache with the old data obtained earlier (r5.update with old data). The next read request (r7.request) therefore gets the dirty data brought back by the previous read request (r8.response with dirty data), even though the persistence layer was already updated at w1.update, so the cache and the persistence layer become inconsistent.
Generally, to mitigate this problem an expiration time is set for the cache: after the cached entry expires, the next read request penetrates the cache again, queries the updated data from the persistence layer, and stores it in the cache. However, if the cache expiration time is set too long, the dirty data lives for a long time, and clients that frequently read it and use it to update other data can trigger a chain reaction; if the expiration time is set too short, frequent cache breakdown or even an avalanche may occur, degrading system performance and defeating the purpose of the cache.
Therefore, besides setting an expiration time for the cache, the conventional approach is to manage the distributed cache with a "delayed double delete" policy: after a write request deletes the cached data for the first time, it deletes the cached data again after a period of time, in order to clear any dirty data that a read request delayed by network jitter may have reintroduced by writing back old data in the meantime. However, the delay is difficult to choose: if it is too short, dirty data may not be removed effectively (dirty data can still be introduced into the cache after the second deletion); if it is too long, dirty data is not removed promptly (data read before the second deletion may still be dirty). A sketch of this conventional policy follows.
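For comparison only, here is a minimal sketch of the conventional "delayed double delete" policy described above (the prior-art workaround, not the method of the invention). The CacheStore and Database interfaces are redeclared so the sketch is self-contained, and the delay value is an arbitrary assumption, which is exactly the parameter the text says is hard to choose.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the conventional "delayed double delete" policy (prior art).
// Hypothetical interfaces, redeclared here for self-containment.
interface CacheStore { String get(String key); void put(String key, String value); void delete(String key); }
interface Database { String query(String key); void update(String key, String value); }

class DelayedDoubleDelete {
    private static final long DELAY_MS = 500;   // assumed delay; hard to choose in practice

    private final CacheStore cache;
    private final Database db;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    DelayedDoubleDelete(CacheStore cache, Database db) {
        this.cache = cache;
        this.db = db;
    }

    void write(String key, String value) {
        cache.delete(key);       // first deletion
        db.update(key, value);   // update the persistence layer
        // second deletion after a delay, to clear dirty data that a slow
        // read request may have written back in the meantime
        scheduler.schedule(() -> cache.delete(key), DELAY_MS, TimeUnit.MILLISECONDS);
    }
}
```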
Disclosure of Invention
The technical task of the invention, in view of the problems above, is to provide a distributed cache management method and manager based on version control that solve the possible inconsistency between the cache and the persistence layer in bypass mode by introducing version control over cache data keys.
In order to achieve the purpose, the invention provides the following technical scheme:
a distributed cache management method based on version control, the method realizes the cache service of a distributed system by using a key value memory database, and manages the writing and deleting of the cache by introducing the version control; by using a small memory overhead, the data consistency of the cache and the persistence layer is improved without significant performance degradation.
After receiving a read request, the method attempts to obtain the cache version number before querying the requested data; when the cache misses, it attempts to obtain the cache version number of the entity object again after reading the data from the persistence layer, and decides whether to update the cache by comparing the two cache version numbers.
When processing a write request, the method first increments the cache version number by 1, then updates the persistence-layer data, and finally clears the cache content.
The cache version number is generated as follows:
Before a write request updates the persistence layer, the type name (table name) and the unique identifier (primary key) of the cached entity are concatenated with an '@' symbol to form the key of the cache version number (for example, table name@primary key), and a self-incrementing sequence produced by a counter is used as the value of the version number, as sketched below.
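A minimal sketch of the version-key generation and write path just described, using the Jedis client for Redis. The class name, the "cache:" key prefix, and the Database interface (redeclared so the sketch is self-contained) are illustrative assumptions.

```java
import redis.clients.jedis.Jedis;

// Hypothetical persistence-layer interface, redeclared for self-containment.
interface Database { String query(String id); void update(String id, String value); }

// Sketch of the write path: increment the cache version number, update the
// persistence layer, then clear the cached content.
class VersionedCacheWriter {
    private final Jedis redis;
    private final Database db;

    VersionedCacheWriter(Jedis redis, Database db) {
        this.redis = redis;
        this.db = db;
    }

    // Version-number key: "<table name>@<primary key>", e.g. "book@42"
    static String versionKey(String tableName, String primaryKey) {
        return tableName + "@" + primaryKey;
    }

    void write(String tableName, String primaryKey, String value) {
        // 1. bump the cache version number (Redis INCR yields a self-incrementing sequence)
        redis.incr(versionKey(tableName, primaryKey));
        // 2. update the persistence-layer data
        db.update(primaryKey, value);
        // 3. clear the cached content (the cached data itself is kept in Redis
        //    under an assumed "cache:" prefix in this sketch)
        redis.del("cache:" + versionKey(tableName, primaryKey));
    }
}
```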
The processing after the method receives a read request comprises the following steps (see the sketch after this list):
after receiving the read request, attempt to obtain the cache version number V1, then read the cache;
on a cache hit, return the cached data directly; on a cache miss, read the data from the persistence layer and then check whether the cache now holds a value;
if the cache holds a value, another read request has already written the new value, so return that new value directly;
if the cache holds no value, attempt to obtain the cache version number V2 and compare it with the version number V1 obtained first:
if the two version numbers are the same, no new data was written while the persistence layer was being read, so the cache is updated directly with the data read from the persistence layer;
otherwise, new data has been written and the cache has been cleared in the meantime, so the data just read is old: the cache is not updated, the data is returned directly, and the cache update is left to the next read request.
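A minimal sketch of this read path, again using Jedis. The key names and the Database interface (redeclared for self-containment) are assumptions; the essential point is that the result of the persistence-layer read is written back to the cache only if the version number did not change in between.

```java
import java.util.Objects;
import redis.clients.jedis.Jedis;

// Hypothetical persistence-layer interface, redeclared for self-containment.
interface Database { String query(String id); void update(String id, String value); }

// Sketch of the read path: the cache version number is read before and after
// the persistence-layer query, and the result is cached only if it did not change.
class VersionedCacheReader {
    private final Jedis redis;
    private final Database db;

    VersionedCacheReader(Jedis redis, Database db) {
        this.redis = redis;
        this.db = db;
    }

    String read(String tableName, String primaryKey) {
        String versionKey = tableName + "@" + primaryKey;   // e.g. "book@42"
        String cacheKey = "cache:" + versionKey;

        String v1 = redis.get(versionKey);      // try to obtain version number V1
        String cached = redis.get(cacheKey);    // read the cache
        if (cached != null) {
            return cached;                      // cache hit: return directly
        }

        String data = db.query(primaryKey);     // cache miss: read the persistence layer

        // If another read request has meanwhile written the new value, return it.
        String refreshed = redis.get(cacheKey);
        if (refreshed != null) {
            return refreshed;
        }

        // Compare version numbers V1 and V2.
        String v2 = redis.get(versionKey);
        if (data != null && Objects.equals(v1, v2)) {
            // no write happened while the persistence layer was being read:
            // it is safe to update the cache
            redis.set(cacheKey, data);
        }
        // if the versions differ, the data just read may be stale: return it
        // without caching and leave the cache update to the next read request
        return data;
    }
}
```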
The method uses Redis to store the cache version number and uses the Redis INCR command to generate the self-incrementing version number. Redis is an open-source, network-capable key-value database written in ANSI C that can run purely in memory or with persistence and provides APIs for multiple languages.
After receiving a read request, the method obtains the version number with the Redis GET command before querying the requested data; when the version number V1 is not obtained or the cache misses, it reads the data directly from the persistence layer and then obtains the version number again, and decides whether to update the cache by comparing the two version numbers.
In the implementation of the method, the cache version information is managed in a separate storage space (either a local cache or an additional distributed cache) and no expiration time is set for it; because only the cache version information is stored, the space used is small, and a large amount of version information can be kept in very little cache space.
The method sets an LRU (Least Recently Used) eviction policy on the keys that hold cache versions, ensuring that the least recently used cache versions are evicted each time, and sets a suitable expiration time for the cached data itself to ensure that its content is eventually refreshed even under unknown abnormal conditions, as sketched below.
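A sketch of this arrangement, assuming standard Redis configuration directives and the Jedis client; the maxmemory setting, the key names, and the 300-second TTL are deployment assumptions, not values given by the patent.

```java
import redis.clients.jedis.Jedis;

// Sketch of the eviction/expiration arrangement described above. The Redis
// instance that stores the version numbers would typically be configured with
//     maxmemory <size>
//     maxmemory-policy allkeys-lru
// in redis.conf so that least-recently-used version keys are evicted first
// (standard Redis settings; the exact values are deployment choices).
// Cached data, by contrast, gets an explicit expiration time.
class CacheDataWriter {
    private static final int CACHE_TTL_SECONDS = 300;   // assumed expiration time

    private final Jedis redis;

    CacheDataWriter(Jedis redis) {
        this.redis = redis;
    }

    void putCacheData(String cacheKey, String value) {
        // SETEX stores the value and sets its time-to-live in one command,
        // so the cached content is eventually refreshed even if something
        // goes wrong elsewhere.
        redis.setex(cacheKey, CACHE_TTL_SECONDS, value);
    }
}
```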
A distributed cache manager based on version control, the manager comprising: a request controller (for standalone deployment) / interceptor (for integrated deployment), a version service module, a cache service module, and a data service module, wherein:
the request controller/interceptor receives or intercepts client requests and, according to the action of each request, calls the version service of the version service module to perform the corresponding cache version control, calls the cache service of the cache service module to perform cache operations, and calls the data service of the data service module to perform persistence-layer operations;
the version service module calls the cache service of the cache service module to obtain or update the cache version number.
Compared with the prior art, the distributed cache management method and manager based on version control have the following outstanding beneficial effects:
On the basis of managing the data cache in bypass mode, the distributed cache manager implemented with the version control technique introduces cache version control and thereby effectively guarantees the double-write consistency of the distributed cache system and avoids dirty reads of cached data in bypass mode, at the cost of very little extra cache space and almost no added performance overhead. It is simple to implement, efficient and reliable, supports multiple programming languages, and supports both integrated and standalone deployment. In addition, by analyzing the cache version number, how frequently the cached data is written and updated can be determined, which improves the targeted handling of write-hot data.
Drawings
FIG. 1 is a flow chart of write request processing in the method of the present invention;
FIG. 2 is a flow chart of read request processing in the method of the present invention;
FIG. 3 is a component diagram of the cache manager of the present invention;
FIG. 4 is a class diagram of the cache manager of the present invention;
FIG. 5 is a timing diagram of the "double write consistency" problem.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
A distributed cache management method based on version control uses Redis to implement the counter function: the name and ID of the cached object are concatenated to form the key of the cache version, and the Redis INCR command increments the value of that single key by 1 each time.
Taking a data entity "book" as an example: before the cache manager receives a write request and updates the persistence layer, the name of the book entity and its unique identifier are concatenated with '@' and used as the key of the version number, and the Redis INCR (increment) command generates the self-incrementing version value; before a read request queries the persistence layer, the version number is obtained with the GET command, and when the cache misses the version number is obtained again after the data has been read from the persistence layer, with the two version numbers compared to decide whether to update the cache. A concrete illustration follows.
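A concrete illustration of the "book" example, using the Jedis client (an assumption; the method only requires the Redis INCR and GET commands). The host, port, and id value are placeholders.

```java
import redis.clients.jedis.Jedis;

// Concrete illustration of the "book" example above.
public class BookVersionExample {
    public static void main(String[] args) throws Exception {
        try (Jedis redis = new Jedis("localhost", 6379)) {
            String versionKey = "book@42";          // "<entity name>@<unique id>"

            // Write path: bump the version before updating the persistence layer.
            long newVersion = redis.incr(versionKey);
            System.out.println("version after write = " + newVersion);

            // Read path: GET the version before and after reading the
            // persistence layer, and update the cache only if it is unchanged.
            String v1 = redis.get(versionKey);
            // ... read from the persistence layer here ...
            String v2 = redis.get(versionKey);
            System.out.println("safe to cache = " + v1.equals(v2));
        }
    }
}
```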
Besides being used for version control, the cache version number can also be used to judge how frequently a given cache entry is updated, so that certain caches can be handled in a targeted way, as sketched below.
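A minimal sketch of such targeted handling. The threshold and the notion of flagging an entry as "write-hot" are assumptions for illustration; the patent only states that the version number reveals how frequently the cached data is written and updated.

```java
import redis.clients.jedis.Jedis;

// Sketch of using the version number to spot write-hot cache entries.
// The threshold is a hypothetical tuning parameter, not taken from the patent.
class WriteHotDetector {
    private static final long WRITE_HOT_THRESHOLD = 1000;   // assumed threshold

    private final Jedis redis;

    WriteHotDetector(Jedis redis) {
        this.redis = redis;
    }

    // A large version number means the entry has been written/updated often.
    boolean isWriteHot(String versionKey) {
        String version = redis.get(versionKey);
        return version != null && Long.parseLong(version) > WRITE_HOT_THRESHOLD;
    }
}
```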
As shown in FIG. 1, when the cache manager processes a write request, it first increments the cache version number by 1, then updates the persistence-layer data, and finally clears the cache content.
As shown in FIG. 2, after receiving a read request the cache manager attempts to obtain the cache version number V1 and then reads the cache. On a cache hit, the cached data is returned directly; on a miss, the data is read from the persistence layer and the cache is checked again for a value. If the cache holds a value, another read request has already written the new value, which is returned directly. If the cache holds no value, the cache version number V2 is obtained and compared with the version number obtained first: if the two are the same, no new data was written while the persistence layer was being read, and the cache is updated with the data read from the persistence layer; otherwise, new data has been written and the cache has been cleared in the meantime, the data just read is old, the cache is not updated, the data is returned directly, and the cache update is left to the next read request.
A distributed cache manager based on version control can be provided in multiple programming languages (C/Java/Golang), can be deployed as a standalone application, and can also be integrated into a business application as a dependent plug-in.
The component diagram and class diagram of the cache manager are shown in FIG. 3 and FIG. 4. The cache manager consists of a request controller (RequestController, standalone deployment) / interceptor (RequestHandler, integrated deployment), a version service (VersionService), a cache service (CacheService), and a data service (DataService). The request controller/interceptor receives or intercepts client requests and, according to the action of each request, calls the version service to perform the corresponding cache version control, calls the cache service to perform cache operations, and calls the data service to perform persistence-layer operations. The version service calls the cache service to obtain or update the cache version number. A sketch of these components follows.
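A minimal sketch of the components named above. The method signatures and the "cache:" key prefix are illustrative assumptions; the actual classes are defined by the class diagram of FIG. 4.

```java
// Sketch of the cache manager components (RequestController / RequestHandler,
// VersionService, CacheService, DataService). Signatures are assumptions.
interface CacheService {
    String get(String key);
    void set(String key, String value);
    void delete(String key);
    long increment(String key);          // used for the version counter
}

interface DataService {
    String query(String entity, String id);                 // persistence-layer read
    void update(String entity, String id, String value);    // persistence-layer write
}

// The version service delegates to the cache service to obtain or update
// the cache version number.
class VersionService {
    private final CacheService cacheService;

    VersionService(CacheService cacheService) {
        this.cacheService = cacheService;
    }

    String getVersion(String entity, String id) {
        return cacheService.get(entity + "@" + id);
    }

    long bumpVersion(String entity, String id) {
        return cacheService.increment(entity + "@" + id);
    }
}

// RequestController (standalone) / RequestHandler (integrated) orchestrates
// version control, cache operations and persistence-layer operations
// according to the action of each request.
class RequestController {
    private final VersionService versionService;
    private final CacheService cacheService;
    private final DataService dataService;

    RequestController(VersionService v, CacheService c, DataService d) {
        this.versionService = v;
        this.cacheService = c;
        this.dataService = d;
    }

    void handleWrite(String entity, String id, String value) {
        versionService.bumpVersion(entity, id);                 // 1. bump version
        dataService.update(entity, id, value);                  // 2. update persistence layer
        cacheService.delete("cache:" + entity + "@" + id);      // 3. clear cache
    }
}
```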
The above-described embodiments are merely preferred embodiments of the present invention; ordinary changes and substitutions made by those skilled in the art within the technical scope of the present invention are also included in its protection scope.

Claims (10)

1. A distributed cache management method based on version control, characterized in that the cache service of a distributed system is implemented with a key-value in-memory database, and the writing and deletion of the cache are managed by introducing version control.
2. The distributed cache management method based on version control as claimed in claim 1, wherein after receiving a read request the method obtains the cache version number before querying the requested data, obtains the cache version number of the entity object again after reading the data from the persistence layer when the cache misses, and decides whether to update the cache by comparing the two cache version numbers.
3. The distributed cache management method based on version control as claimed in claim 2, wherein, when processing a write request, the method first increments the cache version number by 1, then updates the persistence-layer data, and finally clears the cache content.
4. The distributed cache management method based on version control according to claim 2, wherein the cache version number is generated as follows:
before a write request updates the persistence layer, the type name and the unique identifier of the cached entity are concatenated with an '@' symbol to form the key of the cache version number, and a self-incrementing sequence produced by a counter is used as the value of the version number.
5. The distributed cache management method based on version control according to any one of claims 1 to 4, wherein the processing after the method receives a read request comprises:
after receiving the read request, attempting to obtain a cache version number V1, then reading the cache;
if the cache is hit, returning the cached data directly; if the version number V1 is not obtained or the cache misses, reading the data directly from the persistence layer and then judging whether the cache holds a value;
if the cache holds a value, another read request has already written the new value, so the new value is returned directly;
if the cache holds no value, attempting to obtain the cache version number V2 and comparing it with the first obtained cache version number V1:
if the two version numbers are the same, no new data was written while the persistence layer was being read, and the cache is updated directly with the data read from the persistence layer;
otherwise, new data has been written and the cache has been cleared in the meantime, the data read is old, the cache is not updated, the data is returned directly, and the cache update is left to the next read request.
6. The method according to claim 5, wherein the method uses Redis to store the cache version number and uses the Redis INCR command to generate the self-incrementing sequence of the version number.
7. The method as claimed in claim 6, wherein after receiving a read request the method obtains the version number with the Redis GET command before querying the requested data, attempts to obtain the version number of the object again after reading the data directly from the persistence layer when the version number V1 is not obtained or the cache misses, and decides whether to update the cache by comparing the two version numbers.
8. The distributed cache management method based on version control as claimed in claim 7, wherein in the implementation of the method the cache version information is managed in a separate storage space and no expiration time is set for it.
9. The distributed cache management method based on version control as claimed in claim 8, wherein the method sets an LRU eviction policy on the keys that store cache versions, ensuring that the cache versions that have not been used recently are evicted each time.
10. A distributed cache manager based on version control, the manager comprising: a request controller/interceptor, a version service module, a cache service module, and a data service module, wherein:
the request controller/interceptor receives or intercepts client requests and, according to the action of each request, calls the version service of the version service module to perform the corresponding cache version control, calls the cache service of the cache service module to perform cache operations, and calls the data service of the data service module to perform persistence-layer operations;
the version service module calls the cache service of the cache service module to obtain or update the cache version number.
CN202010708455.4A 2020-07-22 2020-07-22 Distributed cache management method based on version control and manager Pending CN111858556A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010708455.4A CN111858556A (en) 2020-07-22 2020-07-22 Distributed cache management method based on version control and manager


Publications (1)

Publication Number Publication Date
CN111858556A true CN111858556A (en) 2020-10-30

Family

ID=73002315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010708455.4A Pending CN111858556A (en) 2020-07-22 2020-07-22 Distributed cache management method based on version control and manager

Country Status (1)

Country Link
CN (1) CN111858556A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699325A (en) * 2021-01-14 2021-04-23 福建天晴在线互动科技有限公司 Method and system for guaranteeing data consistency through cache secondary elimination
CN112749198A (en) * 2021-01-21 2021-05-04 中信银行股份有限公司 Multi-level data caching method and device based on version number
CN113254465A (en) * 2021-05-25 2021-08-13 四川虹魔方网络科技有限公司 Cache final consistency updating method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102165424A (en) * 2008-09-24 2011-08-24 松下电器产业株式会社 Cache memory, memory system and control method therefor
CN102187313A (en) * 2008-10-15 2011-09-14 微软公司 Caching runtime generated code
CN102541757A (en) * 2011-11-30 2012-07-04 华为技术有限公司 Write cache method, cache synchronization method and device
CN106874465A (en) * 2017-02-15 2017-06-20 浪潮软件集团有限公司 Method for efficiently managing cache based on data version
US20200201775A1 (en) * 2018-08-25 2020-06-25 Panzura, Inc. Managing a distributed cache in a cloud-based distributed computing environment
US10678697B1 (en) * 2019-01-31 2020-06-09 Salesforce.Com, Inc. Asynchronous cache building and/or rebuilding
CN110750579A (en) * 2019-10-21 2020-02-04 浪潮云信息技术有限公司 Efficient memory distribution method and system for cloud database Redis
CN111177161A (en) * 2019-11-07 2020-05-19 腾讯科技(深圳)有限公司 Data processing method and device, computing equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
崔自峰; 刘竹旺; 闫修林: "Design and Application of Cache Consistency in Distributed Systems" (分布式系统缓存一致性设计与应用), Command Information System and Technology (指挥信息系统与技术), no. 06, 28 December 2015 (2015-12-28) *


Legal Events

Code Title/Description
PB01 Publication
SE01 Entry into force of request for substantive examination