CN114138186B - Caching method and device capable of being dynamically adjusted - Google Patents

Caching method and device capable of being dynamically adjusted

Info

Publication number
CN114138186B
CN114138186B CN202111326347.1A CN202111326347A
Authority
CN
China
Prior art keywords
data
area
cache
data area
hot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111326347.1A
Other languages
Chinese (zh)
Other versions
CN114138186A (en)
Inventor
卢振雨
汪本义
孙彦龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Arcvideo Technology Co ltd
Original Assignee
Hangzhou Arcvideo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Arcvideo Technology Co ltd filed Critical Hangzhou Arcvideo Technology Co ltd
Priority to CN202111326347.1A priority Critical patent/CN114138186B/en
Publication of CN114138186A publication Critical patent/CN114138186A/en
Application granted granted Critical
Publication of CN114138186B publication Critical patent/CN114138186B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656 Data buffering arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0893 Caches characterised by their organisation or structure
    • G06F12/0897 Caches characterised by their organisation or structure with two or more cache hierarchy levels
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604 Improving or facilitating administration, e.g. storage management
    • G06F3/0607 Improving or facilitating administration, e.g. storage management by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638 Organizing or formatting or addressing of data
    • G06F3/0644 Management of space entities, e.g. partitions, extents, pools
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to data caching technology and discloses a dynamically adjustable caching method and device. The method comprises the following steps: data partitioning, in which the cached data is partitioned into three cache areas of adjustable size; partitioned processing of newly loaded data, which is placed into the different cache areas; and cache-area judgment, in which the cached data of the three cache areas is searched and judged. The three cache areas are a hot data area, a warm data area and a cold data area: the hot data area stores the newest and most commonly used data; the warm data area stores commonly used data; the cold data area stores rarely used data. Through the invention's strategy of converting between cache and long-term storage, caching is realized without the hard-disk storage (i.e. long-term storage) being filled up.

Description

Caching method and device capable of being dynamically adjusted
Technical Field
The present invention relates to data caching technology, and in particular, to a dynamically adjustable caching method and apparatus.
Background
During development, a caching mechanism is generally adopted to cache data to some extent and thereby improve access speed. In mobile development in particular, to reduce the network consumption caused by interface requests, a cache-to-local policy is generally adopted to improve the smoothness of interaction. The local caching strategy essentially trades space for speed to improve the user experience, so cache files grow increasingly redundant and form dirty data; if the caching mechanism is abused, storage comes under pressure, which is why most apps on the market provide a "clear cache" function module. However, such blanket deletion defeats the original purpose of the cache design and effectively renders the caching mechanism useless; it brings no fundamental improvement and fails to strike a proper balance between time and space.
In the prior art, data-cache cleaning is performed by the LRU (least recently used) algorithm, which assumes that data accessed recently has a higher probability of being accessed again.
For example, the prior-art patent application number CN202010309879.3, entitled "Caching method and equipment for distributed storage system" and filed 2020-04-20, discloses a caching method and equipment for a distributed storage system, comprising the following steps: if the corresponding file to be read is stored in the cache equipment, the file is read from the cache equipment and the LRU index stack is adjusted; if the cache equipment does not hold the corresponding file, the file is read from the underlying storage system based on the guidance of the file in the request, stored into the cache equipment, and the LRU index stack and the LFU index stack are adjusted respectively.
A further example is the prior-art patent application number CN201810238126.0, entitled "A data caching method and device", filed 2018-03-22.
Disclosure of Invention
Aiming at the problem in the prior art that cleaning the data cache with the LRU algorithm (least recently used: data accessed recently has a higher probability of being accessed again) occupies a large amount of memory, the invention provides a dynamically adjustable caching method and device.
In order to solve the technical problems, the invention is solved by the following technical scheme:
a dynamically adjustable caching method is applied to mobile storage equipment and comprises the following steps of;
partitioning data, namely partitioning cached data into three cache areas with adjustable sizes;
the newly loaded cache data is processed in a partitioned mode, and different cache areas are arranged for the newly loaded cache data;
and judging the buffer area, and judging the buffer area loaded with new data.
Preferably, the three cache areas are respectively a hot data area, a warm data area and a cold data area; the hot data area stores the newest and most commonly used data; the warm data area stores commonly used data; the cold data area stores rarely used data.
Preferably, the data feedback processing unit supports, to the greatest extent, additional developer operations on caching and recycling.
Preferably, the method further comprises judging whether newly added data overflows, the newly added data being hot data; each cache area performs overflow judgment on the hot data: if the hot data area overflows, the overflowed data is directly demoted to the warm data area; if the warm data area overflows, the overflowed data is directly demoted to the cold data area; if the cold data area overflows, the overflowed data becomes recovered data, which is automatically deleted from the cache area and fed back to the developer; the newly entered data is then processed.
Preferably, the size-adjustable cache areas are realized by setting recovery mechanisms with different cache thresholds. The recovery mechanism comprises: inputting cache data, in which the data is cached into a collection; demoting cache data, in which the data is deleted from the collection; and recovering cache data, in which the data is removed from the cache area.
In order to solve the above technical problems, the invention also provides a dynamically adjustable caching device, applied to a mobile storage device, comprising:
a data partitioning unit, which partitions the cached data into three cache areas of adjustable size;
a newly loaded cache data partition processing unit, which places the newly loaded cache data into different cache areas;
and a cache-area judging unit, which searches and judges the cache area into which new data is loaded.
Preferably, the three cache areas are respectively a hot data area, a warm data area and a cold data area; the hot data area stores the newest and most commonly used data; the warm data area stores commonly used data; the cold data area stores rarely used data.
Preferably, the newly loaded data partition processing comprises a data cache processing unit and a data feedback processing unit;
the data caching processing unit caches the newly loaded data into the hot data area; data overflowing from the hot data area is directly demoted to the warm data area; data overflowing from the warm data area is directly demoted to the cold data area; data overflowing from the cold data area becomes recovered data, which is automatically deleted from the cache area and fed back to the developer;
the data feedback processing unit searches for the newly entered data in the hot data area; if the data exists there, the value cached in the hot data area is returned and the hot data area is updated; otherwise the warm data area is searched next, and if the value exists there, it is returned and the warm data area is updated; otherwise the cold data area is searched, and if the value exists there, it is returned and the cold data area is updated.
Preferably, the device further comprises a newly added data overflow judging unit, the newly added data being hot data; each cache area performs overflow judgment on the hot data: if the hot data area overflows, the overflowed data is directly demoted to the warm data area; if the warm data area overflows, the overflowed data is directly demoted to the cold data area; if the cold data area overflows, the overflowed data becomes recovered data, which is automatically deleted from the cache area and fed back to the developer; the newly entered data is then processed.
Preferably, the size-adjustable cache areas comprise recovery mechanism units that set different cache thresholds, the recovery mechanism units comprising: a cache data input unit, which caches data into a collection; a cache data demotion unit, which deletes cache data from the collection; and a cache data recovery unit, which removes cache data from the cache area.
Owing to the adoption of the above technical scheme, the invention achieves notable technical effects:
the invention sets recovery mechanism units with different cache thresholds; recovered data overflows on its own, realizing automatic management and control of the cache data, with no counting module needed to determine which data is old;
the invention's cache data is unboxed before being fetched, and deployment is completed simply by implementing a defined interface (IRecycler).
The IRecycler interface provides methods for caching and for deleting old data, which makes it convenient for the developer to convert between caching and long-term storage: to store certain data long-term, the developer acts in the caching method; when that data becomes old, the recycling method is called back, and the developer can delete the long-term-stored data there. Conversion between cache and long-term storage is thus realized, so the hard-disk storage (i.e. long-term storage) is not filled up uncontrollably.
Drawings
Fig. 1 is a flow chart of embodiment 1 of the present invention.
FIG. 2 is a flow chart of the data caching process of the present invention.
Fig. 3 is a flow chart of the data feedback process of the present invention.
FIG. 4 is a diagram of an exemplary cache of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Example 1
A dynamically adjustable caching method, applied to a mobile storage device, comprises the following steps:
partitioning data, namely partitioning the cached data into three cache areas of adjustable size;
processing the newly loaded cache data by partition, placing the newly loaded cache data into different cache areas;
and judging the cache areas, namely judging the cache area into which new data is loaded.
The three cache areas are respectively a hot data area, a warm data area and a cold data area; the hot data area stores the newest and most commonly used data; the warm data area stores commonly used data; the cold data area stores rarely used data.
The newly loaded data partition processing comprises data caching processing and data feedback processing;
data caching, namely caching the newly loaded data into the hot data area; data overflowing from the hot data area is directly demoted to the warm data area; data overflowing from the warm data area is directly demoted to the cold data area; data overflowing from the cold data area becomes recovered data, which is automatically deleted from the cache area and fed back to the developer;
data feedback processing;
Search for the newly entered data in the hot data area; if it exists there, return the value cached in the hot data area and update the hot data area; otherwise search the warm data area next, and if the value exists there, return it and update the warm data area; otherwise search the cold data area, and if the value exists there, return it and update the cold data area. A callback method is provided during the search so that the developer can perform more detailed follow-up operations (such as persisting cache data), and callback feedback is provided for data overflowing from the cold data area, namely the recovered data.
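The feedback (lookup) path just described can be sketched as follows. This is a hedged illustration, not the patent's implementation: the class and method names (FeedbackLookup, newArea, find) are invented, and each area is assumed to be an access-ordered Java LinkedHashMap so that a hit also refreshes the entry within its area.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class FeedbackLookup {
    // An access-ordered map: get() moves the touched entry to the tail,
    // so a hit refreshes the entry inside its own area, as described above.
    static <K, V> Map<K, V> newArea() {
        return new LinkedHashMap<>(16, 0.75f, true); // accessOrder = true
    }

    // Search the areas in order (hot, warm, cold); return the first hit, or null on a full miss.
    static <K, V> V find(K key, List<Map<K, V>> areas) {
        for (Map<K, V> area : areas) {
            V v = area.get(key);     // a hit also moves the entry to the tail of its area
            if (v != null) return v; // return the value cached in this area
        }
        return null;                 // miss everywhere: the caller loads the data and caches it
    }
}
```

A null return signals a full miss, at which point the caller would load the data and press it into the hot area.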
The method also comprises judging whether newly added data overflows, the newly added data being hot data; each cache area performs overflow judgment on the hot data: if the hot data area overflows, the overflowed data is directly demoted to the warm data area; if the warm data area overflows, the overflowed data is directly demoted to the cold data area; if the cold data area overflows, the overflowed data becomes recovered data, which is automatically deleted from the cache area and fed back to the developer; the newly entered data is then processed. The size-adjustable cache areas are realized by a recovery mechanism with different cache thresholds; the recovery mechanism comprises: inputting cache data, in which the data is cached into a collection; demoting cache data, in which the data is deleted from the collection; and recovering cache data, in which the data is removed from the cache area.
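The overflow cascade above (hot overflow demotes to warm, warm to cold, cold to recovered data) can be sketched with simple deques in which the head holds the oldest entry. TieredCache and its members are invented names for illustration, not the patent's code.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative three-area overflow cascade: new data enters the hot area;
// each overflow demotes the oldest entry one area down, and an overflow of
// the cold area produces recovered data that is handed back to the developer.
class TieredCache<V> {
    private final Deque<V> hot = new ArrayDeque<>();
    private final Deque<V> warm = new ArrayDeque<>();
    private final Deque<V> cold = new ArrayDeque<>();
    private final int hotCap, warmCap, coldCap;
    final List<V> recycled = new ArrayList<>();  // fed back to the developer

    TieredCache(int hotCap, int warmCap, int coldCap) {
        this.hotCap = hotCap;
        this.warmCap = warmCap;
        this.coldCap = coldCap;
    }

    void cache(V value) {
        hot.addLast(value);                                        // newest at the tail
        if (hot.size() > hotCap) warm.addLast(hot.pollFirst());    // hot overflow -> warm
        if (warm.size() > warmCap) cold.addLast(warm.pollFirst()); // warm overflow -> cold
        if (cold.size() > coldCap) recycled.add(cold.pollFirst()); // cold overflow -> recovered
    }
}
```

Pressing seven values into a (2, 2, 2) cache leaves the two newest in the hot area, the next two in the warm area, the next two in the cold area, and hands the oldest value back as recovered data.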
Example 2
On the basis of embodiment 1, a dynamically adjustable caching device, realized by the dynamically adjustable caching method and applied to a mobile storage device, comprises:
a data partitioning unit, which partitions the cached data into three cache areas of adjustable size;
a newly loaded cache data partition processing unit, which places the newly loaded cache data into different cache areas;
and a cache-area judging unit, which judges the cache area into which new data is loaded.
The three cache areas are respectively a hot data area, a warm data area and a cold data area; the hot data area stores the newest and most commonly used data; the warm data area stores commonly used data; the cold data area stores rarely used data.
The newly loaded data partition processing comprises a data cache processing unit and a data feedback processing unit;
The data caching processing unit caches the newly loaded data into the hot data area; data overflowing from the hot data area is directly demoted to the warm data area; data overflowing from the warm data area is directly demoted to the cold data area; data overflowing from the cold data area becomes recovered data, which is automatically deleted from the cache area and fed back to the developer. For voice data, the feedback information includes the network address of the voice, the local path where the voice is saved, the voice size, and the hash value of the voice object.
The data feedback processing unit searches for the newly entered data in the hot data area; if it exists there, the value cached in the hot data area is returned and the hot data area is updated; otherwise the warm data area is searched next, and if the value exists there, it is returned and the warm data area is updated; otherwise the cold data area is searched, and if the value exists there, it is returned and the cold data area is updated. A callback method is provided during the search so that the developer can perform more detailed follow-up operations (such as persisting cache data), and callback feedback is provided for data overflowing from the cold data area, namely the recovered data.
The device also comprises a newly added data overflow judging unit, the newly added data being hot data; each cache area performs overflow judgment on the hot data: if the hot data area overflows, the overflowed data is directly demoted to the warm data area; if the warm data area overflows, the overflowed data is directly demoted to the cold data area; if the cold data area overflows, the overflowed data becomes recovered data, which is automatically deleted from the cache area and fed back to the developer; the newly entered data is then processed.
The size-adjustable cache areas comprise recovery mechanism units that set different cache thresholds, the recovery mechanism units comprising: a cache data input unit, which caches data into a collection; a cache data demotion unit, which deletes cache data from the collection; and a cache data recovery unit, which removes cache data from the cache area.
Example 3
In accordance with the above-described embodiments of the present invention,
Cache hits are attempted on the hot data area (H), the warm data area (W) and the cold data area (C) in that order; when data is hit in the cache, the data of each area is updated synchronously, with the newest data placed at the tail of the queue and the oldest at the head.
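The queue discipline just described (newest at the tail, oldest at the head, eviction from the head) matches the behavior of Java's access-ordered LinkedHashMap, so a capacity-bounded area might be sketched as follows. LruArea is an invented stand-in name, not the patent's class.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A capacity-bounded, access-ordered map: every put() or get() moves the
// touched entry to the tail, so the head is always the least recently used
// entry, and removeEldestEntry() evicts from the head once capacity is exceeded.
class LruArea<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruArea(int capacity) {
        super(16, 0.75f, true);  // accessOrder = true: iteration order is LRU order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;  // drop the head (oldest) entry on overflow
    }
}
```

With a capacity of 2, putting keys 1 and 2, touching 1, then putting 3 evicts key 2, since the get() refreshed key 1 to the tail.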
The data type adopted in this embodiment is integer, for convenience of demonstration, but the invention is not limited to that type. All data types are supported; of course, the memory occupied by the various data types needs to be considered. For example, the file type can be optimized by saving the path string instead.
Depending on the particular business scenario, the developer may define the cache as either a global or a local variable. The entity object can be obtained in two ways:
Mode one: use the provided default recovery mechanism, whose specific implementation can be examined in the code in the annex. Mode two: implement a custom recovery mechanism.
Cache hit and store operations are performed through the cache.find(xxx) method; the IRecycler interface is mainly defined as follows:
/**
 * Enter the cache; by default the latest value is updated into the first-level cache.
 **/
void onCache(@NonNull LruLinkedHashMap<Integer, V> map, int key, @NonNull V value);
/**
 * Demote an element within the cache.
 **/
void onDemotion(@NonNull LruLinkedHashMap<Integer, V> map, int key, @NonNull V value);
/**
 * Recycle cold data.
 **/
void onRecycle(@NonNull V value);
/**
 * First-level cache capacity.
 * @return int capacity of level 1
 **/
default int CACHE_LEVEL_CAPACITY_1() { return 10; }
/**
 * Second-level cache capacity.
 * @return int capacity of level 2
 **/
default int CACHE_LEVEL_CAPACITY_2() { return 20; }
/**
 * Third-level cache capacity.
 * @return int capacity of level 3
 **/
default int CACHE_LEVEL_CAPACITY_3() { return 10; }
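Restated as a compilable sketch, the method fragments above might assemble into the interface and a hypothetical custom recycler below. This is an assumption-laden reconstruction: LruLinkedHashMap is the patent's own (unpublished) class, stood in for here by a plain access-ordered LinkedHashMap; the @NonNull annotations are dropped to keep the sketch dependency-free; and TestRecycler's behavior is illustrative only.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;

// Stand-in for the patent's LruLinkedHashMap (its real source is in the annex, not reproduced here).
class LruLinkedHashMap<K, V> extends LinkedHashMap<K, V> {
    LruLinkedHashMap() { super(16, 0.75f, true); }  // access-ordered, as an LRU map would be
}

// The IRecycler interface reconstructed from the fragments above.
interface IRecycler<V> {
    void onCache(LruLinkedHashMap<Integer, V> map, int key, V value);
    void onDemotion(LruLinkedHashMap<Integer, V> map, int key, V value);
    void onRecycle(V value);
    default int CACHE_LEVEL_CAPACITY_1() { return 10; }
    default int CACHE_LEVEL_CAPACITY_2() { return 20; }
    default int CACHE_LEVEL_CAPACITY_3() { return 10; }
}

// A hypothetical custom recycler: shrinks the first-level area and records
// recycled values instead of discarding them (e.g. for later persistence).
class TestRecycler implements IRecycler<String> {
    final List<String> persisted = new ArrayList<>();

    @Override public void onCache(LruLinkedHashMap<Integer, String> map, int key, String value) {
        map.put(key, value);   // press-in: store the newest value in this level's map
    }
    @Override public void onDemotion(LruLinkedHashMap<Integer, String> map, int key, String value) {
        map.remove(key);       // demotion: drop the entry from this level's map
    }
    @Override public void onRecycle(String value) {
        persisted.add(value);  // cold-data overflow: persist instead of losing the value
    }
    @Override public int CACHE_LEVEL_CAPACITY_1() { return 5; }  // custom first-level size
}
```

Overriding only the capacity methods is enough to resize the areas, which matches the document's point that most scenarios need nothing beyond adjusting the cache sizes.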
Example 4
Based on the above embodiments, this embodiment provides a three-level caching strategy by which the developer can dynamically adjust the cache sizes according to the specific service scenario; since the underlying data structure adopts LinkedHashMap, the time complexity is O(1).
A typical cache implementation may be understood as a one-level cache. After the processing of the invention, the three cache sizes are set as m, n and k respectively; the hit rate and efficiency improvement are compared in Table 1:
table 1 hit ratio comparison table
If m = 10, n = 20 and k = 10, the first-level cache query speed is increased by 75% and the second-level cache query speed by 25%. It can be seen that the size of the first-level cache has a substantial impact on overall query speed. When used in a network download scenario, time-consuming network requests can be skipped entirely, which is a great improvement.
As for the default recovery mechanism, the developer may inherit that class or implement the IRecycler interface to rewrite the recovery mechanism. Generally, the default implementation suits most service scenarios and only the cache sizes need adjusting; the default three-level cache sizes are 10, 20 and 10 respectively (CACHE_LEVEL_CAPACITY_1, CACHE_LEVEL_CAPACITY_2, CACHE_LEVEL_CAPACITY_3).
When data is pressed into the cache, it is stored in the Map collection (onCache); when demoted, it is removed from the Map (onDemotion); when recovered, it is automatically removed from memory and no further operation is performed. Cache operations go through the entry class Cache, whose constructor can be passed a developer-defined custom recovery mechanism (TestRecycler); the default is the internally implemented default recovery mechanism. The cache is searched and stored through the find(xxx) method: if the return value is not null, the data in the cache is returned; otherwise the cache does not hold that data, and it can be further processed in the IRecycler interface implemented by the developer.
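A hedged sketch of the calling pattern just described: find() either returns the cached value or null, and on null the caller loads the data and presses it into the cache. The Cache class here is invented for illustration; the patent's entry class is not reproduced, and a real implementation would search three areas and trigger demotion and recycling.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal illustration of the find-or-load pattern described above.
// A single map stands in for the whole cache to keep the sketch short.
class Cache<K, V> {
    private final Map<K, V> store = new HashMap<>();

    V find(K key) {
        return store.get(key);   // non-null: the value was cached; null: a miss
    }

    void cache(K key, V value) {
        store.put(key, value);   // press the freshly loaded value into the cache
    }
}
```

On a miss the developer loads the data (for example over the network), calls cache(), and can handle later demotion or recycling in the IRecycler callbacks.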

Claims (6)

1. A dynamically adjustable caching method, applied to a mobile storage device, characterized by comprising the following steps:
partitioning data, namely partitioning the cached data into three cache areas of adjustable size;
processing the newly loaded cache data by partition, placing the newly loaded cache data into different cache areas;
judging the cache areas, namely judging the cache area into which new data is loaded; the three cache areas are respectively a hot data area, a warm data area and a cold data area; the hot data area stores the newest and most commonly used data; the warm data area stores commonly used data; the cold data area stores rarely used data; cache-partition threshold adjustment processing, and newly loaded data partition processing comprising data caching processing and data feedback processing;
cache-partition threshold adjustment processing:
the partition sizes of the three cache areas can be flexibly configured at deployment;
data caching processing:
caching the newly loaded data into the hot data area; data overflowing from the hot data area is directly demoted to the warm data area; data overflowing from the warm data area is directly demoted to the cold data area; data overflowing from the cold data area becomes recovered data, which is automatically deleted from the cache area while information is fed back to the developer;
data feedback processing:
searching for the newly entered data in the hot data area; if it exists there, returning the value cached in the hot data area and updating the hot data area; otherwise searching the warm data area next, and if the value exists there, returning it and updating the warm data area; otherwise searching the cold data area, and if the value exists there, returning it and updating the cold data area.
2. The dynamically adjustable caching method of claim 1, further comprising judging whether newly added data overflows, the newly added data being hot data; each cache area performs overflow judgment on the newly added hot data: if the hot data area overflows, the overflowed data is directly demoted to the warm data area; if the warm data area overflows, the overflowed data is directly demoted to the cold data area; if the cold data area overflows, the overflowed data becomes recovered data, which is automatically deleted from the cache area and fed back to the developer; the newly entered data is then processed.
3. The dynamically adjustable caching method of claim 1, wherein the size-adjustable cache areas are realized by recovery mechanisms with different cache thresholds, the recovery mechanism comprising: inputting cache data, in which the data is cached into a collection; demoting cache data, in which the data is deleted from the collection; and recovering cache data, in which the data is removed from the cache area.
4. A dynamically adjustable caching device, applied to a mobile storage device, characterized by comprising:
a data partitioning unit, which partitions the cached data into three cache areas of adjustable size;
a newly loaded cache data partition processing unit, which places the newly loaded cache data into the hot data cache area; if data overflows, it is automatically demoted to the next cache area, namely data overflowing from the hot data area is demoted to the warm data area, data overflowing from the warm data area is demoted to the cold data area, data overflowing from the cold data area is recovered, and the recovered data information is fed back to the developer;
a cache-area judging unit, which searches and judges the cache area into which new data is loaded; the three cache areas are respectively a hot data area, a warm data area and a cold data area; the hot data area stores the newest and most commonly used data; the warm data area stores commonly used data; the cold data area stores rarely used data;
the newly loaded data partition processing comprises a data caching processing unit and a data feedback processing unit;
the data caching processing unit caches the newly loaded data into the hot data area; data overflowing from the hot data area is directly demoted to the warm data area; data overflowing from the warm data area is directly demoted to the cold data area; data overflowing from the cold data area becomes recovered data, which is automatically deleted from the cache area and fed back to the developer;
the data feedback processing unit searches for the newly entered data in the hot data area; if it exists there, the value cached in the hot data area is returned and the hot data area is updated; otherwise the warm data area is searched next, and if the value exists there, it is returned and the warm data area is updated; otherwise the cold data area is searched, and if the value exists there, it is returned and the cold data area is updated.
5. A dynamically adjustable cache apparatus according to claim 4, wherein,
the system also comprises a newly added data overflow judging unit, wherein the newly added data is hot data; each cache area performs overflow judgment on the hot data: if the hot data area overflows, the overflowing data is demoted directly to the warm data area; if the warm data area overflows, the overflowing data is demoted directly to the cold data area; if the cold data area overflows, the overflowing data is reclaimed, automatically deleted from the cache area, and information is fed back to the developer;
this judgment precedes the processing of the newly entered data.
6. A dynamically adjustable cache apparatus according to claim 4, wherein,
the size-adjustable cache areas are provided with different cache thresholds and comprise a recovery mechanism unit, wherein the recovery mechanism unit comprises a cache data input unit for caching data into a set;
a cache data degradation unit for deleting cache data from the set; and a cache data recovery unit for removing the cache data from the cache area.
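The three units of claim 6's recovery mechanism map naturally onto three operations on a keyed set. The sketch below is illustrative only; the class name, the `threshold` parameter, and the method names are assumptions, not terms from the patent:

```python
class AdjustableCacheRegion:
    """Sketch of one size-adjustable cache area with a recovery mechanism.

    Each region carries its own adjustable threshold; the three methods
    correspond to the input, degradation, and recovery units of claim 6.
    """

    def __init__(self, threshold):
        self.threshold = threshold  # adjustable cache threshold for this region
        self.entries = {}           # the "set" holding this region's cached data

    def cache_in(self, key, value):
        """Cache data input unit: place data into the set."""
        self.entries[key] = value

    def degrade(self, key):
        """Cache data degradation unit: delete data from the set and
        return it, so the caller can move it to the next, lower region."""
        return self.entries.pop(key, None)

    def reclaim(self, key):
        """Cache data recovery unit: remove the data from the cache area."""
        self.entries.pop(key, None)

    def overflowed(self):
        """True once the region exceeds its configured threshold."""
        return len(self.entries) > self.threshold
```

Because each region owns its own `threshold`, the hot, warm, and cold areas can be sized independently and re-tuned at runtime, which is what makes the cache "dynamically adjustable".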
CN202111326347.1A 2021-11-10 2021-11-10 Caching method and device capable of being dynamically adjusted Active CN114138186B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111326347.1A CN114138186B (en) 2021-11-10 2021-11-10 Caching method and device capable of being dynamically adjusted

Publications (2)

Publication Number Publication Date
CN114138186A CN114138186A (en) 2022-03-04
CN114138186B true CN114138186B (en) 2024-02-23

Family

ID=80393477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111326347.1A Active CN114138186B (en) 2021-11-10 2021-11-10 Caching method and device capable of being dynamically adjusted

Country Status (1)

Country Link
CN (1) CN114138186B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6631446B1 (en) * 2000-10-26 2003-10-07 International Business Machines Corporation Self-tuning buffer management
CN104145252A (en) * 2012-03-05 2014-11-12 国际商业机器公司 Adaptive cache promotions in a two level caching system
CN109521961A (en) * 2018-11-13 2019-03-26 深圳忆联信息系统有限公司 A kind of method and its system promoting solid state disk read-write performance
CN111159066A (en) * 2020-01-07 2020-05-15 杭州电子科技大学 Dynamically-adjusted cache data management and elimination method
CN111737170A (en) * 2020-05-28 2020-10-02 苏州浪潮智能科技有限公司 Cache data management method, system, terminal and storage medium
CN113268440A (en) * 2021-05-26 2021-08-17 上海哔哩哔哩科技有限公司 Cache elimination method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11693570B2 (en) * 2021-04-29 2023-07-04 EMC IP Holding Company LLC Machine learning to improve caching efficiency in a storage system


Also Published As

Publication number Publication date
CN114138186A (en) 2022-03-04

Similar Documents

Publication Publication Date Title
US5305389A (en) Predictive cache system
CN108763110B (en) Data caching method and device
US8868831B2 (en) Caching data between a database server and a storage system
CN108710639B (en) Ceph-based access optimization method for mass small files
CN103907100B (en) High-speed buffer storage data storage system and the method for storing padding data to it
US20050097278A1 (en) System and method for providing a cost-adaptive cache
KR102437775B1 (en) Page cache device and method for efficient mapping
CN109446117B (en) Design method for page-level flash translation layer of solid state disk
US20120054444A1 (en) Evicting data from a cache via a batch file
CN104915319A (en) System and method of caching information
CN108108089A (en) A kind of picture loading method and device
US11593268B2 (en) Method, electronic device and computer program product for managing cache
CN111737261B (en) LSM-Tree-based compressed log caching method and device
CN101236564A (en) Mass data high performance reading display process
CN107562806B (en) Self-adaptive sensing acceleration method and system of hybrid memory file system
CN107766258B (en) Memory storage method and device and memory query method and device
CN113094392A (en) Data caching method and device
CN112799590B (en) Differentiated caching method for online main storage deduplication
CN114138186B (en) Caching method and device capable of being dynamically adjusted
CN115774699B (en) Database shared dictionary compression method and device, electronic equipment and storage medium
US7254681B2 (en) Cache victim sector tag buffer
CN108156249B (en) Network cache updating method based on approximate Markov chain
CN102902735A (en) Search caching method and system for internet protocol television (IPTV)
CN115712388A (en) Data storage method, device and equipment of solid-state disk and storage medium
CN109582233A (en) A kind of caching method and device of data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant