CN111159232A - Data caching method and system - Google Patents

Data caching method and system

Info

Publication number
CN111159232A
CN111159232A (application CN201911291963.0A)
Authority
CN
China
Prior art keywords
data
partition
cache module
caching
cached
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911291963.0A
Other languages
Chinese (zh)
Inventor
张军
方杰
陆海琛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Supcon Software Co ltd
Zhejiang Supcon Technology Co Ltd
Original Assignee
Zhejiang Supcon Software Co ltd
Zhejiang Supcon Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Supcon Software Co ltd, Zhejiang Supcon Technology Co Ltd filed Critical Zhejiang Supcon Software Co ltd
Priority to CN201911291963.0A priority Critical patent/CN111159232A/en
Publication of CN111159232A publication Critical patent/CN111159232A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0893Caches characterised by their organisation or structure
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1016Performance improvement
    • G06F2212/1021Hit rate improvement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/16General purpose computing application
    • G06F2212/163Server or database system

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention relates to the field of data caching, and in particular to a data caching method and system. The method comprises: partitioning the main cache module by time granularity, each partition being assigned a quota ratio; caching data in the partition of the corresponding time granularity according to the data length; and, when the data cached in any partition reaches its quota ratio, caching the excess data in the secondary cache module. The invention classifies data along the time dimension, so that an otherwise flat data structure is managed hierarchically; the access volume is computed in real time per time granularity, quantifying the use frequency of each time-granularity partition and providing a sound basis for subsequent cache cleanup.

Description

Data caching method and system
Technical Field
The present invention relates to the field of data caching, and in particular, to a data caching method and system.
Background
Production process data is valuable to enterprises, but because of its large volume and strict real-time requirements, a real-time database is often used to store large amounts of historical data. At analysis time it is hard to know in advance which data will be useful, so all data is stored to avoid losing needed information. However, cache capacity is limited: simply enlarging the cache is very costly, and caching all data indiscriminately leads to a low hit rate for the useful data.
Disclosure of Invention
In order to solve the above problems, the present invention provides a data caching method and system.
A data caching method, comprising:
partitioning the main cache module by time granularity, each partition being assigned a quota ratio;
caching data in the partition of the corresponding time granularity according to the data length;
when the data cached in any partition reaches its quota ratio, caching the excess data in the secondary cache module.
Preferably, when a data query is performed according to the query conditions:
if the data is cached in the main cache module, the weight of the data is updated;
if part or all of the data is cached in the secondary cache module, the data cached in the secondary cache module is migrated into the main cache module;
if the data is cached in neither the main cache module nor the secondary cache module, the data is read from the real-time database, its weight is updated, and it is cached in the main cache module or the secondary cache module.
Preferably, if part or all of the data is cached in the secondary cache module, migrating the data cached in the secondary cache module into the main cache module comprises:
determining the partition A of the time granularity corresponding to the data length in the main cache module;
judging whether partition A has enough cache space;
if so, caching the data directly in partition A;
if not, releasing space from partitions used less frequently than partition A, allocating the released space to partition A, and caching the data.
Preferably, releasing space from partitions used less frequently than partition A, allocating the released space to partition A, and caching the data comprises:
deleting, in sequence, the lowest-weight data in the partitions used less frequently than partition A and allocating the freed space to partition A; if at least two pieces of data share the lowest weight, deleting the least recently accessed first; continuing until the queried data can be cached, and then caching it.
Preferably, if the queried data still cannot be cached after all available space of the partitions used less frequently than partition A has been released, the queried data is truncated and the truncated portion is cached.
Preferably, if the data is cached in neither the main cache module nor the secondary cache module, reading the data from the real-time database, updating its weight, and caching it in the main cache module or the secondary cache module comprises:
determining the partition B of the time granularity corresponding to the data length in the main cache module;
judging whether partition B has enough cache space;
if so, caching the data directly in partition B;
if not, judging the use frequency of partition B: if it is greater than or equal to a set threshold, releasing space from partitions used less frequently than partition B, allocating the released space to partition B, and caching the data; if it is less than the threshold, placing the queried data in the secondary cache.
Preferably, if the free space of the secondary cache is insufficient to cache the queried data, cache migration is started and high-weight data in the secondary cache module is migrated to the main cache module.
A data caching system, comprising:
a main cache module, which is partitioned by time granularity, each partition being assigned a quota ratio, and which caches data in the partition of the corresponding time granularity according to the data length;
a secondary cache module, which caches the excess data when the data cached in any partition reaches its quota ratio.
Preferably, the main cache module calculates the weight of each piece of data according to its access frequency and calculates the use frequency of each partition from the weights of the data it holds.
Preferably, when a data query is performed according to the query conditions:
if the data is cached in the main cache module, the weight of the data is updated;
if part or all of the data is cached in the secondary cache module, the data cached in the secondary cache module is migrated into the main cache module;
if the data is cached in neither the main cache module nor the secondary cache module, the data is read from the real-time database, its weight is updated, and it is cached in the main cache module or the secondary cache module.
By using the present invention, the following effects can be achieved:
1. data is classified along the time dimension, so that an otherwise flat data structure is managed hierarchically; the access volume is computed in real time per time granularity, quantifying the use frequency of each time-granularity partition and providing a sound basis for subsequent cache cleanup;
2. the structure of the data is defined, making it convenient to query and manage the cached data;
3. based on the result of a data query, the cache location is chosen according to the weight of the data and the use frequency of the partition, and both are updated, improving the cache hit rate.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a schematic block diagram of a caching system for industrial data according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a method for caching industrial data according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be further described below with reference to the accompanying drawings, but the present invention is not limited to these embodiments.
The basic idea of the invention is to divide the data cache area into a main cache module and a secondary cache module. The main cache module is partitioned by time granularity, each partition is assigned a quota ratio, and data is cached in the partition of the corresponding time granularity according to its length; when the data cached in any partition reaches its quota ratio, the excess data is cached in the secondary cache module.
Based on the above inventive concept, this embodiment proposes a data caching system, as shown in fig. 1, comprising: a main cache module, which is partitioned by time granularity, each partition being assigned a quota ratio, and which caches data in the partition of the corresponding time granularity according to the data length; and a secondary cache module, which caches the excess data when the data cached in any partition reaches its quota ratio.
In this embodiment, the data caching system adopts a two-level main/secondary cache structure. The main cache module is divided into five time granularities (hour, day, month, quarter, and year) according to the data length; cached data is managed in partitions by time granularity, and each time-granularity partition defines attributes such as a request count, a default quota ratio, and a minimum quota ratio.
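The mapping from a query's time span (its "data length") to one of the five partitions can be sketched as follows. The thresholds below are assumptions for illustration: the patent names the granularities but gives no exact cut-offs.

```python
from datetime import timedelta

# Hypothetical span thresholds for the five time-granularity partitions.
GRANULARITIES = [
    (timedelta(hours=1), "hour"),
    (timedelta(days=1), "day"),
    (timedelta(days=31), "month"),
    (timedelta(days=92), "quarter"),
]

def partition_for(span: timedelta) -> str:
    """Return the time-granularity partition for a query span."""
    for limit, name in GRANULARITIES:
        if span <= limit:
            return name
    return "year"
```

For example, a query covering ten days of history would land in the month-level partition under these assumed thresholds.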
It should be noted that the main and secondary cache modules in this embodiment are built as a 64-bit program, which breaks the 4 GB memory limit of a single 32-bit process. Existing hardware, especially memory capacity, can therefore be fully utilized: for the many systems already deployed, more data can be cached simply by adding the relevant hardware, without changing the real-time database system, improving its performance while effectively protecting the user's existing investment.
The benefit of partitioning by time granularity is that data is classified along the time dimension, so that an otherwise flat data structure is managed hierarchically; the access volume is computed in real time per time granularity, quantifying the use frequency of each time-granularity partition and providing a sound basis for subsequent cache cleanup. Each time-granularity partition is associated with a corresponding data area; each data area caches the data of the corresponding bit numbers (tags) and defines a uniform data structure according to the characteristics of that data.
Each time granularity partition is defined as shown in the following table:
[Table rendered as an image in the original: each time-granularity partition (hour, day, month, quarter, year) defines a request count, a default quota ratio, and a minimum quota ratio.]
The "request count" is the total number of times any data within the time granularity has been accessed externally over a period of time; each external call increments it by 1, so a higher count means the data of that time granularity is accessed more frequently, and the count therefore represents the use frequency of the time granularity as a whole. The caching system also periodically resets the request counts to 0 to keep the comparison across time granularities fair and objective. The "default quota ratio" is the percentage of total available memory allotted to the time granularity: assuming the device has 10 GB of available memory in total, the memory available for caching hour-level data is 3.5 GB, and so on. The "minimum quota ratio" is the minimum memory percentage the time granularity may occupy; it is converted to a minimum quota as: minimum quota = total memory × default quota ratio × minimum quota ratio. For example, the minimum available memory at the hour level is 10 GB × 0.35 × 0.2 = 700 MB. The minimum quota guarantees that, even in extreme cases, every time-granularity partition retains some cache, preventing any partition's quota from dropping to 0, which would cause data caching to fail.
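The quota arithmetic above can be sketched as follows. Only the hour-level default ratio (35%) and the 0.2 minimum-quota factor are given in the text; the ratios for the other partitions are placeholders.

```python
GIB = 1 << 30     # bytes; the text simply says "G"
TOTAL = 10 * GIB  # total available cache memory, per the example

# Default quota ratios per partition. Only the hour-level value (0.35)
# appears in the text; the others are made-up placeholders summing to 1.
DEFAULT_RATIO = {"hour": 0.35, "day": 0.25, "month": 0.20,
                 "quarter": 0.12, "year": 0.08}
MIN_RATIO = 0.2  # minimum quota ratio, as in the hour-level example

def quotas(partition: str) -> tuple[float, float]:
    """Return (default quota, minimum quota) in bytes for a partition."""
    default = TOTAL * DEFAULT_RATIO[partition]
    return default, default * MIN_RATIO
```

With these numbers, `quotas("hour")` yields a 3.5 GB default quota and a 700 MB minimum quota, matching the worked example in the text.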
The meaning of each field in the data structure is as follows:
[Table rendered as an image in the original: the data-structure fields are the bit number (tag) name, start time, end time, weight, last access time, and a list of history records each holding a value, timestamp, and quality code.]
In this embodiment, the structure of the data is defined so that cached data can be queried by bit number name and managed by weight and last access time, for example by releasing low-weight data. The size of a piece of data is calculated from the bit number name, start time, end time, weight, last access time, value, timestamp, quality code, and the number of history records.
According to the above definition, the size of a segment of data is calculated by the following formula: number of bytes = SIZE(bit number name) + SIZE(start time) + SIZE(end time) + SIZE(weight) + SIZE(last access time) + (SIZE(value) + SIZE(timestamp) + SIZE(quality code)) × COUNT(history data). When data is queried, its size can therefore be calculated with this formula.
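The formula can be written directly in code. The field widths below are assumptions for illustration; the patent fixes only the shape of the computation, not the byte sizes of the fields.

```python
# Assumed field sizes in bytes (hypothetical; not given in the patent).
FIXED_FIELDS = {"bit_number_name": 32, "start_time": 8, "end_time": 8,
                "weight": 4, "last_access_time": 8}           # 60 bytes
PER_SAMPLE = {"value": 8, "timestamp": 8, "quality_code": 2}  # 18 bytes

def data_size(history_count: int) -> int:
    """bytes = SIZE(fixed fields) + SIZE(per-sample fields) * COUNT(history)."""
    return (sum(FIXED_FIELDS.values())
            + sum(PER_SAMPLE.values()) * history_count)
```

Under these assumed widths, a segment with 100 history records occupies 60 + 18 × 100 = 1860 bytes.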
In this embodiment, the secondary cache module adopts a flat cache structure to simplify logic and reduce complexity.
Correspondingly, this embodiment further provides a data caching method, as shown in fig. 2, comprising the following steps:
S1: partition the main cache module by time granularity, each partition being assigned a quota ratio;
S2: cache data in the partition of the corresponding time granularity according to the data length;
S3: when the data cached in any partition reaches its quota ratio, cache the excess data in the secondary cache module.
When a data query is performed according to the query conditions: if the data is cached in the main cache module, the weight of the data is updated; if part or all of the data is cached in the secondary cache module, the data cached in the secondary cache module is migrated into the main cache module; if the data is cached in neither module, it is read from the real-time database, its weight is updated, and it is cached in the main cache module or the secondary cache module.
If part or all of the data is cached in the secondary cache module, the method for migrating it into the main cache module is as follows: determine the partition A of the time granularity corresponding to the data length in the main cache module; judge whether partition A has enough cache space; if so, cache the data directly in partition A; if not, release space from partitions used less frequently than partition A, allocate the released space to partition A, and cache the data.
Specifically, releasing space from partitions used less frequently than partition A, allocating it to partition A, and caching the data works as follows: the lowest-weight data in the less frequently used partitions is deleted in sequence and the freed space is allocated to partition A; if at least two pieces of data share the lowest weight, the least recently accessed is deleted first. This continues until the queried data can be cached, at which point it is cached.
In an extreme case, if the queried data still cannot be cached after all available space of the partitions used less frequently than partition A has been released, the queried data is truncated and the truncated portion is cached.
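The eviction rule just described (lowest weight first, oldest last access as the tie-break, stopping once enough space is freed) can be sketched as below; the type and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Cached:
    name: str
    weight: int
    last_access: float  # seconds since epoch; smaller = older
    size: int           # bytes

def pick_victims(candidates: list[Cached], needed: int) -> list[Cached]:
    """Choose data to evict from less-used partitions: lowest weight first,
    ties broken by oldest last access, until `needed` bytes are freed."""
    freed, evicted = 0, []
    for item in sorted(candidates, key=lambda d: (d.weight, d.last_access)):
        if freed >= needed:
            break
        evicted.append(item)
        freed += item.size
    return evicted
```

Sorting by the `(weight, last_access)` tuple implements both criteria in one pass: weight dominates, and among equal weights the smaller (older) last-access time comes first.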
If the data is cached in neither the main cache module nor the secondary cache module, it is read from the real-time database, its weight is updated, and it is cached as follows: determine the partition B of the time granularity corresponding to the data length in the main cache module; judge whether partition B has enough cache space; if so, cache the data directly in partition B; if not, judge the use frequency of partition B: if it is greater than or equal to a set threshold, release space from partitions used less frequently than partition B, allocate the released space to partition B, and cache the data; if it is less than the threshold, place the queried data in the secondary cache.
In another case, if the free space of the secondary cache is insufficient to cache the queried data, cache migration is started and high-weight data in the secondary cache module is migrated to the main cache module. The method for migrating data from the secondary cache module to the main cache module has been described above and is not repeated here.
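The placement decision on a cache miss can be summarized as below. The numeric threshold is an assumption; the patent speaks only of "a set threshold" without giving a value.

```python
THRESHOLD = 100  # assumed use-frequency threshold (not specified in the text)

def place_on_miss(partition_use_freq: int, partition_has_room: bool) -> str:
    """Decide where data freshly read from the real-time database goes."""
    if partition_has_room:
        return "primary"              # cache directly in partition B
    if partition_use_freq >= THRESHOLD:
        return "primary-after-evict"  # free less-used partitions first
    return "secondary"                # cold partition: use the secondary cache
```

The design intent is that a hot partition (high use frequency) is worth evicting other partitions for, while data for a cold partition is parked in the flat secondary cache instead.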
In this embodiment, for each query result, the cache location is chosen according to the weight of the data and the use frequency of the partition, and both are then updated; this effectively mitigates the drop in cache hit rate otherwise caused by sporadic and periodic batch operations.
Various modifications or additions may be made to the described embodiments, or alternatives may be employed, by those skilled in the art without departing from the spirit or scope of the invention as defined in the appended claims.

Claims (10)

1. A data caching method, comprising:
partitioning the main cache module by time granularity, each partition being assigned a quota ratio;
caching data in the partition of the corresponding time granularity according to the data length;
when the data cached in any partition reaches its quota ratio, caching the excess data in the secondary cache module.
2. The data caching method according to claim 1, wherein, when a data query is performed according to the query conditions:
if the data is cached in the main cache module, the weight of the data is updated;
if part or all of the data is cached in the secondary cache module, the data cached in the secondary cache module is migrated into the main cache module;
if the data is cached in neither the main cache module nor the secondary cache module, the data is read from the real-time database, its weight is updated, and it is cached in the main cache module or the secondary cache module.
3. The data caching method according to claim 2, wherein, if part or all of the data is cached in the secondary cache module, migrating the data cached in the secondary cache module into the main cache module comprises:
determining the partition A of the time granularity corresponding to the data length in the main cache module;
judging whether partition A has enough cache space;
if so, caching the data directly in partition A;
if not, releasing space from partitions used less frequently than partition A, allocating the released space to partition A, and caching the data.
4. The data caching method according to claim 3, wherein releasing space from partitions used less frequently than partition A, allocating the released space to partition A, and caching the data comprises:
deleting, in sequence, the lowest-weight data in the partitions used less frequently than partition A and allocating the freed space to partition A; if at least two pieces of data share the lowest weight, deleting the least recently accessed first; continuing until the queried data can be cached, and then caching it.
5. The data caching method according to claim 4, wherein releasing space from partitions used less frequently than partition A, allocating the released space to partition A, and caching the data further comprises:
if the queried data still cannot be cached after all available space of the partitions used less frequently than partition A has been released, truncating the queried data and caching the truncated portion.
6. The data caching method according to claim 2, wherein, if the data is cached in neither the main cache module nor the secondary cache module, reading the data from the real-time database, updating its weight, and caching it in the main cache module or the secondary cache module comprises:
determining the partition B of the time granularity corresponding to the data length in the main cache module;
judging whether partition B has enough cache space;
if so, caching the data directly in partition B;
if not, judging the use frequency of partition B: if it is greater than or equal to a set threshold, releasing space from partitions used less frequently than partition B, allocating the released space to partition B, and caching the data; if it is less than the threshold, placing the queried data in the secondary cache.
7. The data caching method according to claim 6, wherein, if the free space of the secondary cache is insufficient to cache the queried data, cache migration is started and high-weight data in the secondary cache module is migrated to the main cache module.
8. A data caching system, comprising:
a main cache module, which is partitioned by time granularity, each partition being assigned a quota ratio, and which caches data in the partition of the corresponding time granularity according to the data length;
a secondary cache module, which caches the excess data when the data cached in any partition reaches its quota ratio.
9. The data caching system according to claim 8, wherein the main cache module calculates the weight of each piece of data according to its access frequency and calculates the use frequency of each partition from the weights of the data it holds.
10. The data caching system according to claim 8, wherein, when a data query is performed according to the query conditions:
if the data is cached in the main cache module, the weight of the data is updated;
if part or all of the data is cached in the secondary cache module, the data cached in the secondary cache module is migrated into the main cache module;
if the data is cached in neither the main cache module nor the secondary cache module, the data is read from the real-time database, its weight is updated, and it is cached in the main cache module or the secondary cache module.
CN201911291963.0A 2019-12-16 2019-12-16 Data caching method and system Pending CN111159232A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911291963.0A CN111159232A (en) 2019-12-16 2019-12-16 Data caching method and system

Publications (1)

Publication Number Publication Date
CN111159232A true CN111159232A (en) 2020-05-15

Family

ID=70557269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911291963.0A Pending CN111159232A (en) 2019-12-16 2019-12-16 Data caching method and system

Country Status (1)

Country Link
CN (1) CN111159232A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101320353A (en) * 2008-07-18 2008-12-10 四川长虹电器股份有限公司 Design method of embedded type browser caching
JP2013174997A (en) * 2012-02-24 2013-09-05 Mitsubishi Electric Corp Cache control device and cache control method
CN104731864A (en) * 2015-02-26 2015-06-24 国家计算机网络与信息安全管理中心 Data storage method for mass unstructured data
CN104834607A (en) * 2015-05-19 2015-08-12 华中科技大学 Method for improving distributed cache hit rate and reducing solid state disk wear
CN105279163A (en) * 2014-06-16 2016-01-27 Tcl集团股份有限公司 Buffer memory data update and storage method and system
CN106407191A (en) * 2015-07-27 2017-02-15 中国移动通信集团公司 Data processing method and server
CN106776043A (en) * 2017-01-06 2017-05-31 郑州云海信息技术有限公司 A kind of is the method and its device of client distribution caching quota based on file
CN110019361A (en) * 2017-10-30 2019-07-16 北京国双科技有限公司 A kind of caching method and device of data

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113760950A (en) * 2021-03-15 2021-12-07 北京京东振世信息技术有限公司 Index data query method and device, electronic equipment and storage medium
CN113760950B (en) * 2021-03-15 2023-09-05 北京京东振世信息技术有限公司 Index data query method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
TWI684099B (en) Profiling cache replacement
US7284096B2 (en) Systems and methods for data caching
US7096321B2 (en) Method and system for a cache replacement technique with adaptive skipping
JP4445160B2 (en) EVENT MEASUREMENT DEVICE AND METHOD, EVENT MEASUREMENT PROGRAM, COMPUTER-READABLE RECORDING MEDIUM CONTAINING THE PROGRAM, AND PROCESSOR SYSTEM
US20130091331A1 (en) Methods, apparatus, and articles of manufacture to manage memory
WO2019102189A1 (en) Multi-tier cache placement mechanism
US8819074B2 (en) Replacement policy for resource container
US20160103765A1 (en) Apparatus, systems, and methods for providing a memory efficient cache
US20240061789A1 (en) Methods, apparatuses, and electronic devices for evicting memory block in cache
CN109117088B (en) Data processing method and system
CN115168247B (en) Method for dynamically sharing memory space in parallel processor and corresponding processor
CN115757203B (en) Access policy management method and device, processor and computing equipment
CN111159232A (en) Data caching method and system
US20080276045A1 (en) Apparatus and Method for Dynamic Cache Management
JP2017162194A (en) Data management program, data management device, and data management method
CN115080459A (en) Cache management method and device and computer readable storage medium
CN117009389A (en) Data caching method, device, electronic equipment and readable storage medium
CN110825732A (en) Data query method and device, computer equipment and readable storage medium
JP3301359B2 (en) List management system, method and storage medium
CN110569261B (en) Method and device for updating resources stored in cache region
Banerjee et al. A New Proposed Hybrid Page Replacement Algorithm (HPRA) in Real Time Systems.
CN112445794A (en) Caching method of big data system
CN113282585B (en) Report calculation method, device, equipment and medium
EP3876104B1 (en) Method for evicting data from memory
CN116483739B (en) KV pair quick writing architecture based on hash calculation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200515