CN116756190A - Data cache management method, device, terminal equipment and storage medium

Data cache management method, device, terminal equipment and storage medium

Info

Publication number
CN116756190A
Authority
CN
China
Prior art keywords
data
cache
cache database
database
management method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310665168.3A
Other languages
Chinese (zh)
Inventor
刘奇
陈晓
周刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Merchants Bank Co Ltd
Original Assignee
China Merchants Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Merchants Bank Co Ltd filed Critical China Merchants Bank Co Ltd
Priority to CN202310665168.3A
Publication of CN116756190A
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2455 Query execution
    • G06F 16/24552 Database cache management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/22 Indexing; Data structures therefor; Storage structures
    • G06F 16/2282 Tablespace storage structures; Management thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/25 Integrating or interfacing systems involving database management systems
    • G06F 16/252 Integrating or interfacing systems involving database management systems between a Database Management System and a front-end application
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F 16/275 Synchronous replication

Abstract

The application discloses a data cache management method, device, terminal equipment and storage medium, wherein the data cache management method comprises the following steps: when a query request is received, determining the corresponding target interface; detecting whether the target interface exists through a preset admission filter to obtain a detection result; and querying data of a preset multi-level cache database based on the detection result to obtain target data. Based on this scheme, data for high-traffic service scenarios is cached at the granularity of the interface, so high-traffic API interfaces can be cached preferentially; and a multi-level cache database is provided, so each cache level offers different performance and data capacity for data caching. This solves the technical problems of poor data caching performance and low flexibility in high-traffic service scenarios and improves the overall performance and flexibility of the data caching system.

Description

Data cache management method, device, terminal equipment and storage medium
Technical Field
The present application relates to the field of data management technologies, and in particular, to a data cache management method, device, terminal equipment, and storage medium.
Background
Many current scenarios involve instantaneous high traffic. Existing strategies for high-traffic service scenarios generally apply the same single cache structure to all services, which has the following problems:
on the one hand, the request volume of a high-traffic service may exceed the processing capacity of a single cache structure, reducing the cache hit rate and degrading system performance and response time; on the other hand, different services may have different requirements for cache capacity, data consistency, and so on, but a single cache structure makes it difficult to allocate cache resources flexibly according to the characteristics of each service.
Disclosure of Invention
The application mainly aims to provide a data cache management method, device, terminal equipment and storage medium, which aim to solve the technical problems of poor data caching performance and low flexibility in high-traffic service scenarios.
In order to achieve the above object, the present application provides a data cache management method, including:
when a query request is received, determining a corresponding target interface;
detecting whether the target interface exists through a preset admission filter to obtain a detection result;
and querying data of a preset multi-level cache database based on the detection result to obtain target data.
Optionally, the multi-level cache database includes a first-level cache database, a second-level cache database, and a third-level cache database, and the step of querying the data of the preset multi-level cache database based on the detection result includes:
if the target interface does not exist, querying the data of the secondary cache database;
if the data of the secondary cache database does not exist, querying the data of the tertiary cache database;
if the data of the third-level cache database exists, pulling the data of the third-level cache database through the second-level cache database;
if the target interface exists, querying the data of the primary cache database;
and if the data of the primary cache database does not exist, executing the step of querying the data of the secondary cache database.
Optionally, after the step of querying the data of the secondary cache database, the method further includes:
detecting whether the target data exists or not through the admission filter;
and if the target data exist, pushing the target data to the primary cache database through the secondary cache database.
Optionally, the data cache management method further includes the following steps:
Deleting the data of the primary cache database when the changed data is detected;
asynchronously writing the change data into the secondary cache database and the tertiary cache database;
pushing the change data to the first-level cache database through the second-level cache database.
Optionally, the data cache management method further includes the following steps:
respectively acquiring the key number of the secondary cache database and the key number of the tertiary cache database;
when it is detected that the number of keys of the secondary cache database is not equal to the number of keys of the tertiary cache database, pulling data of the tertiary cache database through the secondary cache database, and deleting the data of the primary cache database.
Optionally, the data cache management method further includes the following steps:
synchronizing data in a preset data source to the third-level cache database and the second-level cache database based on a preset timer; and/or
And based on a preset monitoring strategy, carrying out data synchronization to the third-level cache database and the second-level cache database through the data source.
Optionally, before the step of detecting whether the target interface exists through a preset admission filter and obtaining a detection result, the method further includes:
Based on a preset sliding window, counting the request times of a plurality of service interfaces;
sorting the request times to obtain an interface ordered list of the plurality of service interfaces;
determining an admission proportion according to preset local resources and data quantity;
and determining the admission filter based on the admission proportion and the interface ordered list.
The embodiment of the application also provides a data cache management device, which comprises:
the acquisition module is used for determining a corresponding target interface when receiving a query request;
the admission module is used for detecting whether the target interface exists or not through a preset admission filter to obtain a detection result;
and the query module is used for querying the data of the preset multi-level cache database based on the detection result to obtain target data.
The embodiment of the application also provides a terminal device, which comprises a memory, a processor and a data cache management program stored on the memory and capable of running on the processor, wherein the data cache management program realizes the steps of the data cache management method when being executed by the processor.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a data cache management program, and the data cache management program realizes the steps of the data cache management method when being executed by a processor.
The data cache management method, device, terminal equipment and storage medium provided by the embodiments of the application determine the corresponding target interface when a query request is received; detect whether the target interface exists through a preset admission filter to obtain a detection result; and query data of a preset multi-level cache database based on the detection result to obtain target data. Based on this scheme, data for high-traffic service scenarios is cached at the granularity of the interface, so high-traffic API interfaces can be cached preferentially; and a multi-level cache database is provided, so each cache level offers different performance and data capacity for data caching. This solves the technical problems of poor data caching performance and low flexibility in high-traffic service scenarios, improves the overall performance and flexibility of the data caching system, and further reduces the load on the system.
Drawings
FIG. 1 is a schematic diagram of functional modules of a terminal device to which a data cache management device of the present application belongs;
FIG. 2 is a flowchart illustrating a data cache management method according to a first exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of the multi-level cache architecture of the multi-level cache database in the data cache management method of the present application;
FIG. 4 is a flowchart illustrating a second exemplary embodiment of a data cache management method according to the present application;
FIG. 5 is a schematic diagram of data query of the data cache management method of the present application;
FIG. 6 is a flowchart illustrating a third exemplary embodiment of a data cache management method according to the present application;
FIG. 7 is a flowchart illustrating a fourth exemplary embodiment of a data cache management method according to the present application;
FIG. 8 is a schematic diagram of self-healing of cached data according to the data cache management method of the present application;
FIG. 9 is a flowchart illustrating a fifth exemplary embodiment of a data cache management method according to the present application;
FIG. 10 is a diagram illustrating data synchronization of a data cache management method according to the present application;
FIG. 11 is a flowchart illustrating a sixth exemplary embodiment of a data cache management method according to the present application;
FIG. 12 is a flowchart of admission judgment in the data cache management method of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The main solution of the embodiments of the present application is: when a query request is received, determining the corresponding target interface; detecting whether the target interface exists through a preset admission filter to obtain a detection result; and querying data of a preset multi-level cache database based on the detection result to obtain target data. Based on this scheme, data for high-traffic service scenarios is cached at the granularity of the interface, so high-traffic API interfaces can be cached preferentially; and a multi-level cache database is provided, so each cache level offers different performance and data capacity for data caching. This solves the technical problems of poor data caching performance and low flexibility in high-traffic service scenarios, improves the overall performance and flexibility of the data caching system, and further reduces the load on the system.
The embodiments of the application recognize that, in existing data caching strategies, caching all services in the same way has limitations, particularly in high-traffic service scenarios:
1. A single cache architecture with limited performance: different services may have different requirements for cache capacity, expiration time, data consistency, and so on. With a single cache structure, it is difficult to configure the cache individually for different service scenarios, so resource allocation becomes unbalanced and cache resources cannot be used optimally.
2. All services use the cache and nothing can be adjusted dynamically: when all services share the same cache structure, high-traffic services may place significant strain on the cache system, and data of low-traffic services may be crowded out by data of high-traffic services, reducing the cache hit rate.
Therefore, the scheme of the embodiments of the application starts from the practical problem of improving data cache performance and flexibility in high-traffic service scenarios. It combines the adaptive allocation capability of a multi-level cache architecture with the traffic-priority analysis capability of an admission filter to design a traffic-priority-based multi-level cache adaptation method, which solves the technical problems of poor data caching performance and low flexibility in high-traffic service scenarios and improves the overall performance and flexibility of the data caching system.
Specifically, referring to fig. 1, fig. 1 is a schematic functional block diagram of a terminal device to which the data cache management apparatus of the present application belongs. The data cache management apparatus may be a device independent of the terminal device that performs data cache management, carried on the terminal device in the form of hardware or software. The terminal device may be an intelligent mobile terminal with a data processing function, such as a mobile phone or tablet computer, or a fixed terminal device or server with a data processing function.
In this embodiment, the terminal device to which the data cache management apparatus belongs includes at least an output module 110, a processor 120, a memory 130, and a communication module 140.
The memory 130 stores an operating system and a data cache management program. The data cache management apparatus can determine the corresponding target interface when a query request is received; detect whether the target interface exists through a preset admission filter to obtain a detection result; query data of a preset multi-level cache database based on the detection result; and store the obtained target data in the memory 130, together with information such as the multi-level cache database created from the first-level, second-level, and third-level cache databases. The output module 110 may be a display screen or the like. The communication module 140 may include a WIFI module, a mobile communication module, a Bluetooth module, and the like, and communicates with external devices or servers through the communication module 140.
Wherein the data cache management program in the memory 130 when executed by the processor performs the steps of:
when a query request is received, determining a corresponding target interface;
detecting whether the target interface exists through a preset admission filter to obtain a detection result;
and querying data of a preset multi-level cache database based on the detection result to obtain target data.
Further, the data cache management program in the memory 130, when executed by the processor, further performs the steps of:
if the target interface does not exist, querying the data of the secondary cache database;
if the data of the secondary cache database does not exist, querying the data of the tertiary cache database;
if the data of the third-level cache database exists, pulling the data of the third-level cache database through the second-level cache database;
if the target interface exists, querying the data of the primary cache database;
and if the data of the primary cache database does not exist, executing the step of querying the data of the secondary cache database.
Further, the data cache management program in the memory 130, when executed by the processor, further performs the steps of:
Detecting whether the target data exists or not through the admission filter;
and if the target data exist, pushing the target data to the primary cache database through the secondary cache database.
Further, the data cache management program in the memory 130, when executed by the processor, further performs the steps of:
deleting the data of the primary cache database when the changed data is detected;
asynchronously writing the change data into the secondary cache database and the tertiary cache database;
pushing the change data to the first-level cache database through the second-level cache database.
Further, the data cache management program in the memory 130, when executed by the processor, further performs the steps of:
respectively acquiring the key number of the secondary cache database and the key number of the tertiary cache database;
when it is detected that the number of keys of the secondary cache database is not equal to the number of keys of the tertiary cache database, pulling data of the tertiary cache database through the secondary cache database, and deleting the data of the primary cache database.
Further, the data cache management program in the memory 130, when executed by the processor, further performs the steps of:
Synchronizing data in a preset data source to the third-level cache database and the second-level cache database based on a preset timer; and/or
And based on a preset monitoring strategy, carrying out data synchronization to the third-level cache database and the second-level cache database through the data source.
Further, the data cache management program in the memory 130, when executed by the processor, further performs the steps of:
based on a preset sliding window, counting the request times of a plurality of service interfaces;
sorting the request times to obtain an interface ordered list of the plurality of service interfaces;
determining an admission proportion according to preset local resources and data quantity;
and determining the admission filter based on the admission proportion and the interface ordered list.
According to this scheme, when a query request is received, the corresponding target interface is determined; whether the target interface exists is detected through a preset admission filter to obtain a detection result; and data of a preset multi-level cache database is queried based on the detection result to obtain target data. Based on this scheme, data for high-traffic service scenarios is cached at the granularity of the interface, so high-traffic API interfaces can be cached preferentially; and a multi-level cache database is provided, so each cache level offers different performance and data capacity for data caching. This solves the technical problems of poor data caching performance and low flexibility in high-traffic service scenarios, improves the overall performance and flexibility of the data caching system, and further reduces the load on the system.
The method embodiment of the application is proposed based on the above-mentioned terminal equipment architecture but not limited to the above-mentioned architecture. It should be noted that the embodiments of the method according to the present application may be combined with each other to form a complete solution.
Referring to fig. 2, fig. 2 is a flowchart illustrating a data cache management method according to a first exemplary embodiment of the present application. The data cache management method comprises the following steps:
step S210, when a query request is received, a corresponding target interface is determined;
the method of this embodiment may be executed by a data cache management device, or by a data cache management terminal device or server; this embodiment takes the data cache management device as an example, and the device may be integrated on a terminal device with a data processing function, such as a smartphone or tablet computer.
This embodiment starts from the practical problem of improving data cache performance and flexibility in high-traffic service scenarios. It combines the adaptive allocation capability of a multi-level cache architecture with the traffic-priority analysis capability of an admission filter to design a traffic-priority-based multi-level cache adaptation method that manages the data cache, solves the technical problems of poor data caching performance and low flexibility in high-traffic service scenarios, and improves the overall performance and flexibility of the data caching system.
Step S220, detecting whether the target interface exists through a preset admission filter to obtain a detection result;
specifically, the admission filter is configured to quickly determine whether an interface exists in the interface ordered list (API list). The detection result indicates whether the target interface is admitted, that is, whether high-traffic requests can be processed preferentially according to traffic priority. The admission filter can divide traffic into different priorities according to service demand and traffic characteristics; for example, priority may be determined based on access frequency, service importance, data heat, and the like.
Illustratively, when a user performs a data query, a corresponding query request is generated and calls a corresponding interface. From the access requests issued by many users over a period of time, the number of access requests in that period can be obtained for each API, and the APIs can then be ordered by request count through the local cache admission filter, yielding an API list sorted from highest to lowest. The APIs within the top admission ratio N of this list are then stored in a bloom filter, which serves as the local cache admission filter.
Step S230, based on the detection result, inquiring the data of the preset multi-level cache database to obtain target data.
Specifically, the target data is the data requested by the query request. The local cache provides low-latency, high-throughput data access; the second-level cache provides distributed cache storage and management; and the third-level cache provides backup and persistence of data. In a multi-level cache architecture, caches at different levels may be assigned to traffic of different levels according to traffic priority: high-priority traffic may directly access the fastest local cache, medium-priority traffic may access the intermediate distributed cache (e.g., Redis), and low-priority traffic may access the more powerful but relatively slower backup cache (e.g., ES).
Illustratively, in this embodiment a multi-level cache database is used to manage cached data. Fig. 3 shows the multi-level cache architecture of the multi-level cache database in the data cache management method of the present application: MongoDB is the data source, Caffeine serves as the first-level (local) cache, Redis serves as the second-level cache, and Elasticsearch (ES) serves as the third-level cache, acting as a fallback for Redis and ensuring high availability of the data query service.
First-level cache (local cache): Caffeine is a Java-based high-performance cache library used as the client-side local cache. It stores frequently used data in application memory, so reads are very fast. The benefit of local caching is that data can be read directly from memory without network transmission or disk access, giving low latency and high throughput.
Second-level cache (Redis): Redis is a memory-based data storage system widely used as a distributed cache. In the multi-level cache architecture, Redis serves as the second-level cache, meeting the needs for highly concurrent reads and writes and for data sharing that a local cache cannot satisfy. Redis offers fast read and write speeds and rich data-structure support, and can store and manage large amounts of service data, reducing the load on the database.
Third-level cache (ES): Elasticsearch is a distributed search and analysis engine with high scalability and powerful query functionality. In the multi-level cache architecture, ES serves as the third-level cache and as a fallback for Redis. When Redis misses, the system fetches the data from ES and caches it into Redis for subsequent fast access. This three-level design ensures high availability and durability of the data; even if Redis fails or loses data, the data can still be recovered from ES.
Through the multi-level cache architecture, the system can fully exploit the characteristics and advantages of caches at different levels, achieving higher performance and scalability. This hierarchical caching architecture enables the system to maintain efficient data query services under different scenarios and loads.
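For illustration only, the following Java sketch shows one way the three tiers described above could be wired together, assuming Caffeine for the first level and a Jedis client for Redis; the Elasticsearch tier is abstracted behind a hypothetical BackupStore interface rather than a concrete ES client, and the capacity, expiration, and address values are assumptions, not values specified by the application.

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import redis.clients.jedis.Jedis;

import java.time.Duration;

/** Hypothetical facade over the level-3 backup tier (Elasticsearch in the text). */
interface BackupStore {
    String get(String key);
    void put(String key, String value);
    long count();
}

public class TieredCache {
    // Level 1 (local): Caffeine in-process cache, lowest latency, smallest capacity.
    final Cache<String, String> l1 = Caffeine.newBuilder()
            .maximumSize(10_000)                      // illustrative capacity
            .expireAfterWrite(Duration.ofMinutes(5))  // illustrative expiration
            .build();

    // Level 2 (distributed): Redis, shared across nodes, larger capacity.
    final Jedis l2 = new Jedis("localhost", 6379);    // illustrative address

    // Level 3 (backup/persistence): slower but durable; recoverable after Redis loss.
    final BackupStore l3;

    public TieredCache(BackupStore l3) {
        this.l3 = l3;
    }
}
```

Each tier is configured independently, which is what lets different cache levels offer different performance and data capacity.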
According to this scheme, when a query request is received, the corresponding target interface is determined; whether the target interface exists is detected through a preset admission filter to obtain a detection result; and data of a preset multi-level cache database is queried based on the detection result to obtain target data. Based on this scheme, data for high-traffic service scenarios is cached at the granularity of the interface, so high-traffic API interfaces can be cached preferentially; and a multi-level cache database is provided, so each cache level offers different performance and data capacity for data caching. This solves the technical problems of poor data caching performance and low flexibility in high-traffic service scenarios, improves the overall performance and flexibility of the data caching system, and further reduces the load on the system.
Referring to fig. 4, fig. 4 is a flowchart illustrating a second exemplary embodiment of a data cache management method according to the present application. Based on the embodiments shown in fig. 2 and 3, the multi-level cache database includes a first-level cache database, a second-level cache database, and a third-level cache database, and step S230 includes querying, based on the detection result, data of a preset multi-level cache database:
Step S410, if the target interface does not exist, inquiring the data of the secondary cache database;
specifically, the detection result is used to indicate whether an interface exists in an admission filter (bloom filter).
Further, in step S410, if the target interface does not exist, after querying the data of the secondary cache database, the method further includes:
detecting whether the target data exists or not through the admission filter;
and if the target data exist, pushing the target data to the primary cache database through the secondary cache database.
Specifically, the result of the Redis query is judged by the bloom filter, and if the result exists in the bloom filter, the data is indicated to be cached, and the target data can be directly returned. If the target data does not exist in the bloom filter, storing the target data in a local cache, and returning the result to the client.
Step S420, if the data of the secondary cache database does not exist, inquiring the data of the tertiary cache database;
specifically, at query time, it is first determined whether the API to be queried exists in the bloom filter. If not, the Redis database is queried directly. If it exists in the bloom filter, the local cache is queried first; if the data exists in the local cache, the target data is returned; if not, the Redis database is queried, the target data is checked against the bloom filter, and the target data is stored in the local cache.
Step S430, if the data of the third-level cache database exists, pulling the data of the third-level cache database through the second-level cache database;
step S440, if the target interface exists, inquiring the data of the primary cache database;
step S450, if the data of the primary cache database does not exist, executing the step of querying the data of the secondary cache database.
Referring to fig. 5, fig. 5 is a schematic diagram of data query in the data cache management method of the present application. First, the request counts of the current service's APIs within a time window are collected through a sliding-window algorithm and sorted by request count. Next, the API admission ratio N allowed for the local cache is determined according to local resources and data volume. The APIs within the top admission ratio N of the sorted list are stored into a bloom filter, which serves as the local cache admission filter. At query time, it is judged whether the current interface exists in the bloom filter: if not, Redis is queried directly, and if Redis cannot return the data, ES is queried. If the current interface is admitted, the local cache is queried first; if the local cache has no data, Redis is queried. Finally, the Redis target data is checked against the bloom filter, stored in the local cache, and returned.
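A minimal Java sketch of this lookup order follows, extending the TieredCache sketch above with a Guava bloom filter as the admission filter; the field names, method names, and filter parameters are assumptions for illustration, not the application's concrete implementation.

```java
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import java.nio.charset.StandardCharsets;

// Added to the TieredCache sketch above.
BloomFilter<String> admission = BloomFilter.create(
        Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000, 0.01);

public String query(String api, String key) {
    // Interface not admitted: bypass the local tier and go to Redis directly.
    if (!admission.mightContain(api)) {
        return queryL2ThenL3(key);
    }
    String v = l1.getIfPresent(key);   // admitted: local cache first
    if (v != null) {
        return v;
    }
    v = queryL2ThenL3(key);
    if (v != null) {
        l1.put(key, v);                // passed admission: keep a local copy
    }
    return v;
}

private String queryL2ThenL3(String key) {
    String v = l2.get(key);            // level 2: Redis
    if (v != null) {
        return v;
    }
    v = l3.get(key);                   // level 3: backup tier (ES)
    if (v != null) {
        l2.set(key, v);                // pull the backup data up into Redis
    }
    return v;
}
```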
According to this scheme, compared with a single-level cache, a multi-level caching strategy that adapts to traffic characteristics is more flexible, performs better, and preserves data consistency. By managing the data cache with traffic priority at interface granularity, the cache hit rate and concurrency capability of the system can be improved, resolving the Redis I/O bottleneck under instantaneous high traffic.
Referring to fig. 6, fig. 6 is a flowchart illustrating a third exemplary embodiment of a data cache management method according to the present application. Based on the embodiments shown in fig. 2 and 4, the data cache management method further includes the following steps:
step S610, deleting the data of the primary cache database when the changed data is detected;
specifically, when local cache data changes, a write-cache operation is performed: the local cache entry is deleted, the data is written to Redis and asynchronously written to ES, and the data modification in Redis pushes the changed data to the first-level cache (local cache) for update via Redis pub/sub. The data pushed to the local cache is first checked against the bloom filter: if the current interface is not in the bloom filter, the change is ignored; otherwise, the changed data is stored in the local cache and an expiration time is set.
Step S620, asynchronously writing the change data into the secondary cache database and the tertiary cache database;
illustratively, when a data change occurs, a write-cache operation is performed first. This operation deletes the corresponding data in the local cache so that the most up-to-date data will be loaded. The updated data is then written into Redis and asynchronously written into ES, ensuring that the data in Redis is current while the asynchronous ES write provides persistence and backup. Meanwhile, the Redis pub/sub mechanism pushes the change notification to the first-level cache (local cache) for update, achieving real-time data updates and keeping the local cache synchronized with the data source.
Step S630, pushing the change data to the primary cache database through the secondary cache database.
Specifically, before data is pushed to the local cache, it is first determined whether the current interface is in the bloom filter, which quickly judges whether an element exists in the set. If the current interface is not in the bloom filter, the interface is outside the cache admission range and the change operation is ignored. If the current interface exists in the bloom filter, the changed data is saved to the local cache with an appropriate expiration time. Screening pushed data through the bloom filter ensures that only qualifying data enters the local cache, improving the efficiency and timeliness of data updates.
The way the cache is used can be adjusted dynamically according to real-time traffic changes and priority adjustments. For example, when the priority of certain traffic changes, the cache usage policy can be readjusted according to the new priority; or traffic can be monitored in real time against cache hit rate and performance indicators, with parameters such as cache capacity adjusted dynamically.
It should be noted that the configuration details may be adjusted according to the cache system and data source used. Likewise, the bloom filter parameters need to be tuned to actual conditions to ensure accuracy and performance.
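A sketch of this write path, under the same assumptions as the TieredCache sketch above: delete the local entry, write Redis synchronously, write the backup tier asynchronously, and fan the change out over Redis pub/sub. The channel name and message format are illustrative assumptions.

```java
import java.util.concurrent.CompletableFuture;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

static final String CHANNEL = "cache-updates";     // illustrative channel name

public void onDataChanged(String api, String key, String value) {
    l1.invalidate(key);                            // 1. drop the stale local entry
    l2.set(key, value);                            // 2. write the new value to Redis
    CompletableFuture.runAsync(() -> l3.put(key, value)); // 3. async write to the ES tier
    l2.publish(CHANNEL, api + "|" + key + "|" + value);   // 4. notify local caches
}

// Each node runs a subscriber on a dedicated Redis connection
// (a subscribing Jedis connection cannot issue other commands).
public void startChangeListener(Jedis subscriberConn) {
    new Thread(() -> subscriberConn.subscribe(new JedisPubSub() {
        @Override
        public void onMessage(String channel, String message) {
            String[] parts = message.split("\\|", 3);
            String api = parts[0], key = parts[1], value = parts[2];
            if (admission.mightContain(api)) {
                l1.put(key, value);                // admitted: refresh the local copy
            }                                      // not admitted: ignore the change
        }
    }, CHANNEL)).start();
}
```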
According to this scheme, the write-cache operation ensures that the latest data is loaded into the cache, enabling change and update of the local cache; and pushing changed data to the local cache through the Redis pub/sub mechanism keeps the cached data fresh and the multi-level cache consistent.
Referring to fig. 7, fig. 7 is a flowchart illustrating a fourth exemplary embodiment of a data cache management method according to the present application. Based on the embodiments shown in fig. 2 and 4, the data cache management method further includes the following steps:
Step S710, respectively obtaining the number of keys of the secondary cache database and the number of keys of the tertiary cache database;
specifically, referring to fig. 8, fig. 8 is a schematic diagram of self-healing of cached data according to the data cache management method of the present application. If the local cache is changed, the corresponding data in the local cache is deleted directly, and the consistency of the data in the local cache and the data source is ensured.
If the data in the Redis are changed, dynamically monitoring the data change through the watchdog task, comparing whether the key numbers in the Redis and the ES are equal, and if the key numbers are equal, continuing monitoring.
Step S720, when it is detected that the number of keys of the secondary cache database is not equal to the number of keys of the tertiary cache database, pulling the data of the tertiary cache database through the secondary cache database, and deleting the data of the primary cache database.
Specifically, if the numbers of keys in Redis and ES are not equal, a data synchronization mechanism is triggered: data in ES is actively pulled and synchronized into Redis, and the data in the local cache is emptied. If changes trigger data synchronization, a maximum number of synchronizations per day is configured, and the count is reset each early morning. Every day in the early morning, data is synchronized from the data source to ES and then to Redis; the data source is also monitored, and when data changes, the changed data is synchronized incrementally to ES and Redis. In this push-pull mode, Redis pushes the update information to the local cache.
The data change in Redis is dynamically monitored through a watchdog task. The task will compare whether the number of keys in Redis and ES are equal. If the number of keys is equal, indicating that the data in Redis and ES remain consistent, the task continues to snoop changes. If the number of keys is not equal, the data synchronization operation is triggered when the data is inconsistent.
Redis synchronizes update information to the local cache in push mode (pub/sub). When data in Redis changes, the change is pushed to the local caches subscribed to that information, so the local cache stays current.
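A sketch of such a watchdog task, again under the TieredCache assumptions: Redis DBSIZE supplies the level-2 key count, while count() on the backup store and resyncFromBackup() are hypothetical helpers standing in for the ES count query and the pull-based synchronization; the check interval is illustrative.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public void startWatchdog() {
    ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();
    watchdog.scheduleAtFixedRate(() -> {
        long redisKeys = l2.dbSize();  // Redis DBSIZE: number of keys in level 2
        long esKeys = l3.count();      // hypothetical count on the backup tier
        if (redisKeys != esKeys) {
            resyncFromBackup();        // pull the ES data set back into Redis
            l1.invalidateAll();        // empty the local tier: no stale entries survive
        }                              // equal counts: keep listening
    }, 0, 30, TimeUnit.SECONDS);       // illustrative check interval
}

private void resyncFromBackup() {
    // Hypothetical helper: iterate the backup store and rewrite each entry
    // into Redis, subject to the configured per-day synchronization limit.
}
```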
According to this scheme, the self-healing repair task gives the cache system self-healing capability. Periodic full synchronization and incremental synchronization maintain consistency between the cached data and the data source; listening for data changes and triggering synchronization achieves real-time updates; and the combination of push and pull modes ensures multi-level cache consistency and recovery from abnormal situations.
Referring to fig. 9, fig. 9 is a flowchart illustrating a fifth exemplary embodiment of a data cache management method according to the present application. Based on the embodiments shown in fig. 2 and 4, the data cache management method further includes the following steps:
Step S910, based on a preset timer, synchronizing data in a preset data source to the third-level cache database and the second-level cache database;
specifically, referring to fig. 10, fig. 10 is a schematic diagram of data synchronization in the data cache management method of the present application. In the embodiments of the application, synchronization includes both full synchronization and incremental synchronization. The full synchronization strategy may be that, starting at 3 a.m. every day, data is synchronized from the MongoDB data source to ES and then to Redis.
Illustratively, at a fixed point in time (e.g., 3 a.m. each day), all data is completely synchronized from the data source (the MongoDB database) to the target caches (ES and Redis). That is, all data is re-fetched and written into the cache, ensuring the cached data is consistent with the data source. This synchronization approach suits scenarios that need periodic cache refreshes, such as when the data changes significantly every day.
And/or step S920, based on a preset monitoring strategy, performing data synchronization to the third-level cache database and the second-level cache database through the data source.
Specifically, the incremental update policy may be that the data in the MongoDB is dynamically updated by the data change in the ES and Redis through the snoop policy.
Illustratively, incremental updating means that, through the listening policy, changed data is updated to the target caches (ES and Redis) in real time as data in the data source (MongoDB) changes. This avoids the overhead of full synchronization: only the changed data is written to the cache, keeping it current. Incremental updates are typically implemented by subscribing to the database's change events or by using mechanisms such as message queues. This synchronization mode suits scenarios with higher demands on data-change propagation and real-time response.
When the numbers of keys in Redis and ES are found to be unequal, a data synchronization operation is triggered: data is actively pulled from ES and synchronized into Redis to ensure that Redis is consistent with ES. Meanwhile, instead of synchronizing this data into the local cache, the local cache is emptied, which avoids using expired or incorrect local data while the tiers are inconsistent.
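A sketch of both strategies under the same assumptions: the 3 a.m. schedule follows the text, while fullSync() and the change hook onSourceChange() are hypothetical stand-ins for the MongoDB bulk copy and the change-stream listener.

```java
import java.time.Duration;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public void startFullSyncTimer() {
    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    // Delay until tomorrow's 3 a.m., then repeat every 24 hours
    // (illustrative; production code would also handle a start before 3 a.m. today).
    long delayMin = Duration.between(LocalDateTime.now(),
            LocalDate.now().plusDays(1).atTime(3, 0)).toMinutes();
    scheduler.scheduleAtFixedRate(this::fullSync, delayMin,
            TimeUnit.DAYS.toMinutes(1), TimeUnit.MINUTES);
}

private void fullSync() {
    // Hypothetical bulk copy: read everything from the data source (MongoDB)
    // and rewrite it into the backup tier (ES), then into Redis.
}

// Incremental path: invoked by a listener on the data source when a record changes;
// only the changed entry is written through, and the Redis pub/sub push from the
// write-path sketch then refreshes the local caches.
public void onSourceChange(String key, String value) {
    l3.put(key, value);   // backup tier first
    l2.set(key, value);   // then the shared Redis tier
}
```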
According to this scheme, combining full synchronization and incremental updates balances the performance and real-time requirements of data synchronization: full synchronization ensures that all data is synchronized into the cache at a fixed time, providing a consistent data view and an initial loading state, while incremental updates keep the cache synchronized with the data source in real time, providing timely responses and lower synchronization delay.
Referring to fig. 11, fig. 11 is a flowchart illustrating a sixth exemplary embodiment of a data cache management method according to the present application. Based on the embodiment shown in fig. 2, step S220, detecting whether the target interface exists through a preset admission filter, and before obtaining the detection result, further includes:
step S1110, based on a preset sliding window, counting the number of requests of a plurality of service interfaces;
specifically, the sliding-window algorithm counts the number of requests within a time window of fixed length, recording the number of requests per API. As time passes the window slides: new requests are recorded and old requests are discarded. The request frequency of each API can thus be counted in real time and ranked by request count.
Referring to fig. 12, fig. 12 is a flowchart of admission judgment in the data cache management method of the present application. The access counts of the APIs are collected through the time-window algorithm, a certain percentage of APIs are added to the bloom filter through a hash algorithm as the admission condition for the local cache, and the data in the bloom filter is reset in the next time window. A minimal sketch of such a window counter is given below.
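The following Java sketch shows one possible per-API sliding-window counter, with one bucket per second over the last windowSeconds seconds; the class and its parameters are illustrative assumptions rather than the application's concrete implementation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Bucketed sliding-window counter: one bucket per second over the last
 *  windowSeconds seconds. A minimal sketch of the statistics step. */
class SlidingWindowCounter {
    private final int windowSeconds;
    // For each API: buckets[i][0] = epoch second of the bucket, buckets[i][1] = count.
    private final Map<String, long[][]> perApi = new ConcurrentHashMap<>();

    SlidingWindowCounter(int windowSeconds) {
        this.windowSeconds = windowSeconds;
    }

    void record(String api) {
        long now = System.currentTimeMillis() / 1000;
        long[][] b = perApi.computeIfAbsent(api, k -> new long[windowSeconds][2]);
        synchronized (b) {
            long[] slot = b[(int) (now % windowSeconds)];
            if (slot[0] != now) {       // bucket belongs to an older second: recycle it
                slot[0] = now;
                slot[1] = 0;
            }
            slot[1]++;
        }
    }

    long count(String api) {
        long now = System.currentTimeMillis() / 1000;
        long[][] b = perApi.get(api);
        if (b == null) return 0;
        long total = 0;
        synchronized (b) {
            for (long[] slot : b) {
                if (now - slot[0] < windowSeconds) total += slot[1]; // still in window
            }
        }
        return total;
    }
}
```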
Step S1120, sorting the request times to obtain an interface ordered list of the plurality of service interfaces;
specifically, when a user performs a data query, a corresponding interface is called. From the access requests issued by many users over a period of time, the number of access requests in that period can be obtained for each API, and the APIs are then ordered by request count through the local cache admission filter, yielding an API list sorted from highest to lowest.
Step S1130, determining an admission ratio according to the preset local resources and data volume;
specifically, the local cache admission ratio (N) is determined based on local resources and data volume. It represents the proportion of APIs in the overall API list that are allowed into the local cache. For example, if the admission ratio is set to 30%, the top 30% of APIs by request count may be put into the local cache.
Step S1140 determines the admission filter based on the admission ratio and the interface ordered list.
Specifically, APIs are added to the bloom filter as the local cache admission condition: according to the set percentage, APIs are added to the bloom filter using a hash algorithm. An API that has been added to the bloom filter meets the admission condition and may be cached.
It should be noted that after each time window completes, the data in the bloom filter is reset: each time window has its own bloom filter recording the APIs admitted within that window, and resetting it clears the previous window's data in preparation for the next. The length of the time window, the admission percentage, and the bloom filter's configuration parameters are adjusted according to actual requirements and system performance.
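A sketch of this per-window rebuild using a Guava bloom filter; the method itself, the 1% false-positive rate, and the map of per-window request counts are assumptions for illustration.

```java
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

import java.nio.charset.StandardCharsets;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** Rebuilds the admission filter at the end of each time window: rank APIs by
 *  request count, admit the top fraction, and start from a fresh (reset) filter. */
static BloomFilter<String> rebuildAdmissionFilter(Map<String, Long> requestCounts,
                                                  double admissionRatio) {
    List<String> ranked = requestCounts.entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue(Comparator.reverseOrder()))
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());

    int admitted = (int) Math.ceil(ranked.size() * admissionRatio);
    // A fresh filter per window implements the reset: old admissions vanish.
    BloomFilter<String> filter = BloomFilter.create(
            Funnels.stringFunnel(StandardCharsets.UTF_8),
            Math.max(admitted, 1), 0.01);           // assumed 1% false-positive rate
    ranked.stream().limit(admitted).forEach(filter::put);
    return filter;
}
```

Swapping in a newly built filter at each window boundary replaces the previous window's admissions atomically, which matches the reset behavior described above.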
According to this scheme, the access counts of the APIs are collected within each time window; a portion of the APIs is added to the bloom filter according to the set percentage; the APIs in the bloom filter are those that meet the admission condition and may be cached; and after the time window ends, the bloom filter data is reset in preparation for the next window's statistics and admission judgment. Ranking by request count dynamically determines which APIs should be cached. The bloom filter reduces the number of accesses to Redis and improves query efficiency, while limiting the local cache admission ratio avoids the resource waste caused by over-caching.
In addition, an embodiment of the present application further provides a data cache management apparatus, where the data cache management apparatus includes:
the acquisition module is used for determining a corresponding target interface when receiving a query request;
the admission module is used for detecting whether the target interface exists or not through a preset admission filter to obtain a detection result;
and the query module is used for querying the data of the preset multi-level cache database based on the detection result to obtain target data.
For the principle and implementation process of data cache management in this embodiment, please refer to the above embodiments; they are not repeated here.
In addition, the embodiment of the application also provides a terminal device, which comprises a memory, a processor and a data cache management program stored on the memory and capable of running on the processor, wherein the data cache management program realizes the steps of the data cache management method when being executed by the processor.
Since the data cache management program, when executed by the processor, adopts all the technical solutions of all the foregoing embodiments, it provides at least all the beneficial effects brought by those technical solutions, which are not described in detail here.
In addition, the embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a data cache management program, and the data cache management program realizes the steps of the data cache management method when being executed by a processor.
Since the data cache management program, when executed by the processor, adopts all the technical solutions of all the foregoing embodiments, it provides at least all the beneficial effects brought by those technical solutions, which are not described in detail here.
Compared with the prior art, the data cache management method, device, terminal equipment and storage medium provided by the embodiments of the application determine the corresponding target interface when a query request is received; detect whether the target interface exists through a preset admission filter to obtain a detection result; and query data of a preset multi-level cache database based on the detection result to obtain target data. Based on this scheme, data for high-traffic service scenarios is cached at the granularity of the interface, so high-traffic API interfaces can be cached preferentially; and a multi-level cache database is provided, so each cache level offers different performance and data capacity for data caching. This solves the technical problems of poor data caching performance and low flexibility in high-traffic service scenarios, improves the overall performance and flexibility of the data caching system, and further reduces the load on the system.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a controlled terminal, or a network device, etc.) to perform the method of each embodiment of the present application.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the application, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (10)

1. The data cache management method is characterized by comprising the following steps of:
when a query request is received, determining a corresponding target interface;
detecting whether the target interface exists through a preset admission filter to obtain a detection result;
and querying data of a preset multi-level cache database based on the detection result to obtain target data.
2. The data cache management method as claimed in claim 1, wherein the multi-level cache database comprises a first-level cache database, a second-level cache database, and a third-level cache database, and the step of querying the data of the preset multi-level cache database based on the detection result comprises:
if the target interface does not exist, querying the data of the secondary cache database;
if the data of the secondary cache database does not exist, querying the data of the tertiary cache database;
If the data of the third-level cache database exists, pulling the data of the third-level cache database through the second-level cache database;
if the target interface exists, querying the data of the primary cache database;
and if the data of the primary cache database does not exist, executing the step of querying the data of the secondary cache database.
3. The data cache management method as recited in claim 2, further comprising, after the step of querying the data of the secondary cache database:
detecting whether the target data exists or not through the admission filter;
and if the target data exist, pushing the target data to the primary cache database through the secondary cache database.
4. The data cache management method as recited in claim 2, wherein the data cache management method further comprises the steps of:
deleting the data of the primary cache database when the changed data is detected;
asynchronously writing the change data into the secondary cache database and the tertiary cache database;
pushing the change data to the first-level cache database through the second-level cache database.
5. The data cache management method as recited in claim 2, wherein the data cache management method further comprises the steps of:
respectively acquiring the key number of the secondary cache database and the key number of the tertiary cache database;
when it is detected that the number of keys of the secondary cache database is not equal to the number of keys of the tertiary cache database, pulling data of the tertiary cache database through the secondary cache database, and deleting the data of the primary cache database.
6. The data cache management method as recited in claim 2, wherein the data cache management method further comprises the steps of:
synchronizing data in a preset data source to the third-level cache database and the second-level cache database based on a preset timer; and/or
And based on a preset monitoring strategy, carrying out data synchronization to the third-level cache database and the second-level cache database through the data source.
7. The method for managing data cache as recited in claim 1, wherein before the step of detecting whether the target interface exists through a preset admission filter to obtain a detection result, further comprising:
Based on a preset sliding window, counting the request times of a plurality of service interfaces;
sorting the request times to obtain an interface ordered list of the plurality of service interfaces;
determining an admission proportion according to preset local resources and data quantity;
and determining the admission filter based on the admission proportion and the interface ordered list.
8. A data cache management apparatus, characterized in that the data cache management apparatus comprises:
the acquisition module is used for determining a corresponding target interface when receiving a query request;
the admission module is used for detecting whether the target interface exists or not through a preset admission filter to obtain a detection result;
and the query module is used for querying the data of the preset multi-level cache database based on the detection result to obtain target data.
9. A terminal device comprising a memory, a processor and a data cache management program stored on the memory and executable on the processor, the data cache management program when executed by the processor implementing the steps of the data cache management method according to any of claims 1-7.
10. A computer readable storage medium, wherein a data cache management program is stored on the computer readable storage medium, the data cache management program implementing the steps of the data cache management method according to any one of claims 1-7 when executed by a processor.
CN202310665168.3A 2023-06-06 2023-06-06 Data cache management method, device, terminal equipment and storage medium Pending CN116756190A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310665168.3A CN116756190A (en) 2023-06-06 2023-06-06 Data cache management method, device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310665168.3A CN116756190A (en) 2023-06-06 2023-06-06 Data cache management method, device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116756190A true CN116756190A (en) 2023-09-15

Family

ID=87948868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310665168.3A Pending CN116756190A (en) 2023-06-06 2023-06-06 Data cache management method, device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116756190A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117785949A (en) * 2024-02-28 2024-03-29 云南省地矿测绘院有限公司 Data caching method, electronic equipment, storage medium and device
CN117785949B (en) * 2024-02-28 2024-05-10 云南省地矿测绘院有限公司 Data caching method, electronic equipment, storage medium and device


Similar Documents

Publication Publication Date Title
KR100791628B1 (en) Method for active controlling cache in mobile network system, Recording medium and System thereof
CN111464615B (en) Request processing method, device, server and storage medium
US8516114B2 (en) Method and apparatus for content pre-fetching and preparation
CN105630819B (en) A kind of data cached method for refreshing and device
CN108055302B (en) Picture caching processing method and system and server
CN106230997B (en) Resource scheduling method and device
CN108804234B (en) Data storage system and method of operation thereof
EP2541423A1 (en) Replacement policy for resource container
CN110222073B (en) Data query method and related device
CN109446222A (en) A kind of date storage method of Double buffer, device and storage medium
CN111159219B (en) Data management method, device, server and storage medium
CN105159845A (en) Memory reading method
CN110990439A (en) Cache-based quick query method and device, computer equipment and storage medium
CN112084206A (en) Database transaction request processing method, related device and storage medium
WO2023081233A1 (en) Memory cache entry management with pinned cache entries
CN111221469A (en) Method, device and system for synchronizing cache data
CN113031864B (en) Data processing method and device, electronic equipment and storage medium
CN114741335A (en) Cache management method, device, medium and equipment
US11269784B1 (en) System and methods for efficient caching in a distributed environment
CN110413689B (en) Multi-node data synchronization method and device for memory database
CN116756190A (en) Data cache management method, device, terminal equipment and storage medium
CN113885801A (en) Memory data processing method and device
CN117539915B (en) Data processing method and related device
US11954039B2 (en) Caching system and method
US11294853B1 (en) Archiver for data stream service

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination