CN113420052A - Multi-level distributed cache system and method - Google Patents


Info

Publication number
CN113420052A
CN113420052A (application CN202110772457.4A)
Authority
CN
China
Prior art keywords
data
cache
redis
target data
query
Prior art date
Legal status
Granted
Application number
CN202110772457.4A
Other languages
Chinese (zh)
Other versions
CN113420052B (en)
Inventor
汪瀛寰
沈忱
席尧磊
Current Assignee
Shanghai Pudong Development Bank Co Ltd
Original Assignee
Shanghai Pudong Development Bank Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Pudong Development Bank Co Ltd
Priority to CN202110772457.4A
Publication of CN113420052A
Application granted
Publication of CN113420052B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24: Querying
    • G06F16/245: Query processing
    • G06F16/2455: Query execution
    • G06F16/24552: Database cache management
    • G06F16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the application discloses a multi-level distributed cache system. After receiving a data query request, the local cache either returns the target data or forwards the request to the Redis query cache. The Redis query cache receives the request and either returns the target data and stores it in the local cache, or forwards the request to the Redis load cache. The Redis load cache, which preloads data to be cached from the database according to a cache configuration table, either returns the target data and caches it in both the Redis query cache and the local cache, or forwards the request to the database. The database, upon receiving the request, either returns the target data or feeds back that the target data does not exist, and the result is then cached in the Redis query cache and the local cache. The scheme improves the concurrency performance of the server, speeds up responses to data requests, optimizes network performance, and improves the user experience.

Description

Multi-level distributed cache system and method
Technical Field
The embodiment of the application relates to the technical field of data analysis, in particular to a multi-level distributed cache system and a method.
Background
With the rapid development and popularization of the mobile internet and intelligent devices, the data-processing load on internet-facing servers keeps growing, while users' expectations for request response time become ever stricter. With network latency optimization and hardware performance improvements approaching their limits, caching has become one of the main means of improving the concurrency performance and response speed of a server.
At present, there are two mainstream ways to implement a distributed cache system: one based on Memcached and one based on Redis. Because Redis supports multiple data types, data persistence, and high scalability, a Redis-based distributed cache system has become the common choice, but this approach has two problems: 1. when data is added, deleted, or modified, data consistency between the Redis load cache and the database must be maintained; 2. during data requests, a burst of data access requests within a short time may cause cache penetration, cache breakdown, or cache avalanche, so that the instantaneous pressure on the database becomes too high and the database may even go down.
Disclosure of Invention
The embodiment of the application provides a multi-level distributed cache system and method, which handle data query requests through cache levels step by step, reduce the number of data query requests that reach the database directly, improve data query speed, and ensure stable operation of the cache system.
In a first aspect, an embodiment of the present application provides a multi-level distributed cache system, where the system includes: a local cache, a remote cache, and a database, wherein the remote cache comprises a Redis load cache and a Redis query cache;
the local cache is used for returning the target data if the target data exists in the local cache when receiving the data query request, and sending the data query request to the Redis query cache if the target data does not exist in the local cache;
the Redis query cache is used for receiving the data query request and determining a historical query result of the data query request; if the historical query result of the data query request is that target data do not exist in the database, returning the recorded result that the target data do not exist in the database; if the historical query result matched with the data query request exists, returning target data, and storing the target data to a local cache; if no historical query result matched with the data query request exists, sending the data query request to a Redis load cache;
the Redis load cache is used for pre-loading data to be cached from the database according to a cache configuration table, performing a target data query based on a received data query request, and, if the target data is found, returning the target data, caching the target data in the Redis query cache, and storing the target data in the local cache, or, if the target data does not exist, sending the data query request to the database for querying;
the database is used for querying the target data according to the received data query request, and, if the target data is found, returning the target data, caching the target data in the Redis query cache, and storing the target data in the local cache, or, if the target data does not exist, feeding back that the target data does not exist in the database and caching the feedback content in the Redis query cache.
In a second aspect, an embodiment of the present application provides a multi-level distributed caching method, where the method includes:
when a data query request is received, if target data exist in a local cache, returning the target data, and if the target data do not exist in the local cache, sending the data query request to a Redis query cache;
receiving the data query request, and determining a historical query result of the data query request; if the historical query result of the data query request is that target data do not exist in the database, returning the recorded result that the target data do not exist in the database; if the historical query result matched with the data query request exists, returning target data, and storing the target data to a local cache; if no historical query result matched with the data query request exists, sending the data query request to a Redis load cache;
pre-loading data to be cached from the database according to a cache configuration table, performing a target data query based on the received data query request, and, if the target data is found, returning the target data, caching the target data in the Redis query cache, and storing the target data in the local cache, or, if the target data does not exist, sending the data query request to the database for querying;
and performing target data query according to the received data query request, if the target data is queried, returning the target data, caching the target data into the Redis query cache, storing the target data into a local cache, and if the target data does not exist, feeding back that the target data does not exist in the database, and caching the feedback content into the Redis query cache.
An embodiment of the present application provides a multi-level distributed cache system, where the system includes: a local cache, a remote cache, and a database, wherein the remote cache comprises a Redis load cache and a Redis query cache;
the local cache is used for returning the target data if the target data exists in the local cache when a data query request is received, and sending the data query request to the Redis query cache if the target data does not exist in the local cache; the Redis query cache is used for receiving the data query request and determining a historical query result of the data query request: if the historical query result of the data query request is that the target data does not exist in the database, returning the recorded result that the target data does not exist in the database; if a historical query result matching the data query request exists, returning the target data and storing the target data in the local cache; and if no historical query result matching the data query request exists, sending the data query request to the Redis load cache; the Redis load cache is used for pre-loading data to be cached from the database according to a cache configuration table, performing a target data query based on the received data query request, and, if the target data is found, returning the target data, caching the target data in the Redis query cache, and storing the target data in the local cache, or, if the target data does not exist, sending the data query request to the database for querying; the database is used for querying the target data according to the received data query request, and, if the target data is found, returning the target data, caching the target data in the Redis query cache, and storing the target data in the local cache, or, if the target data does not exist, feeding back that the target data does not exist in the database and caching the feedback content in the Redis query cache.
The multi-level distributed cache system improves the concurrency performance of the server, speeds up responses to data requests, optimizes network performance, and improves the user's network experience.
Drawings
Fig. 1 is a structural block diagram of a multi-level distributed cache system according to Embodiment 1 of the present application;
Fig. 2 is a diagram illustrating the overall architecture of the distributed cache according to Embodiment 1 of the present application;
Fig. 3 is a flowchart of a multi-level distributed caching method according to Embodiment 2 of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Embodiment 1
Fig. 1 is a structural block diagram of a multi-level distributed cache system according to Embodiment 1 of the present application. The embodiment is applicable to scenarios in which a network system receives a large number of operation requests within a short time.
As shown in Fig. 1, the multi-level distributed cache system provided in the embodiment of the present application includes:
the system comprises a local cache 110, a remote cache 120 and a database 130, wherein the remote cache 120 comprises a Redis load cache 121 and a Redis query cache 122;
the local cache 110 is configured to, when receiving a data query request, return the target data if the target data exists in the local cache 110, and send the data query request to the Redis query cache 122 if the target data does not exist in the local cache 110;
the Redis query cache 122 is configured to receive the data query request and determine a historical query result of the data query request; if the historical query result of the data query request is that target data does not exist in the database 130, returning the recorded result that the target data does not exist in the database 130; if the historical query result matched with the data query request exists, returning the target data, and storing the target data to the local cache 110; if no historical query result matched with the data query request exists, sending the data query request to a Redis load cache 121;
the Redis load cache 121 is configured to pre-load data to be cached from the database 130 according to a cache configuration table, perform a target data query based on a received data query request, and, if the target data is found, return the target data, cache the target data in the Redis query cache 122, and store the target data in the local cache 110, or, if the target data does not exist, send the data query request to the database 130 for querying;
the database 130 is configured to perform the target data query according to the received data query request, and, if the target data is found, return the target data, cache the target data in the Redis query cache 122, and store the target data in the local cache 110, or, if the target data does not exist, feed back that the target data does not exist in the database and cache the feedback content in the Redis query cache 122.
The received data query request is issued by a client. For example, when a web page is opened on a computer, the client sends a data request to the server; the server finds the page data in a cache or the database and displays the web page on the interface, thereby responding to the client's data query request.
Further, a cache is a buffer for data exchange: a temporary place for storing frequently used data. When a user queries data, the processor first looks for the data in the cache and, if it is found, returns it directly. If it is not found, the data is looked up in the database.
Further, the local cache 110 and the remote cache 120 are distinguished by where they are stored. For example, the local cache 110 may be stored on the local machine, while the remote cache 120 may be stored on other servers.
In addition to responding to received data query requests, the local cache 110 may also receive target data returned by the Redis query cache 122, the Redis load cache 121, and the database 130. Moreover, in this scheme a validity period may be set for the target data cached in the local cache 110, for example 30 seconds; this setting guarantees query performance for database tables with low requirements on strong consistency.
Further, caches have different levels, such as a first-level cache and a second-level cache. The levels differ in how quickly they return data; it will be appreciated that the first-level cache returns data faster than the second-level cache.
In this embodiment, the local cache 110 serves as the first-level cache and may optionally be implemented with Caffeine, a high-performance Java caching library. The remote cache 120 serves as the second-level cache and is implemented with Redis. Redis is essentially a non-relational in-memory database with an internal key-value store. A key-value pair consists of a key and a value; for example, in "firstName": "Brett", "firstName" is the key and "Brett" is the value. The value types Redis supports include string, list, set, zset, and hash (a hash type similar to a map in Java).
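To make the first-level cache concrete, the following sketch shows how a Caffeine-based local cache with the 30-second validity period mentioned above might be built. It is a minimal illustration, assuming Caffeine is used as described; the class name, key/value types, and maximum size are illustrative assumptions rather than part of the patent.

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.time.Duration;

// Minimal sketch of the first-level (local) cache, assuming Caffeine as described above.
// The 30-second validity period follows the example given in the text.
public class LocalCache {
    private final Cache<String, String> cache = Caffeine.newBuilder()
            .maximumSize(10_000)                       // bound on the local cache size (assumed value)
            .expireAfterWrite(Duration.ofSeconds(30))  // validity period from the example above
            .build();

    public String get(String key) {
        return cache.getIfPresent(key);  // null means "not in the local cache"
    }

    public void put(String key, String value) {
        cache.put(key, value);
    }
}
```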
Further, the database 130 is a repository that organizes, stores, and manages data according to a data structure; it is an organized, sharable, and uniformly managed collection of large amounts of data stored on a computer over the long term. Databases can be divided into relational databases, such as MySQL and SQL Server, and non-relational databases, such as MongoDB, Redis, and Memcached. This embodiment does not limit the type of the database 130.
It can be understood that, when a client sends a data query request, if the target data is not stored in any of the local cache 110, the Redis query cache 122, and the Redis load cache 121, the data query request is sent to the database 130. If the database 130 stores the target data, returning the target data, caching the target data in the Redis query cache 122, and storing the target data in the local cache 110; if the database 130 does not store the target data, returning that the target data does not exist in the database, and caching the feedback result into the Redis query cache 122.
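The fall-through order just described (local cache, then Redis query cache, then Redis load cache, then database) can be summarized with the following hedged sketch. It reuses the LocalCache sketch above; the other interfaces and the NOT_FOUND sentinel for the recorded "does not exist in the database" result are hypothetical names introduced only for illustration.

```java
// Hedged sketch of the level-by-level lookup described above. The NOT_FOUND sentinel stands
// for the recorded "target data does not exist in the database" result; RedisQueryCache,
// RedisLoadCache, and Database are hypothetical interfaces for illustration.
public final class TieredLookup {
    static final String NOT_FOUND = "__NOT_FOUND__";

    private final LocalCache local;
    private final RedisQueryCache queryCache;
    private final RedisLoadCache loadCache;
    private final Database db;

    TieredLookup(LocalCache local, RedisQueryCache queryCache, RedisLoadCache loadCache, Database db) {
        this.local = local; this.queryCache = queryCache; this.loadCache = loadCache; this.db = db;
    }

    public String query(String key) {
        String value = local.get(key);                 // 1. local (first-level) cache
        if (value != null) return value;

        value = queryCache.get(key);                   // 2. Redis query cache (historical results)
        if (NOT_FOUND.equals(value)) return null;      //    recorded "no such data in the database"
        if (value != null) { local.put(key, value); return value; }

        value = loadCache.get(key);                    // 3. Redis load cache (preloaded tables)
        if (value != null) { queryCache.put(key, value); local.put(key, value); return value; }

        value = db.query(key);                         // 4. database
        if (value != null) { queryCache.put(key, value); local.put(key, value); return value; }
        queryCache.put(key, NOT_FOUND);                //    null caching to avoid cache penetration
        return null;
    }
}

interface RedisQueryCache { String get(String key); void put(String key, String value); }
interface RedisLoadCache  { String get(String key); }
interface Database        { String query(String key); }
```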
Further, the target data is the data requested by the query; it may be, for example, parameter data in a banking system.
Further, the Redis query cache 122 caches results returned by the Redis load cache 121 and the database 130. What it stores are historical query results, i.e. the results obtained when earlier users requested the target data. A historical query result may be that the target data does not exist in the database 130, in which case the Redis query cache 122 returns that the target data does not exist in the database 130; or it may be target data returned by the Redis load cache 121 or the database 130, in which case the Redis query cache 122 returns the target data and stores it in the local cache 110. If no query result for the target data exists, the data query request is sent to the Redis load cache 121.
Further, the Redis load cache 121 loads data to be cached from the database 130 based on the cache configuration table. The cache configuration table contains the table information of frequently accessed database tables and other configuration related to the Redis load cache 121, such as the caching mode, validity period, and initialization time; this data needs to be loaded from the database 130 into the Redis load cache 121. The data to be cached is the data to be loaded from the database 130 into the Redis load cache 121.
In this embodiment, optionally, the cache configuration table includes: table name, table primary key, and table index;
the table name and the table primary key are used for determining a key and a unique target data record in the Redis load cache 121;
the table name and the table index are used to determine a key in the Redis load cache 121 and at least one data record satisfying an index condition.
It can be understood that data is stored in the database 130 in the form of tables, and the keys in the Redis load cache 121 are determined by the table name, the table primary key, and the table index values in the cache configuration. For example, suppose the table name is a currency parameter table, the table primary key is the currency name, and the table index is the region to which a currency belongs. Then a primary key in the Redis load cache 121 is "currency parameter-renminbi", and its value is the data record in the currency parameter table whose currency name is renminbi; an index key in the Redis load cache 121 is "currency parameter-Asia", and its value contains the primary keys belonging to the Asian region, such as "currency parameter-renminbi", "currency parameter-yen", and "currency parameter-Korean won", from which the data records of all Asian currencies in the currency parameter table can be found. The primary key is a key in the Redis load cache 121 determined by the table name and the table primary key, and the index key is a key in the Redis load cache 121 determined by the table name and the table index.
In this embodiment, the required target data can be located quickly by configuring the table name, table primary key, and table index in the cache configuration table.
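As a hedged illustration of how the table name, table primary key, and table index might translate into keys in the Redis load cache, the sketch below follows the currency-parameter example above; the "tableName:value" naming scheme and the helper interface are assumptions, not the patent's exact key format.

```java
import java.util.List;
import java.util.Map;

// Hedged sketch of key construction from the cache configuration table, following the
// currency-parameter example above. The "tableName:value" naming scheme is an assumption.
public final class CacheKeys {
    // Primary key: table name + primary-key value -> one unique data record.
    public static String primaryKey(String tableName, String primaryKeyValue) {
        return tableName + ":" + primaryKeyValue;             // e.g. "currency_param:renminbi"
    }

    // Index key: table name + index value -> the primary keys of all records matching the index.
    public static String indexKey(String tableName, String indexValue) {
        return tableName + ":idx:" + indexValue;              // e.g. "currency_param:idx:Asia"
    }

    // Resolving an index key: look up the member primary keys, then fetch each record.
    public static Map<String, String> recordsByIndex(RedisLoadStore store, String tableName, String indexValue) {
        List<String> primaryKeys = store.members(indexKey(tableName, indexValue));
        return store.multiGet(primaryKeys);                   // e.g. the renminbi, yen, and Korean won records
    }
}

interface RedisLoadStore {
    List<String> members(String indexKey);            // set members stored under the index key
    Map<String, String> multiGet(List<String> keys);  // record lookups by primary key
}
```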
In this embodiment, optionally, the cache configuration table further includes: a library to which the table belongs;
the table belonging library is used for representing the database 130 to which the table to be cached belongs.
It can be understood that what is described in the cache configuration table is configuration information of a database table that needs to be loaded from the database 130 into the Redis load cache 121, and when data is loaded, the data needs to be obtained from the database 130 where the data is located.
In this embodiment, optionally, the cache configuration table further includes: initializing time;
the initialization time is used to determine the execution time of the timed loading task of the Redis loading cache 121.
It is understood that the Redis load cache 121 is timed to load data from the database 130 according to the cache configuration table.
Further, the initialization time may be 6 am each day, or at regular intervals, such as every other hour from 6 am. The initialization time may be fixed or may be set according to actual conditions.
In this embodiment, by configuring the cache configuration table with the library to which each table belongs and the initialization time, the server can quickly locate the target data where it is stored and preload it into the Redis load cache 121. This is the initialization process of the Redis load cache 121, for example preloading popular data at a fixed time every day.
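A minimal sketch of such a timed loading task is shown below, assuming a plain ScheduledExecutorService drives the reload of each table listed in the cache configuration table; the configuration record and the loader interface are hypothetical names for illustration.

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hedged sketch of the timed loading task: at the configured initialization time (simplified here
// to an initial delay plus a fixed period), each table listed in the cache configuration table is
// reloaded from its owning database into the Redis load cache. All types are assumptions.
public final class TimedLoadTaskManager {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void start(List<CacheTableConfig> configTable, TableLoader loader) {
        for (CacheTableConfig config : configTable) {
            scheduler.scheduleAtFixedRate(
                    () -> loader.loadIntoRedisLoadCache(config),  // read from the owning database, write to Redis
                    config.initialDelayMinutes(),                 // derived from the configured initialization time
                    config.reloadPeriodMinutes(),                 // e.g. reload every 60 minutes
                    TimeUnit.MINUTES);
        }
    }
}

// One row of the cache configuration table (table name, owning database, timing fields, ...).
record CacheTableConfig(String tableName, String owningDatabase,
                        long initialDelayMinutes, long reloadPeriodMinutes) {}

interface TableLoader {
    void loadIntoRedisLoadCache(CacheTableConfig config);
}
```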
In this embodiment, optionally, the database 130 is configured to, when a data change event is received, synchronize the data change event to the Redis load cache 121;
the Redis load cache 121 is configured to determine, according to the data change event, changed data in the Redis load cache 121, and delete the changed data;
the database 130 is further configured to update the data of the database 130 if the data change event submission is received;
the Redis load cache 121 is further configured to perform data synchronization update of the Redis load cache 121 according to the submission result of the data change event.
The data change event is an event in which data is changed. For example, the remaining stock of a certain mobile phone on a certain e-commerce platform is 120, and if a user places an order for the mobile phone at this time, the stock number of the mobile phone is about to change, i.e. a data change event is generated. It can be understood that, after receiving the data change event, the database 130 synchronizes the event to the Redis load cache 121, so that the Redis load cache 121 updates the data in time according to the data change event. Illustratively, when the database 130 receives that the data a is about to change, the database 130 synchronizes the event to the Redis load cache 121, and the Redis load cache 121 deletes the data a stored therein.
Further, when the database 130 receives a data change event, the data may not have been changed yet. Following the example in the preceding paragraph, once the user places an order the inventory amount is about to change but has not yet changed; it only changes after the user places the order and pays successfully. In this embodiment, the point at which the data actually changes is referred to as the data change event submission; at that time the database 130 updates the corresponding data, and the Redis load cache 121 then updates its corresponding internal data from the database 130.
The present embodiment may ensure that the cache remains consistent with the data of database 130 after a data change event occurs.
In this embodiment, optionally, the database 130 is configured to not perform data updating if the received data change event is not submitted;
the Redis load cache 121 is further configured to restore the deleted changed data according to the uncommitted result of the data change event.
It can be understood that, if the data is not changed, the database 130 does not update the corresponding data, and the Redis load cache 121 reloads the deleted data from the recycle bin or the database; this restore operation refers to reloading the deleted data.
The embodiment can ensure that the cache is consistent with the data of the database when the data change event is not submitted due to time-out and the like.
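The consistency flow described in the preceding paragraphs (delete the affected cache entries on a data change event, update the database only on commit and resynchronize the cache, and restore the deleted entries if the change is never committed) can be sketched as follows; all interfaces are assumptions introduced only for illustration.

```java
// Hedged sketch of the consistency flow between the database and the Redis load cache: delete
// the affected cache entries first, update the database only on commit and resynchronize the
// cache, and restore the deleted entries if the change is never committed.
public final class DataChangeHandler {

    private final RedisLoadCacheStore loadCache;
    private final DatabaseStore database;

    DataChangeHandler(RedisLoadCacheStore loadCache, DatabaseStore database) {
        this.loadCache = loadCache;
        this.database = database;
    }

    // Step 1: a data change event is received, so the soon-to-be-stale cache entry is deleted.
    public void onDataChangeEvent(String key) {
        loadCache.delete(key);
    }

    // Step 2a: the change is committed, so the database is updated and the new value is synced back.
    public void onDataChangeCommitted(String key, String newValue) {
        database.update(key, newValue);
        loadCache.put(key, newValue);
    }

    // Step 2b: the change is not committed (e.g. timeout), so the database is left untouched and
    // the deleted entry is restored from the database, keeping cache and database consistent.
    public void onDataChangeNotCommitted(String key) {
        String current = database.query(key);
        if (current != null) {
            loadCache.put(key, current);
        }
    }
}

interface RedisLoadCacheStore { void delete(String key); void put(String key, String value); }
interface DatabaseStore       { void update(String key, String value); String query(String key); }
```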
In this embodiment, optionally, the system further includes: loading a task management module at fixed time;
the timed loading task management module is configured to start a timed loading task thread, and perform initialization loading on the data to be cached in the database 130 according to the initialization time.
Because data in the cache has a limited lifetime and is deleted once its validity period in the cache expires, the data to be cached in the database needs to be reloaded into the cache in time. In addition, some popular data is also loaded from the database into the cache on a timed basis.
Fig. 2 is a diagram of the overall architecture of the distributed cache provided in this embodiment. As shown in Fig. 2, the local cache is implemented with Caffeine as the first-level cache, and the remote cache is implemented with a Redis cluster as the second-level cache, where the remote cache includes a Redis load cache and a Redis query cache. When an external request queries data, the first-level cache, i.e. the local cache, is accessed first; if data meeting the conditions is cached locally, it is returned directly, which avoids the extra network I/O overhead of querying the Redis cluster and improves query efficiency. When the first-level cache returns no result, the Redis query cache is accessed first, then the Redis load cache when no result is found there, and finally the database. In addition, this embodiment uses Aspect-Oriented Programming (AOP) to provide a fast and transparent way of using the cache: a local cache application programming interface and a remote cache application programming interface encapsulate the basic operations of Caffeine and Redis respectively, and Structured Query Language (SQL) statements are intercepted via AOP aspects so that the cache is used quickly and transparently, improving query efficiency and reducing database pressure.
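The AOP-based transparent cache usage could look roughly like the following Spring AOP sketch, in which an aspect intercepts DAO query methods, answers them from the tiered cache when possible, and otherwise lets the SQL run and caches the result. The Spring and AspectJ annotations are real, but the pointcut expression, cache-key derivation, and helper interfaces are assumptions for illustration only.

```java
import java.util.Arrays;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

// Hedged sketch of the "fast and transparent" cache usage via AOP mentioned above: an aspect
// intercepts query methods, serves them from the tiered cache when possible, and otherwise
// lets the original SQL execute and caches the result. The pointcut, the cache-key scheme,
// and the TieredCacheClient/QueryCacheWriter types are assumptions for illustration.
@Aspect
@Component
public class QueryCacheAspect {

    private final TieredCacheClient tieredCache;  // local cache -> Redis query cache -> Redis load cache
    private final QueryCacheWriter cacheWriter;   // writes results back into the query/local caches

    public QueryCacheAspect(TieredCacheClient tieredCache, QueryCacheWriter cacheWriter) {
        this.tieredCache = tieredCache;
        this.cacheWriter = cacheWriter;
    }

    // Hypothetical pointcut: every select method of the DAO layer.
    @Around("execution(* com.example.dao..*.select*(..))")
    public Object aroundQuery(ProceedingJoinPoint joinPoint) throws Throwable {
        String cacheKey = joinPoint.getSignature().toShortString()
                + ":" + Arrays.toString(joinPoint.getArgs());

        Object cached = tieredCache.get(cacheKey);
        if (cached != null) {
            return cached;                        // served from cache, the SQL never runs
        }
        Object result = joinPoint.proceed();      // fall through to the intercepted SQL
        cacheWriter.write(cacheKey, result);      // cache the result for later requests
        return result;
    }
}

interface TieredCacheClient { Object get(String key); }
interface QueryCacheWriter  { void write(String key, Object value); }
```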
This embodiment provides a multi-level cache system consisting of a local cache and a remote cache. The system comprises a local cache, a remote cache, and a database, wherein the remote cache comprises a Redis load cache and a Redis query cache. The local cache is used for returning the target data if the target data exists in the local cache when a data query request is received, and sending the data query request to the Redis query cache if the target data does not exist in the local cache. The Redis query cache is used for receiving the data query request and determining a historical query result of the data query request: if the historical query result of the data query request is that the target data does not exist in the database, it returns the recorded result that the target data does not exist in the database; if a historical query result matching the data query request exists, it returns the target data and stores the target data in the local cache; if no historical query result matching the data query request exists, it sends the data query request to the Redis load cache. The Redis load cache is used for pre-loading data to be cached from the database according to a cache configuration table and performing a target data query based on the received data query request: if the target data is found, it returns the target data, caches the target data in the Redis query cache, and stores the target data in the local cache; if the target data does not exist, it sends the data query request to the database for querying. The database is used for querying the target data according to the received data query request: if the target data is found, it returns the target data, caches the target data in the Redis query cache, and stores the target data in the local cache; if the target data does not exist, it feeds back that the target data does not exist in the database and caches the feedback content in the Redis query cache.
The cache system provided by this embodiment improves the concurrency performance of the server, speeds up responses to data requests, optimizes network performance, and improves the user experience: it adapts to a cluster environment, performs better than a single Redis load cache, and better relieves database pressure; when data is added to, deleted from, or modified in the database, the related cache data is deleted first, the database is then updated, and finally the modified data is synchronized to the cache according to the commit state of the transaction, achieving eventual consistency between the Redis load cache and the database; query results are cached in the Redis query cache, and keys not found in the database are cached as null values, which solves the cache penetration problem; a mutual exclusion lock is added to keys in the Redis load cache, which solves the cache breakdown problem; and a cache configuration table is provided so that different cache loading times and expiration times can be configured for each cached database table, which solves the cache avalanche problem.
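As a hedged illustration of the mutual exclusion lock and null caching mentioned in the list above, the sketch below lets only the request that wins a Redis mutex (the SET NX PX pattern) reload an expired hot key, and caches a null marker for keys the database does not contain. The wrapper interfaces and the timeout values are assumptions, not a specific Redis client's API.

```java
// Hedged sketch of cache-breakdown protection plus null caching: when a hot key expires, only
// the request that wins a Redis mutex reloads it from the database; "not found" is cached as a
// null marker to prevent cache penetration. RedisClient and DatabaseReader are hypothetical.
public final class BreakdownProtectedLoader {
    private static final String NOT_FOUND = "__NOT_FOUND__";

    private final RedisClient redis;
    private final DatabaseReader database;

    BreakdownProtectedLoader(RedisClient redis, DatabaseReader database) {
        this.redis = redis;
        this.database = database;
    }

    public String load(String key) throws InterruptedException {
        for (int attempt = 0; attempt < 50; attempt++) {      // bounded retries while another request reloads
            String cached = redis.get(key);
            if (NOT_FOUND.equals(cached)) return null;        // null-cached: the database has no such key
            if (cached != null) return cached;

            String lockKey = "lock:" + key;
            if (redis.setIfAbsent(lockKey, "1", 3_000)) {     // mutex via SET NX PX, 3-second lease (assumed)
                try {
                    String value = database.query(key);
                    if (value != null) {
                        redis.set(key, value, 600_000);       // normal expiration (assumed 10 minutes)
                    } else {
                        redis.set(key, NOT_FOUND, 60_000);    // null caching with a short expiration
                    }
                    return value;
                } finally {
                    redis.delete(lockKey);
                }
            }
            Thread.sleep(50);                                 // lost the race: wait briefly, then re-check
        }
        return database.query(key);                           // fallback if the lock holder is slow
    }
}

interface RedisClient {
    String get(String key);
    void set(String key, String value, long ttlMillis);
    boolean setIfAbsent(String key, String value, long ttlMillis);
    void delete(String key);
}

interface DatabaseReader { String query(String key); }
```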
Embodiment 2
Fig. 3 is a flowchart of a multi-level distributed caching method according to Embodiment 2 of the present application. The method may be applied to scenarios in which a network system receives a large number of operation requests within a short time, and may be performed by the multi-level distributed cache system provided in the embodiments of the present application, which includes a local cache, a remote cache, and a database, where the remote cache includes a Redis load cache and a Redis query cache.
As shown in fig. 3, the multi-level distributed caching method provided in the embodiment of the present application may include the following steps:
s310, when a data query request is received, if target data exist in the local cache, returning the target data, and if the target data do not exist in the local cache, sending the data query request to a Redis query cache.
The data query request is sent by the client, and the local cache returns the target data after receiving the data query request.
S320, receiving the data query request and determining a historical query result of the data query request; if the historical query result of the data query request is that target data do not exist in the database, returning the recorded result that the target data do not exist in the database; if the historical query result matched with the data query request exists, returning target data, and storing the target data to a local cache; and if no historical query result matched with the data query request exists, sending the data query request to a Redis load cache.
Wherein, the data query request is sent by the local cache, and the Redis query cache receives the data query request. It can be understood that, if there is no target data in the local cache, the local cache sends a data query request to the Redis query cache.
S330, data to be cached is preloaded from the database according to the cache configuration table, and a target data query is performed based on the received data query request; if the target data is found, the target data is returned, cached in the Redis query cache, and stored in the local cache; if the target data does not exist, the data query request is sent to the database for querying.
The Redis load cache loads data to be cached in the database in advance according to the cache configuration table and receives a data query request sent by the Redis query cache.
S340, a target data query is performed according to the received data query request; if the target data is found, the target data is returned, cached in the Redis query cache, and stored in the local cache; if the target data does not exist, it is fed back that the target data does not exist in the database, and the feedback content is cached in the Redis query cache.
When a client sends a data query request, if target data are not stored in the local cache, the Redis query cache and the Redis loading cache, the data query request is sent to the database. If the database stores the target data, returning the target data, loading the target data into a Redis query cache, and storing the target data into a local cache; if the target data are not stored in the database, returning that the target data do not exist in the database, and caching the feedback result into a Redis query cache.
In this embodiment, optionally, the method further includes:
receiving a data change event, and synchronizing the data change event to the Redis load cache;
determining changed data in a Redis loading cache according to the data change event, and deleting the changed data;
if the data change event submission is received, updating the data of the database;
and according to the submission result of the data change event, performing data synchronization updating of the Redis load cache.
The data change event is sent by the client, the database receives the data change event, modifies and updates the data according to the data change event, and synchronizes the data change event to the Redis load cache, so that the Redis load cache and the data of the database are kept consistent.
The embodiment can ensure that the cache is consistent with the data of the database after the data change event occurs.
In this embodiment, optionally, the method further includes:
if the received data change event is not submitted, the data of the database is not updated;
and, according to the uncommitted result of the data change event, restoring the deleted changed data.
The embodiment can ensure that the cache is consistent with the data of the database when the data change event is not submitted due to time-out and the like.
This embodiment provides a multi-level distributed caching method, which includes: when a data query request is received, returning the target data if the target data exists in the local cache, and sending the data query request to the Redis query cache if the target data does not exist in the local cache; receiving the data query request and determining a historical query result of the data query request: if the historical query result of the data query request is that the target data does not exist in the database, returning the recorded result that the target data does not exist in the database; if a historical query result matching the data query request exists, returning the target data and storing the target data in the local cache; if no historical query result matching the data query request exists, sending the data query request to the Redis load cache; pre-loading data to be cached from the database according to a cache configuration table, performing a target data query based on the received data query request, and, if the target data is found, returning the target data, caching the target data in the Redis query cache, and storing the target data in the local cache, or, if the target data does not exist, sending the data query request to the database for querying; and performing a target data query according to the received data query request, and, if the target data is found, returning the target data, caching the target data in the Redis query cache, and storing the target data in the local cache, or, if the target data does not exist, feeding back that the target data does not exist in the database and caching the feedback content in the Redis query cache.
The caching method provided by this embodiment improves the concurrency performance of the server, speeds up responses to data requests, optimizes network performance, and improves the user experience: it adapts to a cluster environment, performs better than a single Redis load cache, and better relieves database pressure; data is preloaded into the Redis load cache by timed loading tasks; when data is added to, deleted from, or modified in the database, the related cache data is deleted first, the database is then updated, and the modified data is synchronized to the cache according to the commit state of the transaction, maintaining data consistency between the Redis load cache and the database; query results are cached in the Redis query cache, and keys not found in the database are cached as null values, which solves the cache penetration problem; a mutual exclusion lock is added to keys in the Redis load cache, which solves the cache breakdown problem; and a cache configuration table is provided so that different cache loading times and expiration times can be configured for each cached database table, which solves the cache avalanche problem.
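A minimal sketch of the avalanche mitigation mentioned above is adding random jitter to each table's configured expiration time so that entries loaded together do not expire together; the 20% jitter range below is an assumption, not a value stated in the patent.

```java
import java.util.concurrent.ThreadLocalRandom;

// Hedged sketch of cache-avalanche mitigation: each table gets its own base expiration from the
// cache configuration table, and a random jitter (assumed up to 20%) is added so that large
// groups of keys never expire at the same moment.
public final class ExpirationPolicy {

    public static long ttlWithJitter(long baseTtlMillis) {
        long jitter = ThreadLocalRandom.current().nextLong(baseTtlMillis / 5 + 1);
        return baseTtlMillis + jitter;
    }
}
```

A caller would then write, for example, redis.set(key, value, ExpirationPolicy.ttlWithJitter(tableTtlMillis)), where tableTtlMillis is the expiration time configured for that table in the cache configuration table.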
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A multi-level distributed caching system, the system comprising: a local cache, a remote cache, and a database, wherein the remote cache comprises a Redis load cache and a Redis query cache;
the local cache is used for returning the target data if the target data exists in the local cache when receiving the data query request, and sending the data query request to the Redis query cache if the target data does not exist in the local cache;
the Redis query cache is used for receiving the data query request and determining a historical query result of the data query request; if the historical query result of the data query request is that target data do not exist in the database, returning the recorded result that the target data do not exist in the database; if the historical query result matched with the data query request exists, returning target data, and storing the target data to a local cache; if no historical query result matched with the data query request exists, sending the data query request to a Redis load cache;
the Redis load cache is used for pre-loading data to be cached from the database according to a cache configuration table, performing a target data query based on a received data query request, and, if the target data is found, returning the target data, caching the target data in the Redis query cache, and storing the target data in the local cache, or, if the target data does not exist, sending the data query request to the database for querying;
the database is used for querying the target data according to the received data query request, and, if the target data is found, returning the target data, caching the target data in the Redis query cache, and storing the target data in the local cache, or, if the target data does not exist, feeding back that the target data does not exist in the database and caching the feedback content in the Redis query cache.
2. The system of claim 1, wherein the cache configuration table comprises: table name, table primary key, and table index;
the table name and the table main key are used for determining a key and a unique target data record in a Redis loading cache;
the table name and the table index are used for determining a key in a Redis load cache and at least one data record meeting an index condition.
3. The system of claim 2, wherein the cache configuration table further comprises: a library to which the table belongs;
and the table belonging library is used for representing a database to which the table to be cached belongs.
4. The system of claim 2, wherein the cache configuration table further comprises: initializing time;
the initialization time is used for determining the execution time of the timed loading task of the Redis loading cache.
5. The system of claim 1, wherein:
the database is used for synchronizing the data change event to the Redis load cache when receiving the data change event;
the Redis load cache is used for determining the changed data in the Redis load cache according to the data change event and deleting the changed data;
the database is also used for updating the data of the database if the data change event submission is received;
and the Redis load cache is also used for carrying out data synchronization updating of the Redis load cache according to the submission result of the data change event.
6. The system of claim 5, wherein:
the database is used for not updating data if the received data change event is not submitted;
and the Redis load cache is also used for restoring the deleted changed data according to the uncommitted result of the data change event.
7. The system of claim 5, further comprising: loading a task management module at fixed time;
and the timed loading task management module is used for starting a timed loading task thread and carrying out initialization loading on the data to be cached in the database according to the initialization time.
8. A method for multi-level distributed caching, the method being performed by a multi-level distributed caching system, the multi-level distributed caching system comprising: a local cache, a remote cache, and a database, wherein the remote cache comprises a Redis load cache and a Redis query cache; the method comprises the following steps:
when the local cache receives a data query request, if target data exist in the local cache, returning the target data, and if the target data do not exist in the local cache, sending the data query request to a Redis query cache;
the Redis query cache receives the data query request and determines a historical query result of the data query request; if the historical query result of the data query request is that target data do not exist in the database, returning the recorded result that the target data do not exist in the database; if the historical query result matched with the data query request exists, returning target data, and storing the target data to a local cache; if no historical query result matched with the data query request exists, sending the data query request to a Redis load cache;
the Redis load cache pre-loads data to be cached from the database according to a cache configuration table and performs a target data query based on the received data query request; if the target data is found, it returns the target data, caches the target data in the Redis query cache, and stores the target data in the local cache; if the target data does not exist, it sends the data query request to the database for querying;
and the database queries the target data according to the received data query request; if the target data is found, it returns the target data, caches the target data in the Redis query cache, and stores the target data in the local cache; if the target data does not exist, it feeds back that the target data does not exist in the database and caches the feedback content in the Redis query cache.
9. The method of claim 8, further comprising:
receiving a data change event, and synchronizing the data change event to the Redis load cache;
determining changed data in a Redis load cache according to the data change event, and deleting the data related to the change;
if the data change event submission is received, updating the data of the database;
and according to the submission result of the data change event, performing data synchronization updating of the Redis load cache.
10. The method of claim 8, further comprising:
if the received data change event is not submitted, the data of the database is not updated;
and, according to the uncommitted result of the data change event, restoring the deleted change-related data.
CN202110772457.4A, filed 2021-07-08 (priority date 2021-07-08), Multi-level distributed cache system and method, status: Active, granted as CN113420052B (en)

Priority Applications (1)

Application Number: CN202110772457.4A; Priority Date: 2021-07-08; Filing Date: 2021-07-08; Title: Multi-level distributed cache system and method (granted as CN113420052B)

Applications Claiming Priority (1)

Application Number: CN202110772457.4A; Priority Date: 2021-07-08; Filing Date: 2021-07-08; Title: Multi-level distributed cache system and method (granted as CN113420052B)

Publications (2)

Publication Number Publication Date
CN113420052A: 2021-09-21
CN113420052B: 2023-02-17

Family

ID=77720517

Family Applications (1)

Application Number: CN202110772457.4A; Title: Multi-level distributed cache system and method; Status: Active; Granted as: CN113420052B

Country Status (1)

Country Link
CN (1) CN113420052B (en)



Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101251861A (en) * 2008-03-18 2008-08-27 北京锐安科技有限公司 Method for loading and inquiring magnanimity data
CN102117309A (en) * 2010-01-06 2011-07-06 卓望数码技术(深圳)有限公司 Data caching system and data query method
CN103853727A (en) * 2012-11-29 2014-06-11 深圳中兴力维技术有限公司 Method and system for improving large data volume query performance
US20160371355A1 (en) * 2015-06-19 2016-12-22 Nuodb, Inc. Techniques for resource description framework modeling within distributed database systems
CN108111325A (en) * 2016-11-24 2018-06-01 北京金山云网络技术有限公司 A kind of resource allocation methods and device
CN108205561A (en) * 2016-12-19 2018-06-26 北京国双科技有限公司 data query system, method and device
CN110069419A (en) * 2018-09-04 2019-07-30 中国平安人寿保险股份有限公司 Multilevel cache system and its access control method, equipment and storage medium
CN109669960A (en) * 2018-12-25 2019-04-23 钛马信息网络技术有限公司 The system and method for caching snowslide is avoided by multi-level buffer in micro services
CN112559560A (en) * 2019-09-10 2021-03-26 北京京东振世信息技术有限公司 Metadata reading method and device, metadata updating method and device, and storage device
CN112035528A (en) * 2020-09-11 2020-12-04 中国银行股份有限公司 Data query method and device
CN111930780A (en) * 2020-10-12 2020-11-13 上海冰鉴信息科技有限公司 Data query method and system

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822657A (en) * 2021-11-24 2021-12-21 太平金融科技服务(上海)有限公司深圳分公司 Service supervision method and device, computer equipment and storage medium
CN113822657B (en) * 2021-11-24 2022-04-01 太平金融科技服务(上海)有限公司深圳分公司 Service supervision method and device, computer equipment and storage medium
CN113900840A (en) * 2021-12-08 2022-01-07 浙江新华移动传媒股份有限公司 Distributed transaction final consistency processing method and device
WO2023226682A1 (en) * 2022-05-25 2023-11-30 京东方科技集团股份有限公司 Data processing method and apparatus, server, and storage medium
CN117914944A (en) * 2024-03-20 2024-04-19 暗物智能科技(广州)有限公司 Distributed three-level caching method and device based on Internet of things

Also Published As

Publication number Publication date
CN113420052B (en) 2023-02-17

Similar Documents

Publication Publication Date Title
CN113420052B (en) Multi-level distributed cache system and method
US11119997B2 (en) Lock-free hash indexing
US9767131B2 (en) Hierarchical tablespace space management
EP3117348B1 (en) Systems and methods to optimize multi-version support in indexes
US8380702B2 (en) Loading an index with minimal effect on availability of applications using the corresponding table
US8401994B2 (en) Distributed consistent grid of in-memory database caches
AU2002303900B2 (en) Consistent read in a distributed database environment
US10437688B2 (en) Enhancing consistent read performance for in-memory databases
US20160335310A1 (en) Direct-connect functionality in a distributed database grid
CN102779132B (en) Data updating method, system and database server
EP2565806A1 (en) Multi-row transactions
CN104679898A (en) Big data access method
CN101930472A (en) Parallel query method for distributed database
CN102999522A (en) Data storage method and device
US10528590B2 (en) Optimizing a query with extrema function using in-memory data summaries on the storage server
US11567934B2 (en) Consistent client-side caching for fine grained invalidations
US10990571B1 (en) Online reordering of database table columns
US9811560B2 (en) Version control based on a dual-range validity model
US7991798B2 (en) In place migration when changing datatype of column
WO2022127866A1 (en) Data processing method and apparatus, and electronic device and storage medium
US11609934B2 (en) Notification framework for document store
US20170329830A1 (en) Read-optimized database changes
CN114691307A (en) Transaction processing method and computer system
US20240095248A1 (en) Data transfer in a computer-implemented database from a database extension layer
US11514080B1 (en) Cross domain transactions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant