CN113722281A - Service process processing system and method using multi-level data cache - Google Patents

Service process processing system and method using multi-level data cache

Info

Publication number
CN113722281A
CN113722281A
Authority
CN
China
Prior art keywords
redis
data
database
cache unit
service process
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110972922.9A
Other languages
Chinese (zh)
Inventor
杨晗琦
赵立才
唐成山
陈军
陈睿进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Construction Bank Corp
Original Assignee
China Construction Bank Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Construction Bank Corp filed Critical China Construction Bank Corp
Priority to CN202110972922.9A priority Critical patent/CN113722281A/en
Publication of CN113722281A publication Critical patent/CN113722281A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/172Caching, prefetching or hoarding of files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/18File system types
    • G06F16/182Distributed file systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Hardware Redundancy (AREA)

Abstract

The invention provides a service process processing system and method using a multi-level data cache, relating to the technical fields of cloud computing and distributed storage. The system comprises a local cache unit, a Redis centralized cache unit, a database, and a service process processing module. The local cache unit, the Redis centralized cache unit, and the database store data in a multi-level data cache mode, with access priority from high to low: local cache unit, Redis centralized cache unit, database. The service process processing module preferentially accesses the local cache unit when a service process starts; if the data is not hit in the local cache unit, the Redis centralized cache unit is accessed; if the data is hit, the data is read and transaction processing continues; if the data is not hit in the Redis centralized cache unit, the database is accessed; if the data is hit, the data is read and transaction processing continues.

Description

Service process processing system and method using multi-level data cache
Technical Field
The invention relates to the technical field of cloud computing and distributed storage, in particular to a service process processing system and method utilizing multi-level data caching.
Background
At present, high availability of a distributed cache is essentially achieved through data redundancy: storing multiple copies of data disperses the read/write pressure on the database and avoids cache breakdown or cache avalanche. In the prior art, a client-side scheme is usually adopted to achieve high availability of the distributed cache. In the client-side scheme, multiple cache nodes are configured at the client, and distribution is implemented through cache write and read algorithms, improving cache availability. When writing data, the client disperses the data to be cached across multiple nodes, i.e., performs data sharding; when reading data, fault tolerance is achieved through a master-slave scheme implemented with Memcached, in which the slave node acts as a fallback when the master node goes down, avoiding cache breakdown.
Although the data-sharding write strategy of the client-side scheme can relieve the storage and access pressure on cache nodes, it makes the cache more complicated to use. Data sharding generally uses a consistent hashing algorithm to scatter writes across different storage nodes, and having too many nodes increases the likelihood of problems. In addition, consistent hashing may cause a service process to read dirty data. For example, if cache node A loses its connection to the client due to a failure while data is being updated, the client writes the update to cache node B; after cache node A reconnects, the client may read stale (dirty) data from it. The Memcached used in the client-side scheme is an in-memory database, but it does not support a master-slave mode, requires the cache distribution strategy to be implemented in the client, and does not support data synchronization, so a single-machine failure in production can affect part of the services.
In summary, a technical solution is needed that can overcome the above defects, improve the high availability of the distributed cache, and ensure the stable operation of the service processing system.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a service process processing system and method using a multi-level data cache. The method is built on a three-level storage mode of local cache, Redis centralized cache, and database, and makes full use of the high availability of Redis and the database; a service process can still read parameter data when the Redis cluster or the database becomes unavailable, so that most transactions can proceed stably, unaffected by the fault, for a period of time, maintaining system stability and buying time for fault repair.
In a first aspect of the embodiments of the present invention, a service process processing system using a multi-level data cache is provided, where the system includes: a local cache unit, a Redis centralized cache unit, a database, and a service process processing module; wherein
the local cache unit, the Redis centralized cache unit, and the database store data in a multi-level data cache mode, with access priority from high to low: local cache unit, Redis centralized cache unit, database;
the service process processing module is used for preferentially accessing the local cache unit when a service process starts;
if the data is not hit in the local cache unit, accessing the Redis centralized cache unit; if the data is hit, reading the data and continuing transaction processing;
if the data is not hit in the Redis centralized cache unit, accessing the database; if the data is hit, reading the data and continuing transaction processing.
Further, the local cache unit is a first-level cache and is used for caching data with access frequency greater than a set value.
Further, the Redis centralized cache unit is a secondary cache, and is used for caching a certain amount of data in the database.
Further, the system further comprises:
a data synchronization module, configured to periodically refresh the multi-level data caches and, when data changes, to keep the data in the local cache, the Redis centralized cache unit, and the database consistent using a parameter synchronization technique.
Further, the Redis centralized cache unit adopts a cluster mode that uses hash slots; when storage nodes in the Redis centralized cache unit are removed or their hash-slot assignments change, the Redis centralized cache unit remains available.
Further, in the cluster mode, the Redis centralized cache unit includes a plurality of Redis master nodes, each Redis master node is provided with at least one corresponding Redis slave node, and when a Redis master node cannot be accessed, access proceeds through its Redis slave node.
Furthermore, the database adopts a master-standby multi-copy mode, with a database master node and at least one database standby node whose data are synchronized in real time through the Binlog; when the database master node is unavailable, the system automatically switches to a database standby node.
Further, the service process processing module is further configured so that:
when the Redis centralized cache unit is unavailable and the service process cannot hit data in the local cache unit, the service process reads data directly from the database.
Further, the service process processing module is further configured so that:
when the database fails, the local cache and the Redis centralized cache unit support service process access and keep transactions being processed.
Further, the service process processing module is further configured so that:
when neither the database nor the Redis centralized cache unit can be accessed, the local cache supports service process access and keeps transactions being processed.
Further, the Redis centralized cache unit includes a preset Redis cluster agent component and a plurality of data partitions, each data partition including a plurality of Redis master nodes and Redis slave nodes; wherein
when a service process initiates an access request to the Redis centralized cache unit, the Redis cluster agent component uses the data key in the access request to determine a target data partition that is not down; the access request is routed to the target data partition, and the data key is hashed to obtain a hash slot number; the access request is then routed to the Redis master node corresponding to that hash slot number within the target data partition, and if that Redis master node cannot be accessed, access proceeds through a Redis slave node.
Further, a master-standby synchronization unit is provided in the database; wherein
the master-standby synchronization unit monitors whether the database master node has generated a new Binlog file; if a new Binlog file has been generated, it packs and compresses the file and sends it to the database standby node, and if not, it continues monitoring; the database standby node unpacks and verifies the received compressed Binlog file to obtain the complete data and stores it in a storage area.
In a second aspect of the embodiments of the present invention, a service process processing method using a multi-level data cache is provided, where the method includes:
setting a multi-level data cache mode in which the access priority, from high to low, is: local cache unit, Redis centralized cache unit, database;
when a service process starts, preferentially accessing the local cache unit;
if the data is not hit in the local cache unit, accessing the Redis centralized cache unit; if the data is hit, reading the data and continuing transaction processing;
if the data is not hit in the Redis centralized cache unit, accessing the database; if the data is hit, reading the data and continuing transaction processing.
Further, the method further comprises:
periodically refreshing the multi-level data caches and, when data changes, keeping the data in the local cache, the Redis centralized cache unit, and the database consistent using a parameter synchronization technique.
Further, the Redis centralized cache unit adopts a cluster mode that uses hash slots; when storage nodes in the Redis centralized cache unit are removed or their hash-slot assignments change, the Redis centralized cache unit remains available.
Further, in the cluster mode, the Redis centralized cache unit includes a plurality of Redis master nodes, each Redis master node is provided with at least one corresponding Redis slave node, and when a Redis master node cannot be accessed, access proceeds through its Redis slave node.
Furthermore, the database adopts a master-standby multi-copy mode, with a database master node and at least one database standby node whose data are synchronized in real time through the Binlog; when the database master node is unavailable, the system automatically switches to a database standby node.
Further, the method further comprises:
reading data directly from the database when the Redis centralized cache unit is unavailable and the service process cannot hit data in the local cache unit.
Further, the method further comprises:
when the database fails, supporting service process access through the local cache and the Redis centralized cache unit to keep transactions being processed.
Further, the method further comprises:
when neither the database nor the Redis centralized cache unit can be accessed, supporting service process access through the local cache to keep transactions being processed.
Further, the Redis centralized cache unit includes a preset Redis cluster agent component and a plurality of data partitions, each data partition including a plurality of Redis master nodes and Redis slave nodes; wherein
when a service process initiates an access request to the Redis centralized cache unit, the Redis cluster agent component uses the data key in the access request to determine a target data partition that is not down; the access request is routed to the target data partition, and the data key is hashed to obtain a hash slot number; the access request is then routed to the Redis master node corresponding to that hash slot number within the target data partition, and if that Redis master node cannot be accessed, access proceeds through a Redis slave node.
Further, a master-standby synchronization unit is provided in the database; wherein
the master-standby synchronization unit monitors whether the database master node has generated a new Binlog file; if a new Binlog file has been generated, it packs and compresses the file and sends it to the database standby node, and if not, it continues monitoring; the database standby node unpacks and verifies the received compressed Binlog file to obtain the complete data and stores it in a storage area.
In a third aspect of the embodiments of the present invention, a computer device is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, it implements the service process processing method using a multi-level data cache.
In a fourth aspect of the embodiments of the present invention, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, implements the service process processing method using a multi-level data cache.
The service process processing system and method using a multi-level data cache provided by the invention keep the service process able to read parameter data by setting up a multi-level storage scheme of local cache, Redis centralized cache, and database, reducing the impact on services when a fault occurs in the production environment and maintaining stable system operation. The local cache serves as the first-level cache, so the service process can read most of the parameters it needs locally, avoiding frequent reads of the second-level cache or the database; when the Redis centralized cache or the database is out of service, most transactions can proceed stably, unaffected by the fault, for a period of time, so transactions continue rather than failing outright, achieving high availability. The Redis cache serves as the second-level cache: if the service process cannot hit data in the local cache unit, it accesses the Redis centralized cache unit, and when that is unavailable, the service process can read the database directly. The Redis cluster and the database are themselves highly available: the Redis cluster mode of the centralized cache uses a hash-slot algorithm and a master-slave replication model to ensure that storage-node changes and downtime do not affect the external service of the whole cluster, while the master-standby multi-copy mode of the database switches to a standby node when the master node is unavailable, transparently to the application, so the application reads and writes data unaffected. The whole scheme maintains stable system operation and buys time for fault repair.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a block diagram of a system architecture for processing a service process using multiple levels of data caching, according to an embodiment of the present invention.
FIG. 2 is a block diagram of a system architecture for processing a service process using multiple levels of data caching, according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of an architecture of a Redis centralized cache according to an embodiment of the present invention.
FIG. 4 is a block diagram of a database according to an embodiment of the present invention.
FIG. 5 is a flowchart illustrating a method for processing a service process using multiple levels of data caching according to an embodiment of the present invention.
FIG. 6 is a flowchart illustrating a method for processing a service process using multiple levels of data caching according to an embodiment of the present invention.
FIG. 7 is a flow chart illustrating a method for processing a service process using multiple levels of data caching according to another embodiment of the present invention.
Fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to the embodiment of the invention, a service process processing system and method utilizing multi-level data cache are provided, and the system and method relate to the technical field of cloud computing and distributed storage.
In the banking scenario, there is a category of parameters that changes infrequently but is read by every application component at high frequency, such as currency type, exchange rate, and organization address. To enable such parameters to be read quickly and correctly, and to avoid the negative impact on transactions of unreadable or erroneous data, the invention provides a service process processing scheme using a multi-level data cache. To meet the requirement of highly available data reading, a three-level storage mode of local cache, Redis centralized cache, and database is established, making full use of the high availability of Redis and the database; the service process can still read parameter data when the Redis cluster or the database is unavailable, so most transactions can proceed stably, unaffected by the fault, for a period of time, maintaining stable system operation and buying time for fault repair.
In the embodiments of the present invention, terms to be described include:
Memcached: an in-memory database with high-performance reads and writes, a single data type, client-side distributed clustering, and consistent hashing; its multi-core, multi-threaded design gives high read/write performance. Its disadvantages: no persistence, possible cache penetration on node failure, distribution must be implemented in the client, cross-datacenter data synchronization is difficult, and architectural scaling is complex.
Redis: Remote Dictionary Server, an open-source, log-structured key-value database written in ANSI C. It supports networking, can run in memory or with persistence, and provides APIs in many languages. It supports more stored value types than Memcached, including string, list, set, zset, and hash. Its advantages: high-performance reads and writes, multiple data types, data persistence, a highly available architecture, user-defined virtual memory, distributed sharded clustering, and extremely high single-threaded read/write performance.
High availability: system and application availability is improved by minimizing downtime due to routine maintenance operations and sudden system crashes.
CAP principle: in a distributed system, at most two of the three properties of consistency, availability, and partition tolerance can be satisfied simultaneously; all three cannot be achieved at once.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
FIG. 1 is a block diagram of a system architecture for processing a service process using multiple levels of data caching, according to an embodiment of the present invention. As shown in fig. 1, the system includes: a local cache unit 110, a Redis centralized cache unit 120, a database 130 and a service process processing module 140; wherein
the local cache unit 110, the Redis centralized cache unit 120, and the database 130 store data in a multi-level data cache mode, where access priorities are, from high to low: the local cache unit 110, the Redis centralized cache unit 120, and the database 130;
the service process processing module 140 is configured to preferentially access the local cache unit 110 when a service process is started;
if the data is not hit in the local cache unit 110, the Redis centralized cache unit 120 is accessed; if the data is hit, the data is read and transaction processing continues;
if the data is not hit in the Redis centralized cache unit 120, the database 130 is accessed; if the data is hit, the data is read and transaction processing continues.
The multi-level data caching architecture provided by the invention embodies a high-availability design concept. In terms of architecture design, the three-level storage of local cache, Redis centralized cache, and database lets the levels support and complement each other, so a service process can still complete transactions when one or two storage levels are unavailable, greatly improving the robustness of the system; locally, the Redis cluster model and the database each have their own high-availability solutions that prevent them from becoming unavailable in the first place.
For a clearer explanation of the above-mentioned service process processing system using multi-level data caching, a detailed description will be given below with reference to each part.
In this embodiment, the local cache unit is a first-level cache and is configured to cache data with an access frequency greater than a set value; the local cache unit ensures that the service process can read most of the parameters it needs locally, avoiding frequent reads of the second-level cache (Redis centralized cache) or the database.
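One plausible way to keep only high-frequency data in the first-level cache is to count accesses and promote a key once its count exceeds the set value. This sketch is purely illustrative; the class name, threshold, and loader callback are assumptions, not details from the patent.

```python
from collections import Counter

class HotKeyCache:
    """First-level cache that stores a key only after its access
    frequency exceeds a configured threshold (the "set value")."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.counts = Counter()   # access frequency per key
        self.data = {}            # locally cached hot keys

    def get(self, key, loader):
        """Return the value for key; `loader` stands in for the slower
        second-level cache or database lookup."""
        self.counts[key] += 1
        if key in self.data:
            return self.data[key]          # local hit
        value = loader(key)                # fall back to a slower tier
        if self.counts[key] > self.threshold:
            self.data[key] = value         # promote: key is now "hot"
        return value
```

With `threshold=2`, the fourth read of the same key is served from the local cache; only the first three reach the loader.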
In this embodiment, the Redis centralized cache unit is a secondary cache, and is configured to cache a certain amount of data in the database.
The service process may read the database directly when the Redis centralized cache unit is unavailable; transaction performance and concurrency capacity are reduced, but the system remains available.
When the database is in the extreme state of being neither readable nor writable, the local cache and the Redis centralized cache can still satisfy most parameter-read requests, although dirty data may be read; by the CAP theorem, at most availability and partition tolerance can be achieved together, so this trade-off is unavoidable.
In the even more extreme case where both the Redis cluster and the database are unavailable, the local cache can sustain data-read requests for a period of time, preventing the system from going down immediately and providing time for troubleshooting and repair.
Based on the above, the service process processing module 140 is further configured so that:
when the Redis centralized cache unit is unavailable and the service process cannot hit data in the local cache unit, the service process reads data directly from the database;
when the database fails, the local cache and the Redis centralized cache unit support service process access and keep transactions being processed;
when neither the database nor the Redis centralized cache unit can be accessed, the local cache supports service process access and keeps transactions being processed.
In this embodiment, the Redis centralized cache unit adopts a cluster mode that uses hash slots; when storage nodes in the Redis centralized cache unit are removed or their hash-slot assignments change, the Redis centralized cache unit remains available.
In the cluster mode, the Redis centralized cache unit includes a plurality of Redis master nodes, each Redis master node is provided with at least one corresponding Redis slave node, and when a Redis master node cannot be accessed, access proceeds through its Redis slave node.
Specifically, the Redis centralized cache unit includes a preset Redis cluster agent component and a plurality of data partitions, each data partition including a plurality of Redis master nodes and Redis slave nodes; wherein
when a service process initiates an access request to the Redis centralized cache unit, the Redis cluster agent component uses the data key in the access request to determine a target data partition that is not down; the access request is routed to the target data partition, and the data key is hashed to obtain a hash slot number; the access request is then routed to the Redis master node corresponding to that hash slot number within the target data partition, and if that Redis master node cannot be accessed, access proceeds through a Redis slave node.
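The hash-slot routing described above matches the published Redis Cluster scheme, where a key's slot is CRC16(key) mod 16384 and each master node owns a range of slots. The sketch below implements that computation (CRC16-CCITT/XModem, the variant Redis Cluster specifies); the three-node slot table is invented for illustration.

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XModem), the checksum Redis Cluster uses for keys."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def slot_for_key(key: str) -> int:
    """A key's hash slot: CRC16(key) mod 16384."""
    return crc16_xmodem(key.encode()) % 16384

def route(key, slot_ranges):
    """slot_ranges: ((lo, hi), master) pairs covering slots 0..16383.
    Returns the master node that owns the key's slot."""
    slot = slot_for_key(key)
    for (lo, hi), master in slot_ranges:
        if lo <= slot <= hi:
            return master
    raise LookupError(f"no node owns slot {slot}")
```

For example, the key "123456789" hashes to slot 12739, so with an even three-way split of the 16384 slots it is routed to the third master.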
In this embodiment, the database adopts a master-standby multi-copy mode, with a database master node and at least one database standby node whose data are synchronized in real time through the Binlog; when the database master node is unavailable, the system automatically switches to a database standby node.
Specifically, the database is provided with a master-standby synchronization unit; wherein
the master-standby synchronization unit monitors whether the database master node has generated a new Binlog file; if a new Binlog file has been generated, it packs and compresses the file and sends it to the database standby node, and if not, it continues monitoring; the database standby node unpacks and verifies the received compressed Binlog file to obtain the complete data and stores it in a storage area.
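The monitor/pack/ship/verify cycle of the master-standby synchronization unit might look roughly like the sketch below. All names and mechanics here are assumptions for illustration: a production MySQL deployment streams binlog events over the replication protocol rather than polling a directory and shipping whole files, and the compression and checksum choices are invented.

```python
import gzip
import hashlib
import os

def new_binlogs(binlog_dir, already_sent):
    """Monitor step: list binlog files not yet shipped, oldest first."""
    return sorted(name for name in os.listdir(binlog_dir)
                  if name.startswith("binlog.") and name not in already_sent)

def package(path):
    """Master side: compress a new binlog file and attach a checksum
    so the standby can verify the transfer."""
    with open(path, "rb") as f:
        raw = f.read()
    return gzip.compress(raw), hashlib.sha256(raw).hexdigest()

def unpack_and_verify(blob, checksum):
    """Standby side: decompress, verify integrity, and return the
    complete data ready to be written to the storage area."""
    raw = gzip.decompress(blob)
    if hashlib.sha256(raw).hexdigest() != checksum:
        raise ValueError("binlog transfer corrupted")
    return raw
```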
The service process processing system using a multi-level data cache allows a service process, when started, to read parameter data directly from the local cache, which stores the hottest parameter data; if the local cache itself cannot be read or written, the service process normally terminates. To improve the efficiency of reading cached data, the invention places a second-level cache, namely the Redis centralized cache, between the local cache and the database; parameter reads check the local cache first and then the Redis centralized cache, avoiding the cache penetration that would occur if the database were read directly.
Referring to fig. 2, a schematic diagram of a service process processing system architecture utilizing multiple levels of data caching according to an embodiment of the present invention is shown. As shown in fig. 2, the system further includes:
a data synchronization module, configured to periodically refresh the multi-level data caches and, when data changes, to keep the data in the local cache, the Redis centralized cache unit, and the database consistent using a parameter synchronization technique.
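As a hedged sketch of the data synchronization module's behavior, a parameter update can be written through every tier so the local cache, Redis centralized cache, and database never diverge. The class name and the database-first write order are assumptions, not details from the patent.

```python
class ParamSynchronizer:
    """Write-through synchronizer for the three storage tiers."""

    def __init__(self, local, redis_cache, database):
        # Each tier is modeled as a dict-like object for illustration.
        self.tiers = {"local": local, "redis": redis_cache, "db": database}

    def update(self, key, value):
        """Propagate a parameter change to every tier. Writing the
        database first means a crash mid-update leaves the caches
        stale but refreshable, never ahead of the database."""
        for name in ("db", "redis", "local"):
            self.tiers[name][key] = value

    def consistent(self, key):
        """True when all three tiers hold the same value for key."""
        return len({tier.get(key) for tier in self.tiers.values()}) == 1
```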
The three-level storage scheme adopted by the invention achieves high availability at the architectural level, and the distributed parameter synchronization scheme implemented by this unit keeps the data in the local cache, the Redis cache, and the database consistent, preventing the service process from reading dirty data.
From an architectural perspective, the local cache is the object from which a service process reads data, and the service process cannot proceed if that data is unavailable; the Redis cluster supports a master-slave replication mechanism, enabling read-write separation and disaster-recovery backup, which embodies high availability; the database achieves high availability through a master/standby multi-copy mode, so that when the database master node fails it can be switched to a standby node in time, preventing a complete database outage. High availability is thus realized throughout, from individual modules to the overall architecture.
It should be noted that although several modules of a service process processing system utilizing multi-level data caching are mentioned in the above detailed description, such partitioning is merely exemplary and not mandatory. Indeed, according to embodiments of the invention, the features and functionality of two or more of the modules described above may be embodied in one module. Conversely, the features and functions of one module described above may be further divided so as to be embodied by a plurality of modules.
Compared with the existing client-side scheme, the multi-level cache solution provided by the invention emphasizes on improving the high availability of the distributed cache at the server side, fully utilizes the high availability of the Redis cluster and the database, realizes mutual support of different storage levels on the availability, and improves the availability of the system cache as a whole.
The cache of the present invention comprises three levels: a local cache, a Redis centralized cache, and a database. A parameter read by a service process accesses the local cache first; on a miss, the Redis cache is read next, and only on a further miss is the database accessed.
From the overall architecture: when the database fails, the local cache and the Redis cache can still support service process access and ensure that transactions continue;
when the Redis cache is unavailable, the service process can directly access the database to read data;
even if both the Redis cache and the database are inaccessible, the local cache can satisfy the parameter-access needs of most transactions, because it stores the hottest parameter data.
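The three-level read path with graceful degradation described above can be sketched as follows. This is a minimal model under assumed names: `LocalCache`, `RedisCache`, `Database`, and `read_parameter` are illustrative stand-ins with dict-backed stores, not the patent's implementation.

```python
class Database:
    """Stand-in for the relational database (source of truth)."""
    def __init__(self, data):
        self.data = dict(data)
        self.available = True

    def get(self, key):
        if not self.available:
            raise ConnectionError("database down")
        return self.data.get(key)

class RedisCache:
    """Stand-in for the Redis centralized (second-level) cache."""
    def __init__(self):
        self.data = {}
        self.available = True

    def get(self, key):
        if not self.available:
            raise ConnectionError("redis down")
        return self.data.get(key)

    def put(self, key, value):
        if self.available:       # silently skip back-fill when Redis is down
            self.data[key] = value

class LocalCache:
    """Stand-in for the process-local (first-level) cache."""
    def __init__(self):
        self.data = {}

    def get(self, key):
        return self.data.get(key)

    def put(self, key, value):
        self.data[key] = value

def read_parameter(key, local, redis, db):
    """Local cache first; on a miss fall through to Redis, then the database.
    Each lower level back-fills the levels above it."""
    value = local.get(key)
    if value is not None:
        return value
    try:
        value = redis.get(key)
    except ConnectionError:
        value = None             # Redis unavailable: fall through to the DB
    if value is not None:
        local.put(key, value)
        return value
    value = db.get(key)          # last resort; raises only if the DB is also down
    if value is not None:
        redis.put(key, value)
        local.put(key, value)
    return value
```

A read that misses everywhere reaches the database and back-fills both cache levels, so the next read is served locally; if Redis is down, the read degrades straight to the database, matching the failure behavior described above.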
Fig. 3 is a schematic diagram of an architecture of a Redis centralized cache according to an embodiment of the present invention.
As shown in fig. 3, the high availability of the Redis cache comes from its cluster mode, which uses hash slots instead of a consistent hashing algorithm. This ensures that removing a storage node, or changing the hash slots assigned to it, does not make the cluster unavailable, and it also makes adding or removing storage nodes easier.
In addition, the master-slave replication model of cluster mode keeps the cluster available when some nodes fail or most nodes cannot communicate. For example, suppose the cluster has three master nodes A, B, and C, each with one slave node A1, B1, and C1. If node B fails, the cluster elects B1 as the new master node, so the cluster does not become unavailable because the slots held by B can no longer be found. In summary, the design and implementation of the Redis cluster mode give it high availability.
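The hash-slot mapping and the master/slave election above can be sketched as follows. Redis Cluster's published rule is CRC16(key) mod 16384, with an optional `{hash tag}` so related keys can share a slot; Python's `binascii.crc_hqx` computes the same CRC-16/XMODEM. The `Cluster` class, with its fixed slot ranges and single-slave promotion, is a toy model of the election, not the real protocol.

```python
import binascii

SLOTS = 16384

def key_slot(key: str) -> int:
    """Redis Cluster slot: CRC16 of the key (or its {hash tag}) mod 16384."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end > start + 1:   # non-empty tag: hash only the tag
            key = key[start + 1:end]
    return binascii.crc_hqx(key.encode(), 0) % SLOTS

class Cluster:
    """Toy model: three masters each own a contiguous slot range; every
    master has one slave that is promoted when the master fails."""
    def __init__(self):
        self.ranges = {"A": range(0, 5461),
                       "B": range(5461, 10923),
                       "C": range(10923, SLOTS)}
        self.slave_of = {"A": "A1", "B": "B1", "C": "C1"}
        self.failed = set()

    def node_for(self, key):
        slot = key_slot(key)
        for master, slots in self.ranges.items():
            if slot in slots:
                # slave elected as new master for a failed node's slots
                return self.slave_of[master] if master in self.failed else master
        raise RuntimeError("unreachable: every slot is owned")

    def fail(self, master):
        self.failed.add(master)
```

After `fail("B")`, keys whose slots fall in B's range are served by B1 and the cluster as a whole stays available, mirroring the example in the text.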
Fig. 4 is a schematic diagram of a database architecture according to an embodiment of the present invention.
As shown in fig. 4, the database in the multi-level cache scheme also has high availability. The database may adopt a master/standby multi-copy mode, with a master node and at least one standby node (backup 1, backup 2); when the master node is unavailable, the system automatically switches to a standby node. This ensures that the database can still be read when a service process misses in both the local cache and the Redis centralized cache, and prevents the service process from terminating because data cannot be read.
The data of the master node and the slave nodes are synchronized in real time through Binlog, as follows.
during the Binlog real-time synchronization, the master node must enable the binary log to record any events that modify the database data.
The slave node starts a thread (the I/O thread) that acts as a MySQL client and requests events from the master node's binary log (Binlog) over the MySQL protocol.
The master node starts a thread (the dump thread) that checks the events in its binary log against the position requested by the slave; if no position is specified in the request, the master sends events one by one, starting from the first event in the first log file.
The slave node receives the events sent by the master node and appends them to a relay log file, recording which position within which particular binary log file of the master node the next request should start from (the master node's multiple binary files will be described in detail later).
The slave node starts another thread (the SQL thread), which reads the events out of the relay log and re-executes them locally.
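The four replication steps above can be modeled in a few lines. `Master`, `Slave`, and the `("set", key, value)` event tuples are illustrative stand-ins for the real Binlog machinery: `io_thread` plays the slave's I/O thread pulling from a remembered position, `dump` plays the master's dump thread, and `sql_thread` replays the relay log locally.

```python
class Master:
    def __init__(self):
        self.data = {}
        self.binlog = []                 # ordered modification events

    def execute(self, key, value):
        self.data[key] = value
        self.binlog.append(("set", key, value))   # binary logging enabled

    def dump(self, from_pos):
        # dump thread: send every event at or after the requested position
        return self.binlog[from_pos:]

class Slave:
    def __init__(self):
        self.data = {}
        self.relay_log = []
        self.pos = 0                     # next master binlog position to request

    def io_thread(self, master):
        events = master.dump(self.pos)
        self.relay_log.extend(events)    # append to the relay log
        self.pos += len(events)          # remember how far we have read

    def sql_thread(self):
        while self.relay_log:
            op, key, value = self.relay_log.pop(0)
            if op == "set":
                self.data[key] = value   # re-execute the event locally
```

Running `io_thread` then `sql_thread` after each batch of master writes converges the slave's data set to the master's, which is the real-time synchronization property the text relies on.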
In summary, the Redis cluster mode's hash slot algorithm and master-slave replication model ensure that storage node changes and outages do not affect the services the cluster provides externally.
The master/standby multi-copy mode of the database ensures that when the master node is unavailable the system switches to a standby node; this is transparent to the application, which can continue to read and write the database unaffected.
Both Redis clusters and databases have high availability.
Having described the system of an exemplary embodiment of the present invention, a service process handling method using a multi-level data cache of an exemplary embodiment of the present invention will be described with reference to fig. 5.
The implementation of the service process processing method using the multi-level data cache can refer to the implementation of the system, and repeated details are not described.
Based on the same inventive concept, the present invention further provides a service process processing method using multi-level data caching, as shown in fig. 5, the method includes:
S501, setting a multi-level data caching mode in which the access priority, from high to low, is: the local cache unit, the Redis centralized cache unit, and the database;
S502, when a service process starts, the local cache unit is accessed first;
S503, if the data cannot be hit in the local cache unit, the Redis centralized cache unit is accessed; if the data is hit, the data is read and transaction processing continues;
S504, if the data cannot be hit in the Redis centralized cache unit, the database is accessed; if the data is hit, the data is read and transaction processing continues.
In this embodiment, referring to fig. 6, a flowchart of a service process processing method using multiple levels of data caches according to an embodiment of the present invention is shown.
As shown in fig. 6, the method further includes:
S601, periodically synchronizing the multi-level caches; when data changes, a parameter synchronization technique keeps the data in the local cache, the Redis centralized cache unit, and the database consistent.
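One way to realize the parameter synchronization of S601 is a write-through update applied in a fixed order. The function name and dict-backed stores below are assumptions, and the patent does not prescribe this exact propagation order; the sketch only shows one design that keeps all three levels consistent after a change.

```python
def sync_parameter(key, value, db, redis_cache, local_caches):
    """Propagate a parameter change so the three storage levels agree."""
    db[key] = value                  # 1. persist in the database (source of truth)
    redis_cache[key] = value         # 2. refresh the centralized Redis copy
    for local in local_caches:       # 3. broadcast to each process-local cache,
        if key in local:             #    refreshing only entries the process holds
            local[key] = value
```

Updating the database first means a crash mid-propagation leaves the caches stale rather than the database wrong, and stale cache entries are corrected on the next periodic synchronization pass.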
In this embodiment, the Redis centralized cache unit adopts a cluster mode that uses hash slots; the Redis centralized cache unit remains available when storage nodes in it are removed or their hash slot assignments are changed.
In this embodiment, in the cluster mode, the Redis centralized cache unit includes a plurality of Redis master nodes, each Redis master node is correspondingly provided with at least one Redis slave node, and when a Redis master node cannot be accessed, access is performed through a Redis slave node.
In this embodiment, the database adopts a master-slave multi-copy mode, and is provided with a database master node and at least one database slave node, and data are synchronized in real time through Binlog; and when the database main node is unavailable, automatically switching to the database standby node.
Specifically, the Redis centralized cache unit comprises a preset Redis cluster agent component and a plurality of data partitions, each data partition comprising a plurality of Redis master nodes and Redis slave nodes; wherein:
when a service process initiates an access request to the Redis centralized cache unit, the Redis cluster agent component determines, from the data key value in the access request, a target data partition that is not down; the access request is routed to the target data partition, and the data key value is processed to obtain a hash slot number; the data access request is then routed to the Redis master node corresponding to that hash slot number in the target data partition, and if the Redis master node cannot be accessed, access proceeds through a Redis slave node.
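The routing sequence above can be sketched as follows. The partition-selection rule used here (slot modulo the number of live partitions) is a simple stand-in for whatever rule the real Redis cluster agent component applies, and the class and attribute names are illustrative assumptions.

```python
import binascii

SLOTS = 16384

def hash_slot(key: str) -> int:
    """Redis Cluster slot number: CRC16 of the key mod 16384."""
    return binascii.crc_hqx(key.encode(), 0) % SLOTS

class Partition:
    """A data partition with one master and its slaves (roles only)."""
    def __init__(self, name):
        self.name = name
        self.master_up = True
        self.down = False            # whole partition unreachable

class ClusterProxy:
    """Stand-in for the Redis cluster agent component."""
    def __init__(self, partitions):
        self.partitions = partitions

    def route(self, key):
        live = [p for p in self.partitions if not p.down]
        if not live:
            raise ConnectionError("no data partition available")
        slot = hash_slot(key)
        # pick a live target partition deterministically from the key
        partition = live[slot % len(live)]
        # prefer the master for this slot; fall back to a slave
        role = "master" if partition.master_up else "slave"
        return partition.name, slot, role
```

A request thus always lands on a live partition and on the master when possible, degrading to a slave when the master is unreachable, which matches the fallback described in the text.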
In this embodiment, the database is provided with a master/standby synchronization unit; wherein:
the master/standby synchronization unit is used for monitoring whether the database master node has generated a new Binlog file; if a new Binlog file has been generated, it is packed, compressed, and sent to the database standby node, and if not, monitoring continues; the database standby node unpacks and verifies the received compressed Binlog file to recover the complete data and stores it in a storage area.
In this embodiment, referring to fig. 7, a flowchart of a service process processing method using multiple levels of data caches according to another embodiment of the present invention is shown.
As shown in fig. 7, the method further includes:
S701, when the Redis centralized cache unit is unavailable and the service process cannot hit data in the local cache unit, the service process reads the data directly from the database.
S702, when the database fails, the local cache and the Redis centralized cache unit support service process access and keep transactions processing.
S703, when neither the database nor the Redis centralized cache unit can be accessed, the local cache supports service process access and keeps transactions processing.
In summary, during a service process, parameter reads access the local cache first; on a miss, the Redis cache is read next, and on a further miss the database is accessed.
Based on the multi-level data cache architecture, when the database fails, the local cache and the Redis cache can support service process access and ensure that transactions continue; when the Redis cache is unavailable, the service process can access the database directly to read data; and even if both the Redis cache and the database are inaccessible, the local cache can satisfy the parameter-access needs of most transactions, because it stores the hottest parameter data.
It should be noted that although the operations of the method of the present invention have been described in the above embodiments and the accompanying drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the operations shown must be performed, to achieve the desired results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
Based on the aforementioned inventive concept, as shown in fig. 8, the present invention further provides a computer apparatus 800, which includes a memory 810, a processor 820, and a computer program 830 stored in the memory 810 and operable on the processor 820, wherein the processor 820 implements the aforementioned service process processing method using multi-level data cache when executing the computer program 830.
Based on the foregoing inventive concept, the present invention provides a computer-readable storage medium storing a computer program, which when executed by a processor implements the foregoing service process processing method using multi-level data caching.
The service process processing system and method utilizing the multi-level data cache provided by the invention, by setting up the three-level storage scheme of local cache, Redis centralized cache, and database, keep service processes able to read parameter data, reduce the impact on services when a fault occurs in the production environment, and maintain stable operation of the system. The local cache, as the first-level cache, lets a service process read most of the parameters it needs locally, avoiding frequent reads of the second-level cache or the database; when the Redis centralized cache or the database is out of service, most transactions can continue unaffected for a period of time instead of failing outright, which realizes high availability. The Redis cache serves as the second-level cache: if a service process misses in the local cache unit it can access the Redis centralized cache unit, and when that is unavailable the service process can read the database directly. Both the Redis cluster and the database are themselves highly available: the Redis cluster mode of the centralized cache adopts the hash slot algorithm and a master-slave replication model, so storage node changes and outages do not affect the services the cluster provides externally, while the master/standby multi-copy mode of the database switches to a standby node when the master node is unavailable, transparently to the application, which continues to read and write the database unaffected. The scheme as a whole maintains stable system operation and buys time for fault repair.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (24)

1. A system for processing a service process using multiple levels of data caching, the system comprising: a local cache unit, a Redis centralized cache unit, a database, and a service process processing module; wherein:
the local cache unit, the Redis centralized cache unit and the database store data in a multi-level data cache mode, and the access priority is from high to low: the system comprises a local cache unit, a Redis centralized cache unit and a database;
the service process processing module is used for preferentially accessing the local cache unit when the service process is started;
if the data can not be hit in the local cache unit, accessing a Redis centralized cache unit; if the data is hit, reading the data and continuing to perform transaction processing;
if the data can not be hit in the Redis centralized cache unit, accessing a database; if the data is hit, the data is read and the transaction processing is continued.
2. The system of claim 1, wherein the local cache unit is a level one cache for caching data with access frequency greater than a predetermined value.
3. The system according to claim 2, wherein the Redis centralized caching unit is a second level cache configured to cache a certain amount of data in the database.
4. The system of claim 3, further comprising:
and the data synchronization module is used for regularly caching the multi-level data, and when the data change, the parameter synchronization technology is adopted to keep the data in the local cache, the Redis centralized cache unit and the database consistent.
5. The system according to claim 2, wherein the Redis centralized cache unit employs a cluster model that uses hash slots to keep the Redis centralized cache unit available when deleting or changing the number of hash slots of storage nodes in the Redis centralized cache unit.
6. The system according to claim 5, wherein in the cluster mode, the Redis centralized cache unit comprises a plurality of Redis master nodes, each Redis master node is correspondingly provided with at least one Redis slave node, and when a Redis master node cannot be accessed, access is performed through a Redis slave node.
7. The system of claim 1, wherein the database is configured in a master-slave multi-copy mode, and has a database master node and at least one database slave node, and data are synchronized in real time by Binlog; and when the database main node is unavailable, automatically switching to the database standby node.
8. The system of claim 1, wherein the service process processing module is further configured to:
when the Redis centralized cache unit is unavailable and the service process cannot hit data in the local cache unit, the service process directly reads the data of the database.
9. The system of claim 1, wherein the service process processing module is further configured to:
when the database fails, the local cache and the Redis centralized cache unit support service process access and keep the transaction continuously processed.
10. The system of claim 1, wherein the service process processing module is further configured to:
when the database and the Redis centralized cache unit cannot be accessed, the local cache supports service process access and keeps the transaction processing continuously.
11. The system according to claim 6, wherein the Redis centralized cache unit comprises a predetermined Redis cluster agent component and a plurality of data partitions, each data partition comprising a plurality of Redis master nodes and Redis slave nodes; wherein:
when a service process initiates an access request to the Redis centralized cache unit, the Redis cluster agent component determines, from the data key value in the access request, a target data partition that is not down; the access request is routed to the target data partition, and the data key value is processed to obtain a hash slot number; the data access request is then routed to the Redis master node corresponding to that hash slot number in the target data partition, and if the Redis master node cannot be accessed, access proceeds through a Redis slave node.
12. The system according to claim 7, wherein the database is provided with a master/standby synchronization unit; wherein:
the master/standby synchronization unit is used for monitoring whether the database master node has generated a new Binlog file; if a new Binlog file has been generated, it is packed, compressed, and sent to the database standby node, and if not, monitoring continues; the database standby node unpacks and verifies the received compressed Binlog file to recover the complete data and stores it in a storage area.
13. A method for processing a service process by using a multi-level data cache is characterized by comprising the following steps:
setting a multi-level data caching mode, wherein in the multi-level data caching mode, the access priority is from high to low: the system comprises a local cache unit, a Redis centralized cache unit and a database;
when the service process is started, the local cache unit is preferentially accessed;
if the data can not be hit in the local cache unit, accessing a Redis centralized cache unit; if the data is hit, reading the data and continuing to perform transaction processing;
if the data can not be hit in the Redis centralized cache unit, accessing a database; if the data is hit, the data is read and the transaction processing is continued.
14. The method of claim 13, further comprising:
and regularly caching the multi-level data, and when the data are changed, keeping the data in the local cache, the Redis centralized cache unit and the database consistent by adopting a parameter synchronization technology.
15. The method according to claim 13, wherein the Redis centralized cache unit employs a cluster model, the cluster model uses hash slots, and the Redis centralized cache unit is kept available when the number of hash slots of the storage nodes in the Redis centralized cache unit is deleted or changed.
16. The method according to claim 15, wherein in the cluster mode, the Redis centralized cache unit includes a plurality of Redis master nodes, each Redis master node is correspondingly provided with at least one Redis slave node, and when a Redis master node cannot be accessed, access is performed through a Redis slave node.
17. The method according to claim 13, wherein the database is in a master-slave multi-copy mode, and is provided with a database master node and at least one database slave node, and data are synchronized in real time by Binlog; and when the database main node is unavailable, automatically switching to the database standby node.
18. The method of claim 13, further comprising:
when the Redis centralized cache unit is unavailable and the service process cannot hit data in the local cache unit, the service process directly reads the data of the database.
19. The method of claim 13, further comprising:
when the database fails, the local cache and the Redis centralized cache unit support service process access and keep the transaction continuously processed.
20. The method of claim 13, further comprising:
when the database and the Redis centralized cache unit cannot be accessed, the local cache supports service process access and keeps the transaction processing continuously.
21. The method according to claim 16, wherein the Redis centralized cache unit comprises a predetermined Redis cluster agent component and a plurality of data partitions, each data partition comprising a plurality of Redis master nodes and Redis slave nodes; wherein:
when a service process initiates an access request to the Redis centralized cache unit, the Redis cluster agent component determines, from the data key value in the access request, a target data partition that is not down; the access request is routed to the target data partition, and the data key value is processed to obtain a hash slot number; the data access request is then routed to the Redis master node corresponding to that hash slot number in the target data partition, and if the Redis master node cannot be accessed, access proceeds through a Redis slave node.
22. The method according to claim 17, wherein the database is provided with a master/standby synchronization unit; wherein:
the master/standby synchronization unit is used for monitoring whether the database master node has generated a new Binlog file; if a new Binlog file has been generated, it is packed, compressed, and sent to the database standby node, and if not, monitoring continues; the database standby node unpacks and verifies the received compressed Binlog file to recover the complete data and stores it in a storage area.
23. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 13 to 22 when executing the computer program.
24. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any of claims 13 to 22.
CN202110972922.9A 2021-08-24 2021-08-24 Service process processing system and method using multi-level data cache Pending CN113722281A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110972922.9A CN113722281A (en) 2021-08-24 2021-08-24 Service process processing system and method using multi-level data cache


Publications (1)

Publication Number Publication Date
CN113722281A true CN113722281A (en) 2021-11-30

Family

ID=78677542

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110972922.9A Pending CN113722281A (en) 2021-08-24 2021-08-24 Service process processing system and method using multi-level data cache

Country Status (1)

Country Link
CN (1) CN113722281A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114780636A (en) * 2022-04-01 2022-07-22 北京卓视智通科技有限责任公司 Storage system and method based on block chain and data mart
CN115658743A (en) * 2022-12-26 2023-01-31 北京滴普科技有限公司 Method, device and medium for improving local cache hit rate of OLAP analysis database
CN117851456A (en) * 2024-01-05 2024-04-09 迪爱斯信息技术股份有限公司 Method, system and server for sharing data in cluster



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination