CN115470026A - Data caching method, data caching system, data caching disaster tolerance method, data caching disaster tolerance system and data caching system - Google Patents


Info

Publication number
CN115470026A
CN115470026A
Authority
CN
China
Prior art keywords: cache, service data, data, service, caching
Prior art date
Legal status
Pending
Application number
CN202211086091.6A
Other languages
Chinese (zh)
Inventor
郑起
Current Assignee
Advanced New Technologies Co Ltd
Original Assignee
Advanced New Technologies Co Ltd
Application filed by Advanced New Technologies Co Ltd filed Critical Advanced New Technologies Co Ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0706 Error or fault processing where the processing takes place on a specific hardware platform or in a specific software environment
    • G06F11/0709 Error or fault processing in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems
    • G06F11/0715 Error or fault processing in a system implementing multitasking
    • G06F11/073 Error or fault processing in a memory management context, e.g. virtual memory or cache management
    • G06F11/0745 Error or fault processing in an input/output transactions management context
    • G06F11/0793 Remedial or corrective actions

Abstract

The embodiments of this specification provide a data caching method, a cache disaster recovery method, corresponding systems, and a cache system. On one hand, the physical validity period of service data in the cache system is set to be permanently valid, so that fallback service data always exists in the cache system, thereby achieving cache disaster recovery. On the other hand, when service data in the cache system is logically expired, the latest service data is requested from the remote end, ensuring that, as long as remote data can be obtained, the service data acquired by the service system is within its logical validity period.

Description

Data caching method, data caching system, data caching disaster tolerance method, data caching disaster tolerance system and data caching system
This application is a divisional application of application No. CN201810662821.X, filed on June 25, 2018, entitled "Data caching and cache disaster recovery method, system, and cache system".
Technical Field
The present disclosure relates to the field of risk control technologies, and in particular, to a data caching method and system, and a data caching disaster recovery method and system.
Background
In practical applications, the remote server sometimes hits a performance bottleneck or encounters an exception, for example, the service process is abnormally killed, the database goes down, or the number of connections reaches its limit. Disaster recovery schemes therefore need to be designed to ensure normal operation of the service.
Disclosure of Invention
Based on this, the embodiments of the present specification provide a data caching method, a data caching system, a cache disaster recovery method, and a cache system.
According to a first aspect of the embodiments herein, there is provided a data caching method, the method including: searching for service data in a cache system when a service data request sent by a service system is received; if the service data is found, acquiring the logical validity period of the service data, where the logical validity period is the validity period of the service data with respect to a service; and if the logical validity period has expired, caching the service data from a remote end to the cache system and setting the physical validity period of the service data in the cache system to be permanently valid.
Optionally, the method further comprises: and if the service data is not found in the cache system, caching the service data from a remote end to the cache system.
Optionally, the method further comprises: if the service data is not found in the cache system, searching for the service data in the cache system again after a cache data lock is acquired, and if the service data is still not found, caching the service data from a remote end to the cache system.
Optionally, the method further comprises: if acquisition of the cache data lock times out, ending the process.
Optionally, the step of caching the service data from a remote end to the cache system includes: caching first service data of higher importance and second service data of lower importance in different, mutually isolated areas of the cache system.
Optionally, before caching the service data from the remote end to the cache system, the method further includes: if the cache unit storing the second service data is full, evicting from the cache the second service data that has been cached the longest.
Optionally, the method further comprises: and returning the service data in the cache system to the service system.
Optionally, the method further comprises: if the logic validity period is over, judging whether the service data request carries a cache data lock; if yes, returning to the step of caching the service data from the remote end to the cache system.
Optionally, the method further comprises: and if the logic validity period is not over, returning the service data searched in the cache system to the service system.
Optionally, the physical validity period of the service data in the cache system is permanently valid.
According to a second aspect of the embodiments herein, there is provided a cache disaster recovery method, including: if fetching the remote service data times out, searching for the service data in the cache system; and sending the found service data to a service system; where the service data has been cached in the cache system according to the data caching method of any embodiment.
According to a third aspect of embodiments herein, there is provided a data caching apparatus, the apparatus including: the first searching module is used for searching the service data in the cache system when receiving a service data request sent by the service system; the first acquisition module is used for acquiring the logic validity period of the service data if the service data is found; wherein the logical validity period is a validity period of the service data corresponding to a service; the first cache module is used for caching the service data from a remote end to the cache system if the logic validity period is over, and setting the physical validity period of the service data in the cache system as permanent validity.
Optionally, the apparatus further comprises: and the second cache module is used for caching the service data from a remote end to the cache system if the service data is not found in the cache system.
Optionally, the apparatus further comprises: the third searching module is used for searching the service data in the cache system again after the cache data lock is obtained; and the third cache module is used for caching the service data from a far end to the cache system when the third searching module does not search the service data yet.
Optionally, the apparatus further comprises: and the overtime waiting module is used for ending the process if the cache data lock is acquired overtime.
Optionally, the first cache module includes: and the cache unit is used for caching the first service data with high importance degree and the second service data with low importance degree into different areas which are isolated from each other in the cache system respectively.
Optionally, the apparatus further comprises: and the data eviction module is used for evicting the second business data with the longest cache time from the cache if the cache unit for storing the second business data is full.
Optionally, the apparatus further comprises: and the first sending module is used for returning the service data in the cache system to the service system.
Optionally, the apparatus further comprises: the judging module is used for judging whether the service data request carries a cache data lock or not if the logic validity period is over; and if so, returning to execute the function of the first cache module.
Optionally, the apparatus further comprises: and the second sending module is used for returning the service data searched in the cache system to the service system if the logic validity period does not expire.
Optionally, the physical validity period of the service data in the cache system is permanently valid.
According to a fourth aspect of the embodiments herein, there is provided a cache disaster recovery apparatus, including: a second searching module, used for searching for the service data in the cache system if fetching the remote service data times out; and a sending module, used for sending the found service data to the service system; where the service data has been cached in the cache system according to the data caching method of any embodiment.
According to a fifth aspect of embodiments herein, there is provided a computer readable storage medium having a computer program stored thereon, wherein the program when executed by a processor implements the method of any of the embodiments.
According to a sixth aspect of the embodiments of the present specification, there is provided a cache system comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of the embodiments when executing the program.
Optionally, the cache system includes a general cache unit and a white list cache unit; the white list cache unit is used for storing first service data of higher importance, and the general cache unit is used for storing second service data of lower importance; when the general cache unit is full, the earliest-cached second service data is evicted from the cache.
By applying the scheme of the embodiments of this specification, on one hand, the physical validity period of service data in the cache system is set to be permanently valid, so that fallback service data always exists in the cache system, thereby achieving cache disaster recovery; on the other hand, when service data in the cache system is logically expired, the latest service data is requested from the remote end, ensuring that, as long as remote data can be obtained, the service data acquired by the service system is within its logical validity period.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of embodiments of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with this specification and, together with the description, serve to explain the principles of the embodiments of the specification.
FIG. 1 is a schematic diagram of the interaction between the far end and the near end in one embodiment of the present description.
Fig. 2 is a flowchart of a data caching method according to an embodiment of the present disclosure.
Fig. 3 is a program flow diagram of a data caching and disaster recovery method according to an embodiment of the present disclosure.
Fig. 4 is a flowchart of a cache disaster recovery method according to an embodiment of the present disclosure.
Fig. 5 is a block diagram of a data caching apparatus according to an embodiment of the present specification.
Fig. 6 is a block diagram of a cache disaster recovery apparatus according to an embodiment of the present specification.
Fig. 7 is a schematic structural diagram of a cache system according to an embodiment of the present specification.
FIG. 8 is a block diagram of a computer device for implementing methods of embodiments of the present description, according to an embodiment of the present description.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the examples of this specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the embodiments of the specification, as detailed in the claims that follow.
The terminology used in the embodiments of the present specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the present specification. As used in the specification examples and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present specification to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information, without departing from the scope of the embodiments herein. The word "if," as used herein, may be interpreted as "when" or "upon" or "in response to a determination," depending on the context.
Fig. 1 is a schematic diagram illustrating the interaction between the far end and the near end in one embodiment of the present disclosure. The near end 102 may communicate with the far end 104 to obtain desired data from the far end 104. The near end 102 may include a service system 102a for executing a service and a cache system 102b. The service system 102a may run the corresponding service according to remote data acquired from the far end 104; to improve the efficiency of the entire system and reduce data interaction with the far end 104, the near end 102 may also store the data in the cache system 102b after acquiring the remote data. The cache system 102b may be same-process memory, cross-process memory, memory of a virtual machine different from the physical machine, or a storage medium such as a hard disk. In some practical scenarios, it is often necessary to ensure high availability of services. For example, in a payment gateway scenario, the gateway has a strong dependency on the configuration data in the cache system 102b; if the cached data in the cache system 102b fails, the back-end service cannot acquire the configuration data, which causes the entire service link to fail.
Based on this, the embodiments of the present specification provide a data caching method. As shown in fig. 2, the method may include:
step 202: when a service data request sent by a service system is received, searching service data in a cache system;
step 204: if the service data is found, acquiring the logic validity period of the service data; wherein the logical validity period is a validity period of the service data corresponding to a service;
step 206: if the logic validity period is over, the service data is cached to the cache system from a remote end, and the physical validity period of the service data in the cache system is set to be permanently valid.
In step 204, the logical validity period of the service data refers to the validity period, relative to a given service, of the service data required to execute that service. After the logical validity period has passed, the service data is considered expired with respect to the service; however, to achieve cache disaster recovery, the expired service data is still kept in the cache system rather than deleted, serving as fallback data. The logical validity period may therefore be understood as the service-level validity period of the data. In contrast, the physical validity period is the period during which the service data is stored in the cache system; service data that exceeds its physical validity period is evicted from the cache.
The logical validity period of the service data may be set in advance, for example to 1 hour. A time mark carrying the logical-validity information can be set in the cache system: timing starts when the service data is stored, and once the elapsed time reaches the logical validity period, the logical validity period of the service data is considered to have passed.
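The two validity periods can be sketched as a cache entry that tracks its own logical expiry while never being physically deleted on expiry. This is an illustrative Python sketch only; the class and field names are hypothetical, not from the patent:

```python
import time

# Example logical validity period, per the 1-hour example above.
LOGICAL_TTL_SECONDS = 3600


class CacheEntry:
    """A cached value whose logical expiry is tracked, but whose physical
    lifetime is effectively permanent (it is never deleted on expiry)."""

    def __init__(self, value, logical_ttl=LOGICAL_TTL_SECONDS):
        self.value = value
        self.cached_at = time.time()   # timing starts when the data is stored
        self.logical_ttl = logical_ttl

    def logically_expired(self):
        # Past this point the data is stale for the service, but it is kept
        # in the cache as disaster-recovery fallback data.
        return time.time() - self.cached_at > self.logical_ttl
```

A logically expired entry thus remains readable as fallback data; only an explicit eviction (e.g. when the cache is full) removes it physically.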
In step 206, if the logical validity period of the service data has expired, an attempt may be made to acquire the latest service data from the remote end. If the latest service data is acquired, it is cached in the cache system, overwriting the logically expired data, to ensure the timeliness of the service data. The remote end can be an upstream application system, a database, and so on.
On one hand, in the embodiments of the present description, the physical validity period of the service data in the cache system is set to be permanently valid, so that fallback service data always exists in the cache system, achieving cache disaster recovery; on the other hand, when service data in the cache system is logically expired, the latest service data is requested from the remote end, ensuring that, as long as remote data can be obtained, the service data acquired by the service system is within its logical validity period.
In an embodiment, if the service data is not found in the cache system, the service data may be cached directly from the remote end into the cache system. If the service data is not found, either the cache system has never cached it, or it was cached but has since been evicted, for example because the cache system ran out of space. In either case, the service data can be requested from the remote end and cached in the cache system to serve as fallback data.
In one embodiment, if the service data is not found in the cache system, the service data is searched for again in the cache system after the cache data lock is acquired; if it is still not found, the service data is cached from the remote end into the cache system.
In this embodiment, each piece of cached data has a cache data lock. The service data request may carry a "key" value, which can be regarded as the key of the cache system, i.e., the unique identifier of the service data in the cache. When a service data request is received, it is determined whether the request has acquired the cache data lock; remote data is fetched only for the request that holds the lock, which reduces the data-interaction pressure on the remote end in high-concurrency scenarios. The reason for searching the cache system again is that, by the time a request obtains the cache data lock, another thread may already have acquired the service data from the remote end and stored it in the cache system; this second check avoids unnecessary interaction with the remote end.
Further, in this embodiment, a timeout mechanism for acquiring the cache data lock may be provided: if acquisition of the cache data lock times out, the process ends directly. The timeout mechanism prevents unnecessary waiting and wasted resources, and prevents the request queue from waiting indefinitely in extreme cases, which would cause overload and a system avalanche.
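The lock-and-recheck pattern just described can be sketched as follows, assuming one `threading.Lock` per key, a bounded acquisition wait, and an injected `load_remote` callable (all names are hypothetical illustrations, not the patent's implementation):

```python
import threading

# Registry of per-key locks; guarded so concurrent requests for the
# same key share a single lock.
_key_locks = {}
_registry_lock = threading.Lock()


def _lock_for(key):
    with _registry_lock:
        return _key_locks.setdefault(key, threading.Lock())


def get_or_load(cache, key, load_remote, lock_timeout=2.0):
    """Return cached data, loading from the remote end at most once per key."""
    if key in cache:
        return cache[key]
    lock = _lock_for(key)
    # The timeout prevents requests from queuing forever and overloading
    # the system when the remote end is slow (avalanche protection).
    if not lock.acquire(timeout=lock_timeout):
        return None  # acquisition timed out: end the process
    try:
        if key in cache:       # second lookup: another thread may have
            return cache[key]  # already cached the data meanwhile
        cache[key] = load_remote(key)
        return cache[key]
    finally:
        lock.release()
```

The second `key in cache` check inside the lock is the double-check the text describes: it turns N concurrent misses for one key into a single remote fetch.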
In an embodiment, when the service data is cached from the remote end into the cache system, first service data of higher importance and second service data of lower importance may be cached in different, mutually isolated areas of the cache system. The first service data may be service data of services requiring high availability, such as payment service data. Tags may be set for the service data in advance, and service data with different tags cached in different cache regions.
Further, the cache region holding the first service data may be set never to evict data, while the cache region holding the second service data evicts the service data that has been cached the longest. By isolating key data from general data in the near-end cache, the key data can be cached in full while the general data is evicted according to a cache policy. Even if the cache system cannot hold the full data set, this approach guarantees disaster tolerance for the key data (such as the service data of key merchants) and, to a certain extent, for the long-tail data.
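The isolation and eviction policy can be illustrated with the sketch below; the class name, capacity, and the insertion-order notion of "longest cached" are assumptions for illustration:

```python
from collections import OrderedDict


class IsolatedCache:
    """Two mutually isolated regions: a whitelist region for important
    (first) service data that is never evicted, and a general region for
    ordinary (second) service data that evicts the longest-cached entry
    when full."""

    def __init__(self, general_capacity=2):
        self.whitelist = {}
        self.general = OrderedDict()
        self.general_capacity = general_capacity

    def put(self, key, value, important=False):
        if important:
            self.whitelist[key] = value  # cached in full, never evicted
            return
        if key not in self.general and len(self.general) >= self.general_capacity:
            self.general.popitem(last=False)  # evict the longest-cached entry
        self.general[key] = value

    def get(self, key):
        if key in self.whitelist:
            return self.whitelist[key]
        return self.general.get(key)
```

Under this design, filling the general region never displaces whitelist entries, so key-merchant data survives even when long-tail data is churning.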
In one embodiment, if the logical validity period of the service data has expired, it may be determined whether the service data request carries a cache data lock; if so, the process returns to the step of caching the service data from the remote end to the cache system. In this embodiment, remote data for logically expired service data is likewise fetched through the cache data lock. Unlike the earlier embodiment, this embodiment need not set a timeout for the lock: because the service data is still stored in the cache system, the logically expired data can still be returned to the service, ensuring high availability. Acquiring the cache data lock without a timeout reduces the response time of the service data request and improves system performance in the case of a physical-storage hit.
If the logical validity period has not expired, the service data found in the cache system can be returned directly to the service system.
The physical validity period of the service data in the cache system is permanent. It should be noted that "permanently valid" here means that the caching time exceeds a certain time threshold; it may be a very long period (e.g., 1 year), as long as cache disaster recovery is guaranteed.
The near-end cache logic of this embodiment may be encapsulated in a cache component. When the service needs data, it obtains the data directly through the cache component; whether the data comes from the cache or from the remote server side, the service does not need to be aware of any caching or disaster recovery logic. The program flow diagram of fig. 3 shows how the near-end cache component caches data and performs disaster recovery in case of an exception. The method specifically comprises the following steps:
step 302: and the service system acquires the cache data through the cache component.
Step 304: the cache component searches for the service data in the cache system. On a cache miss (i.e., the service data is not found in the cache), step 306 is executed; on a cache hit (i.e., the service data is found in the cache), step 320 is executed.
Step 306: the service data request carries the key value (key) of the service data. Each key value corresponds to a unique cache data lock, and at any point in time only one request per key can hold the cache data lock and fetch the data from the remote end. After the cache miss in step 304, the cache data lock corresponding to the key value is acquired, and then step 308 is performed.
Step 308: if the cache data lock is acquired, go to step 310; otherwise, step 318 is performed.
Step 310: and if the cache data lock is acquired, judging whether the cache is hit again, if not, executing step 312, and if so, returning the data to the service, and ending the process.
Step 312: the remote data is acquired and then step 314 is performed.
Step 314: the acquired remote data is written into the near-end cache system. Data written into the near-end cache never expires in physical storage; physical deletion is triggered only when the cache is full and data eviction is needed. A logical expiry time for the service is also set. Step 316 is then performed.
Step 316: the service data in the cache system is returned to the service system. In this step, if the remote end has problems such as a network abnormality or a database crash, this mechanism uses the data stored in the cache system to ensure that the service can always acquire data, thereby ensuring high availability of the service.
Step 318: if the acquired cache data lock is overtime, the process is directly ended.
Step 320: when the service data exists in the near-end cache system, it is first determined whether the cached data is logically expired. If so, step 322 is performed; otherwise, the cached data is returned directly to the service system and the process ends.
Step 322: similar to step 306, the cache data lock corresponding to the key value is obtained, and then step 324 is performed.
Step 324: if a cache data lock is acquired, go to step 326; otherwise, returning the service data and ending the process.
Step 326: the remote service data is acquired and step 328 is performed.
Step 328: if the acquisition succeeds, go to step 314; if the acquisition fails (i.e., the remote server is abnormal), step 316 is executed.
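The flow of steps 302 through 328 can be condensed into a single function. This is a hedged sketch with injected dependencies; `load_remote`, `acquire_lock`, and `is_logically_expired` are hypothetical callables standing in for the component's internals, and lock release is elided for brevity:

```python
def fetch(cache, key, load_remote, acquire_lock, is_logically_expired,
          lock_timeout=2.0):
    """Sketch of the cache component's flow: miss path (steps 306-318),
    fresh-hit path (step 320), and stale-hit refresh with disaster
    recovery fallback (steps 322-328)."""
    entry = cache.get(key)
    if entry is None:                               # cache miss
        if not acquire_lock(key, timeout=lock_timeout):
            return None                             # lock timed out: end flow (step 318)
        entry = cache.get(key)                      # re-check after taking the lock
        if entry is None:
            entry = load_remote(key)                # step 312: fetch remote data
            cache[key] = entry                      # step 314: never physically expires
        return entry                                # step 316
    if not is_logically_expired(entry):             # hit and still fresh (step 320)
        return entry
    if not acquire_lock(key, timeout=None):         # steps 322-324: no timeout set
        return entry                                # another request refreshes; serve stale
    try:
        fresh = load_remote(key)                    # step 326
    except Exception:
        return entry                                # remote abnormal: fall back to stale data
    cache[key] = fresh                              # step 328 -> step 314
    return fresh
```

Note how both failure modes degrade gracefully: a lock timeout on a miss ends the flow, while a remote failure on a stale hit returns the physically retained entry as fallback data.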
The scheme of the embodiment of the specification has at least the following advantages:
(1) In a disaster situation, physically hit data is used as disaster recovery fallback data, ensuring high availability of the service.
(2) By setting the cache data lock, the remote end can be protected in a high concurrency scene.
(3) By setting a timeout mechanism, unnecessary waiting and resource waste are prevented, and the condition that a request queue always waits in an extreme condition and overload occurs to cause system avalanche is prevented.
(4) The first business data with higher importance degree and the second business data with lower importance degree are separately cached, the first business data is never evicted from the cache, and the first cached second business data is evicted only when the corresponding cache region is full, so that disaster tolerance of key businesses is ensured.
As shown in fig. 4, an embodiment of the present specification further provides a cache disaster recovery method, where the method includes:
step 402: if acquiring the remote service data times out, searching for the service data in the cache system;
step 404: sending the searched service data to a service system;
wherein, the service data is cached to the cache system according to the data caching method of any embodiment.
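A minimal sketch of this disaster recovery flow, assuming a hypothetical `fetch_remote_with_timeout` and a simple dict-backed stand-in for the cache system:

```python
import socket

CACHE = {}  # near-end cache: key -> service data, populated by the caching method

def fetch_remote_with_timeout(key, timeout=0.2):
    """Hypothetical remote fetch; raises on timeout or network failure."""
    raise socket.timeout("remote end unreachable")  # simulate a disaster scenario

def get_with_disaster_recovery(key):
    """Steps 402-404: on a remote timeout, fall back to the cache system."""
    try:
        return fetch_remote_with_timeout(key)
    except (socket.timeout, OSError):
        # Step 402: remote acquisition timed out -> search the cache system.
        # Step 404: send whatever was found back to the service system.
        return CACHE.get(key)
```

Because the caching method stores data with a permanent physical validity period, this fallback read can succeed even long after the logical validity period has passed.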
Corresponding to the method embodiments, the embodiments of this specification also provide embodiments of an apparatus, a computer storage medium, and a cache system.
As shown in fig. 5, fig. 5 is a block diagram of a data caching apparatus according to an exemplary embodiment shown in this specification, where the apparatus includes:
a first searching module 502, configured to search service data in a cache system when receiving a service data request sent by a service system;
a first obtaining module 504, configured to obtain a logic validity period of the service data if the service data is found; wherein the logical validity period is a validity period of the service data corresponding to a service;
a first caching module 506, configured to cache the service data from a remote location to the caching system if the logical validity period has passed, and set a physical validity period of the service data in the caching system as permanent validity.
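The split between a service-level logical validity period (checked by module 504) and a permanent physical validity period (set by module 506) might be represented as follows; the class and field names are illustrative, not part of the embodiment.

```python
import time
from dataclasses import dataclass

@dataclass
class CacheEntry:
    value: object
    logical_expiry: float  # service-level validity period, checked on every read
    # Deliberately no physical TTL field: once written, an entry persists
    # until the cache is full and eviction is required.

    def logically_expired(self) -> bool:
        return time.time() >= self.logical_expiry

def store(cache: dict, key, value, logical_ttl: float):
    """Write remote data into the cache with permanent physical validity."""
    cache[key] = CacheEntry(value, time.time() + logical_ttl)
```

A logically expired entry is thus still physically present and readable, which is what makes it usable as disaster recovery data.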
As shown in fig. 6, fig. 6 is a block diagram of a cache disaster recovery apparatus according to an exemplary embodiment shown in this specification, where the apparatus includes:
a second searching module 602, configured to search for the service data in the cache system if acquiring the remote service data times out;
a sending module 604, configured to send the found service data to a service system;
wherein, the service data is cached to the cache system according to the data caching method of any embodiment.
The specific details of the implementation process of the functions and actions of each module in the device are referred to the implementation process of the corresponding step in the method, and are not described herein again.
The apparatus embodiments of this specification can be applied to a computer device, such as a server or a terminal device. The apparatus embodiments may be implemented by software, by hardware, or by a combination of the two. Taking a software implementation as an example, the apparatus, as a logical device, is formed by the processor of the computer device in which it resides reading the corresponding computer program instructions from nonvolatile memory into memory and running them. In terms of hardware, fig. 7 shows a hardware structure diagram of the computer device in which the apparatus of this specification resides; besides the processor 702, the memory 704, the network interface 706, and the nonvolatile memory 708 shown in fig. 7, the server or electronic device in which the apparatus resides may also include other hardware according to the actual function of the computer device, which is not described again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the modules described as separate components may or may not be physically separate, and the components displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in the specification. One of ordinary skill in the art can understand and implement without inventive effort.
Accordingly, the embodiments of the present specification also provide a computer storage medium, in which a program is stored, and the program, when executed by a processor, implements the method in any of the above embodiments.
Accordingly, an embodiment of the present specification further provides a cache system, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any of the above embodiments when executing the program.
Further, as shown in fig. 8, the cache system includes a general cache unit 802 and a white list cache unit 804; the white list cache unit 804 is configured to store first service data of high importance, and the general cache unit 802 is configured to store second service data of low importance; when the general cache unit 802 is full, the earliest-cached second service data is evicted from the cache.
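One way to sketch the general/white-list split of fig. 8 is a fixed-capacity general unit with first-in-first-out eviction; the `TieredCache` class and its method names are illustrative assumptions, not the embodiment's actual structure.

```python
from collections import OrderedDict

class TieredCache:
    """Unit 804 (white list) never evicts; unit 802 (general) evicts
    its earliest-cached entry when full."""

    def __init__(self, general_capacity, whitelist_keys):
        self.general_capacity = general_capacity
        self.whitelist_keys = set(whitelist_keys)
        self.whitelist = {}            # first service data: high importance
        self.general = OrderedDict()   # second service data: insertion order kept

    def put(self, key, value):
        if key in self.whitelist_keys:
            self.whitelist[key] = value          # never evicted from the cache
            return
        if key not in self.general and len(self.general) >= self.general_capacity:
            self.general.popitem(last=False)     # evict the earliest-cached entry
        self.general[key] = value

    def get(self, key):
        if key in self.whitelist_keys:
            return self.whitelist.get(key)
        return self.general.get(key)
```

Keeping the two units isolated means a burst of low-importance traffic can never push key-service data out of the cache.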
This application may take the form of a computer program product embodied on one or more storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having program code embodied therein. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (12)

1. A method of data caching, the method comprising:
searching service data in a cache system when receiving a service data request sent by a service system; each service data corresponds to a unique cache data lock, only one service data request in a plurality of service data requests requesting the same service data at each time point acquires the cache data lock, and the service data request acquiring the cache data lock can acquire the service data corresponding to the cache data lock from a remote end;
if the business data is not searched in the cache system, after the cache data lock corresponding to the business data is obtained, searching the business data in the cache system again; and if the service data is not found, utilizing a cache data lock corresponding to the service data to acquire the service data from a remote end and cache the service data in the cache system.
2. The method according to claim 1, wherein if the service data is found in a cache system, obtaining a logic validity period of the service data; wherein the logical validity period is a validity period of the service data corresponding to a service;
if the logic validity period is over, caching the service data from a remote end to the cache system, and setting the physical validity period of the service data in the cache system as permanent validity;
and if the logic validity period is not over, returning the service data searched in the cache system to the service system.
3. The method according to claim 1 or 2, the step of caching the traffic data from a remote end to the caching system comprising:
and caching the first service data with high importance degree and the second service data with low importance degree into different areas which are isolated from each other in the cache system respectively.
4. The method of claim 3, prior to caching the traffic data from a remote location to the caching system, further comprising:
and if the cache unit for storing the second business data is full, the second business data with the longest cache time is evicted from the cache.
5. The method of claim 1, further comprising:
and returning the service data in the cache system to the service system.
6. The method of claim 2, wherein caching the service data from the remote location to the cache system if the logical validity period has passed comprises:
if the logic validity period is over, judging whether the service data request carries a cache data lock;
and if so, acquiring the service data from a remote end by using the cache data lock and caching the service data to the cache system.
7. A cache disaster recovery method, the method comprising:
if acquiring the remote service data times out, searching for the service data in the cache system;
sending the searched service data to a service system;
wherein the service data is cached to the caching system according to the method of any one of claims 1 to 6.
8. A data caching apparatus, the apparatus comprising:
the searching module is used for searching the service data in the cache system when receiving a service data request sent by the service system; each service data corresponds to a unique cache data lock, only one service data request in a plurality of service data requests requesting the same service data at each time point acquires the cache data lock, and the service data request acquiring the cache data lock can acquire the service data corresponding to the cache data lock from a remote end;
the cache module is used for searching the service data in the cache system again after the cache data lock corresponding to the service data is obtained if the service data is not searched in the cache system; and if the service data is not found, utilizing a cache data lock corresponding to the service data to acquire the service data from a remote end and cache the service data in the cache system.
9. A cache disaster recovery apparatus, the apparatus comprising:
the searching module is used for searching for the service data in the cache system if acquiring the remote service data times out;
the sending module is used for sending the searched service data to a service system;
wherein the service data is cached to the caching system according to the method of any one of claims 1 to 6.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 7.
11. A cache system comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of claims 1 to 7 when executing the program.
12. The cache system of claim 11, the cache system comprising a general cache unit and a whitelist cache unit;
the white list cache unit is used for storing first service data with high importance degree, and the general cache unit is used for storing second service data with low importance degree;
and when the general cache unit is full, the first cached second business data is evicted from the cache.
CN202211086091.6A 2018-06-25 2018-06-25 Data caching method, data caching system, data caching disaster tolerance method, data caching disaster tolerance system and data caching system Pending CN115470026A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211086091.6A CN115470026A (en) 2018-06-25 2018-06-25 Data caching method, data caching system, data caching disaster tolerance method, data caching disaster tolerance system and data caching system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810662821.XA CN109062717B (en) 2018-06-25 2018-06-25 Data caching method, data caching system, data caching disaster tolerance method, data caching disaster tolerance system and data caching system
CN202211086091.6A CN115470026A (en) 2018-06-25 2018-06-25 Data caching method, data caching system, data caching disaster tolerance method, data caching disaster tolerance system and data caching system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201810662821.XA Division CN109062717B (en) 2018-06-25 2018-06-25 Data caching method, data caching system, data caching disaster tolerance method, data caching disaster tolerance system and data caching system

Publications (1)

Publication Number Publication Date
CN115470026A true CN115470026A (en) 2022-12-13

Family

ID=64821508

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202211086091.6A Pending CN115470026A (en) 2018-06-25 2018-06-25 Data caching method, data caching system, data caching disaster tolerance method, data caching disaster tolerance system and data caching system
CN201810662821.XA Active CN109062717B (en) 2018-06-25 2018-06-25 Data caching method, data caching system, data caching disaster tolerance method, data caching disaster tolerance system and data caching system

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201810662821.XA Active CN109062717B (en) 2018-06-25 2018-06-25 Data caching method, data caching system, data caching disaster tolerance method, data caching disaster tolerance system and data caching system

Country Status (1)

Country Link
CN (2) CN115470026A (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111935490B (en) * 2019-05-13 2023-12-05 深圳市茁壮网络股份有限公司 Live broadcast recording and streaming disaster recovery processing method and system
CN111125175B (en) * 2019-12-20 2023-09-01 北京奇艺世纪科技有限公司 Service data query method and device, storage medium and electronic device
CN111813792A (en) * 2020-06-22 2020-10-23 上海悦易网络信息技术有限公司 Method and equipment for updating cache data in distributed cache system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100825721B1 (en) * 2005-12-08 2008-04-29 한국전자통신연구원 System and method of time-based cache coherency maintenance in user file manager of object-based storage system
CN101090401B (en) * 2007-05-25 2011-05-18 金蝶软件(中国)有限公司 Data buffer store method and system at duster environment
CN105302840B (en) * 2014-07-31 2019-11-15 阿里巴巴集团控股有限公司 A kind of buffer memory management method and equipment
CN105373369A (en) * 2014-08-25 2016-03-02 北京皮尔布莱尼软件有限公司 Asynchronous caching method, server and system
CN105338088B (en) * 2015-11-04 2019-09-03 国家电网公司 A kind of mobile P 2 P network buffer replacing method
US10353822B2 (en) * 2016-03-25 2019-07-16 Home Box Office, Inc. Cache map with sequential tracking for invalidation

Also Published As

Publication number Publication date
CN109062717A (en) 2018-12-21
CN109062717B (en) 2022-07-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination