CN111782698A - Cache updating method and device and electronic equipment - Google Patents

Cache updating method and device and electronic equipment

Info

Publication number: CN111782698A
Application number: CN202010631956.7A
Authority: CN (China)
Prior art keywords: cache, level cache, request, updating, data
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 黄江南
Current assignee: Guangzhou Tiantu Network Technology Co ltd
Original assignee: Guangzhou Tiantu Network Technology Co ltd
Application filed by Guangzhou Tiantu Network Technology Co ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; database structures therefor; file system structures therefor
    • G06F 16/20: Information retrieval of structured data, e.g. relational data
    • G06F 16/24: Querying
    • G06F 16/245: Query processing
    • G06F 16/2455: Query execution
    • G06F 16/24552: Database cache management
    • G06F 16/23: Updating
    • G06F 16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor
    • G06F 16/273: Asynchronous replication or reconciliation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure relates to a cache updating method and apparatus, and an electronic device. The method includes the following steps: acquiring a data access request initiated by a request end; and after response data corresponding to the data access request is acquired from the cache and sent to the request end, performing an asynchronous update on the cache. The scheme provided by the disclosure can improve response efficiency, reduce stalls caused by cache updating, improve the server's capacity to handle concurrent requests, and enhance the user experience.

Description

Cache updating method and device and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a cache updating method and apparatus, and an electronic device.
Background
A cache is an important component of a distributed system: it mainly addresses the performance of data access under high concurrency and large data volumes, and provides fast, high-performance access to data.
In the related art, a cache may be set up at the server. After the server receives an access request from the request end, if data corresponding to the request exists in the cache, that data is returned directly, improving the response speed. However, the cache frameworks of the related art all update the cache synchronously: data is returned to the request end only after the cache update completes. The synchronous update process can be very slow, which hurts response efficiency, and if the update operation fails or takes too long, a synchronously updated cache blocks very easily, stalling or even crashing the entire system.
Therefore, the cache updating methods of the related art still need improvement.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a cache updating method, apparatus, and electronic device, which can improve response efficiency and reduce stalls caused by cache updating.
According to a first aspect of the present disclosure, there is provided a cache updating method, including:
acquiring a data access request initiated by a request end;
and after response data corresponding to the data access request is acquired from the cache and sent to the request end, performing an asynchronous update on the cache.
In an embodiment, acquiring the response data corresponding to the data access request from a cache, sending the response data to the request end, and then performing an asynchronous update on the cache includes:
in the case where both a first-level cache and a second-level cache are provided, acquiring the response data corresponding to the data access request from the first-level cache, sending the response data to the request end, and then performing an asynchronous update on the second-level cache;
wherein the first-level cache acquires data from the asynchronously updated second-level cache for its own update.
In one embodiment, the step of the first-level cache acquiring data from the asynchronously updated second-level cache for updating includes:
after the second-level cache performs the asynchronous update, storing the key of the second-level cache into a message queue;
and the first-level cache acquiring data from the second-level cache for updating according to the key of the second-level cache in the message queue.
In an embodiment, acquiring the response data corresponding to the data access request from a cache, sending the response data to the request end, and then performing an asynchronous update on the cache includes:
in the case where a first-level cache is provided and no second-level cache is provided, acquiring the response data from the first-level cache, sending the response data to the request end, and then performing an asynchronous update on the first-level cache.
In an embodiment, acquiring the response data corresponding to the data access request from a cache, sending the response data to the request end, and then performing an asynchronous update on the cache includes:
acquiring the response data corresponding to the data access request from a cache, sending the response data to the request end, and performing an asynchronous update on the cache after determining that a preset update time has been reached.
In one embodiment, the method further includes:
in the case where neither a first-level cache nor a second-level cache is provided, acquiring the response data corresponding to the data access request from the destination of the data access request and sending the response data to the request end;
and establishing a first-level cache and a second-level cache, and storing the acquired response data into the newly established first-level and second-level caches.
According to a second aspect of the present disclosure, there is provided a cache updating apparatus, including:
a request acquisition module, configured to acquire a data access request initiated by a request end;
a request response module, configured to acquire, from a cache, response data corresponding to the data access request acquired by the request acquisition module, and to send the response data to the request end;
and an asynchronous update module, configured to perform an asynchronous update on the cache after the request response module sends the response data to the request end.
In one embodiment, the asynchronous update module includes:
a first update submodule, configured to, in the case where both a first-level cache and a second-level cache are provided, perform an asynchronous update on the second-level cache after the request response module acquires the response data corresponding to the data access request from the first-level cache and sends it to the request end, wherein the first-level cache acquires data from the asynchronously updated second-level cache for its own update; or
a second update submodule, configured to, in the case where a first-level cache is provided and no second-level cache is provided, perform an asynchronous update on the first-level cache after the request response module acquires the response data from the first-level cache and sends it to the request end.
In one embodiment, after the second-level cache performs the asynchronous update, the first update submodule stores the key of the second-level cache into a message queue; and the first-level cache acquires data from the second-level cache for updating according to the key of the second-level cache in the message queue.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method as described above.
According to a fourth aspect of the present disclosure, there is provided a non-transitory machine-readable storage medium having stored thereon executable code which, when executed by a processor of an electronic device, causes the processor to perform the method as described above.
The technical solution provided by the present disclosure can have the following beneficial effects. In the related art, synchronous cache updating forces the interface of the request end to wait for the cache update: data can be returned to the request end only after the synchronous update completes, and a slow update blocks the response. In the embodiments of the present disclosure, the response data corresponding to the data access request is obtained from the cache and sent to the request end first, and only then is an asynchronous update performed on the cache. In other words, when a cache exists, data is returned preferentially even if the cached entry has expired, after which the cache is updated asynchronously. Even if the asynchronous update stalls, the response is unaffected: the cached data has already been fetched and returned to the request end, and the update runs on other thread pools, so a stalled update does not prevent cached data from being returned quickly to the foreground request end. The scheme of this embodiment can therefore improve response efficiency, reduce stalls caused by cache updating, and enhance the user experience.
According to the scheme of the embodiments of the present disclosure, different processing can be performed in different situations. For example, when both a first-level cache and a second-level cache are provided, the response data corresponding to the data access request is obtained from the first-level cache and sent to the request end, and an asynchronous update is then performed on the second-level cache, with the first-level cache acquiring data from the asynchronously updated second-level cache for its own update. When a first-level cache is provided and no second-level cache is provided, the response data is obtained from the first-level cache and sent to the request end, and an asynchronous update is then performed on the first-level cache. When neither a first-level cache nor a second-level cache is provided, the response data corresponding to the data access request is obtained from the destination of the request and sent to the request end; a first-level cache and a second-level cache are then established, and the acquired response data is stored in both newly established caches.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in greater detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
FIG. 1 is a flowchart illustrating a cache update method according to an exemplary embodiment of the present disclosure;
FIG. 2 is another flowchart illustrating a cache update method according to an exemplary embodiment of the present disclosure;
FIG. 3 is another flowchart illustrating a cache update method according to an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram illustrating a cache update apparatus according to an exemplary embodiment of the present disclosure;
FIG. 5 is another schematic structural diagram illustrating a cache update apparatus according to an exemplary embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used in this disclosure to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present disclosure, "a plurality" means two or more unless specifically limited otherwise.
The present disclosure provides a cache updating method that can improve response efficiency, reduce stalls caused by cache updating, and enhance the user experience.
Technical solutions of embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating a cache update method according to an exemplary embodiment of the present disclosure.
Referring to fig. 1, the method includes:
In step 101, a data access request initiated by a request end is acquired.
The server receives the data access request initiated by a request end, such as a client.
In step 102, after response data corresponding to the data access request is obtained from the cache and sent to the request end, an asynchronous update is performed on the cache.
In this step 102, when both a first-level cache and a second-level cache are provided, the response data corresponding to the data access request is obtained from the first-level cache and sent to the request end, and an asynchronous update is then performed on the second-level cache; the first-level cache acquires data from the asynchronously updated second-level cache for its own update. Alternatively, when a first-level cache is provided and no second-level cache is provided, the asynchronous update may be performed on the first-level cache after the response data is obtained from it and sent to the request end.
As this embodiment shows, in the related art, when the cache is updated synchronously, the interface of the request end waits for the cache to be updated: data can be returned only after the synchronous update completes, and a slow update blocks the response. In the embodiment of the present disclosure, the response data corresponding to the data access request is obtained from the cache and sent to the request end first, and the cache is then updated asynchronously. When a cache exists, data is returned preferentially even if the cached entry has expired, after which the cache is updated asynchronously. Even if the asynchronous update stalls, the response is unaffected, because the cached data is fetched first and returned to the request end while the update runs on other thread pools; a stalled update does not prevent cached data from being returned quickly to the foreground request end. The scheme of this embodiment can therefore improve response efficiency, reduce stalls caused by cache updating, and enhance the user experience.
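The read-then-refresh flow of steps 101 and 102 can be sketched in Java as follows. This is a minimal illustration, not the patented implementation: the class name AsyncRefreshCache, the loader function, and the use of a fixed thread pool are assumptions made for the example.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

// Illustrative sketch: return the (possibly stale) cached value immediately,
// then refresh the cache on a separate thread pool so a slow update
// cannot block the response.
class AsyncRefreshCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final ExecutorService refreshPool = Executors.newFixedThreadPool(2);

    AsyncRefreshCache(Map<String, String> seed) { cache.putAll(seed); }

    // Responds from the cache first, then schedules an asynchronous refresh
    // using the supplied loader (which stands in for the original data source).
    String get(String key, Function<String, String> loader) {
        String value = cache.get(key);
        refreshPool.submit(() -> cache.put(key, loader.apply(key)));
        return value;
    }

    // Reads the current cache contents without triggering a refresh.
    String peek(String key) { return cache.get(key); }

    // Drains the refresh pool; useful for deterministic shutdown.
    void shutdown() {
        refreshPool.shutdown();
        try {
            refreshPool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Because the refresh is submitted to a separate pool, a slow or failing loader delays only the background task, never the response that has already been returned.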
Fig. 2 is a flowchart illustrating a cache update method according to an exemplary embodiment of the disclosure. Fig. 2 depicts aspects of the present disclosure in more detail relative to fig. 1.
Referring to fig. 2, the method includes:
In step 201, a data access request initiated by a request end is acquired.
The server receives the data access request initiated by a request end, such as a client.
In step 202, when both a first-level cache and a second-level cache are provided, response data corresponding to the data access request is obtained from the first-level cache and sent to the request end, and an asynchronous update is then performed on the second-level cache; the first-level cache acquires data from the asynchronously updated second-level cache for its own update.
The step of the first-level cache acquiring data from the asynchronously updated second-level cache may include: after the second-level cache performs the asynchronous update, storing the key of the second-level cache into a message queue; and the first-level cache acquiring data from the second-level cache for updating according to the key of the second-level cache in the message queue.
In step 203, when a first-level cache is provided and no second-level cache is provided, the response data is obtained from the first-level cache and sent to the request end, and an asynchronous update is then performed on the first-level cache.
If the cache were updated synchronously, the interface of the request end would wait for the cache update, and when the cache expires the interface speed would also depend on the cache synchronization speed. With asynchronous updating, when a cache exists, data can be returned preferentially even if the cached entry has expired, and the cache is then updated asynchronously. Therefore, in this step, when a first-level cache is provided and no second-level cache is provided, the response data is obtained from the first-level cache and sent to the request end before the first-level cache is updated asynchronously.
In step 204, when neither a first-level cache nor a second-level cache is provided, the response data corresponding to the data access request is obtained from the destination of the request and sent to the request end; a first-level cache and a second-level cache are then established, and the acquired response data is stored in both newly established caches.
In the case where neither a first-level cache nor a second-level cache exists, after the response data is returned to the request end, the returned data is also put into the caches: the first-level and second-level caches are established at the same time, a copy of the data is stored in the second-level cache, and a copy is then stored in the first-level cache.
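The miss path of step 204 can be sketched as follows: fetch from the data source, then create both cache levels and store a copy of the response in each. TwoLevelCacheBuilder and the source function are illustrative assumptions, not names from the patent.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative sketch of step 204: with neither cache configured, fetch the
// data from the destination (e.g. a database), then establish both cache
// levels and store a copy in each (second level first, then first level).
class TwoLevelCacheBuilder {
    Map<String, String> firstLevel;   // heap cache, created on demand
    Map<String, String> secondLevel;  // stands in for a Redis cache

    String handleMiss(String key, Function<String, String> source) {
        String data = source.apply(key);              // go to the destination
        if (secondLevel == null) secondLevel = new ConcurrentHashMap<>();
        if (firstLevel == null) firstLevel = new ConcurrentHashMap<>();
        secondLevel.put(key, data);                   // copy into new second level
        firstLevel.put(key, data);                    // then into new first level
        return data;                                  // response for the request end
    }
}
```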
It should be noted that there is no necessary ordering among steps 202 to 204; they are the processing for different situations.
As this embodiment shows, the scheme of the present disclosure can handle a variety of situations, is more flexible in application, and better meets different user requirements.
FIG. 3 is a flowchart illustrating a cache update method according to an exemplary embodiment of the present disclosure, and describes the scheme of the present disclosure in more detail than FIGS. 1 and 2. By combining a second-level cache, asynchronous cache updates, and synchronization of the first-level cache through a message queue, the embodiment of the disclosure can greatly improve the concurrency and request throughput of the system.
Embodiments of the present disclosure involve a first-level cache and a second-level cache. The first-level cache is an in-heap cache: it is very fast to read, but its size must be limited, since an unbounded heap cache would cause memory overflow, and its contents are lost when the Java instance restarts. The second-level cache is a Remote Dictionary Server (Redis) cache, an open-source, network-capable, memory-based key-value database that can also persist data. It reads more slowly than a heap cache but far faster than an on-disk database, its size is effectively unlimited, and because it is shared among nodes there is no cache-inconsistency problem. In general, if the cached data volume of a service is very small and changes rarely, only a first-level cache need be established. If the data volume of a service is large, only a second-level cache may be established, in which case the two levels have no relationship. When the data volume is moderate, the update frequency is high, and the real-time requirement is not strict, a first-level cache and a second-level cache should be established together, with the second level backing the first level when it is insufficient; the system then gains both the better performance of the first level and the larger capacity of the second.
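The division of labor described above can be imitated with plain JDK types: a size-bounded LinkedHashMap in access order stands in for the heap (first-level) cache, and an ordinary Map stands in for the shared Redis (second-level) cache. All names here are illustrative assumptions, and the tiny capacity is only to show eviction.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the two-level layout: a small, bounded first level
// in front of a large shared second level, with L1-miss falling back to L2.
class TwoLevelLookup {
    private static final int L1_CAPACITY = 2;

    // Access-ordered LinkedHashMap: new data evicts the least recently used
    // entry once the first level exceeds its capacity.
    private final Map<String, String> l1 = new LinkedHashMap<>(16, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<String, String> e) {
            return size() > L1_CAPACITY;
        }
    };
    private final Map<String, String> l2; // effectively unlimited, shared

    TwoLevelLookup(Map<String, String> l2) { this.l2 = l2; }

    String get(String key) {
        String v = l1.get(key);
        if (v == null) {                  // L1 miss: fall back to L2
            v = l2.get(key);
            if (v != null) l1.put(key, v); // promote into the faster level
        }
        return v;
    }
}
```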
Referring to fig. 3, the method includes:
In step 301, a data access request initiated by a request end is acquired.
Taking a Java web backend server as an example, the server receives the data access request initiated by a request end, such as a client.
In step 302, the server determines whether a first-level cache is provided. If so, the process proceeds to step 303; if not, it proceeds to step 310.
In step 303, response data corresponding to the data access request is obtained from the first-level cache and sent to the request end, and the process proceeds to step 304.
When the request end initiates a data access request, the response data corresponding to the request is obtained from the first-level cache and returned to the request end. The request is thus satisfied first, the request end does not keep waiting for data, and response efficiency is improved.
In step 304, it is determined whether a second-level cache is provided. If so, the process proceeds to step 305; if not, it proceeds to step 308.
After the response data corresponding to the data access request has been obtained from the first-level cache and sent to the request end, this step continues by checking whether a second-level cache is configured.
It should be noted that the first-level and second-level caches generally store the same content, and which levels exist is normally configured at initialization. The first-level cache holds the same data as the second-level cache, but it does not necessarily hold all of it, because its size is limited; once an entry is present in the first level, however, it matches the corresponding entry in the second level. For example, if some data is small and changes very rarely, only a first-level cache is configured. If a service module's data volume is large, only a second-level cache is set, and that service's data has no first-level cache. Modules whose data volume is moderate and that update with some frequency can be configured with both levels: the first level is faster, the second level holds more data, new entries in the first level evict old ones, the second level is almost unlimited, and when the first level misses, the data is fetched from the second level.
In step 305, it is determined whether the second-level cache has reached the preset update time. If so, the process proceeds to step 306; if not, it returns to step 305.
This embodiment sets the cache update mode to asynchronous update. Asynchronous updating improves response efficiency and reduces stalls caused by cache updating. A preset update time can be configured for the asynchronous update, for example every 30 seconds, and the time of each cache update can be recorded. When the request end accesses data, the cached data is fetched and returned to the request end first; it is then checked whether more than 30 seconds have elapsed since the last cache update. If so, step 306 is executed to perform the asynchronous update; if not, the process returns to step 305 and continues to monitor the update time. The 30 seconds here is only an example: the preset update time could equally be set to every 40 seconds or every 5 seconds.
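The preset-update-time check of step 305 can be sketched as a per-key timestamp gate. The class name UpdateTimeGate and the explicit clock parameter are assumptions made so the logic is easy to exercise; the 30-second interval mirrors the example above.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sketch: each key remembers when it was last refreshed, and an
// asynchronous update is triggered only once the configured interval elapses.
class UpdateTimeGate {
    private final long intervalMillis;
    private final ConcurrentMap<String, Long> lastUpdated = new ConcurrentHashMap<>();

    UpdateTimeGate(long intervalMillis) { this.intervalMillis = intervalMillis; }

    // Returns true (recording the new timestamp) if the key is due a refresh,
    // false if the preset update time has not yet been reached.
    boolean shouldUpdate(String key, long nowMillis) {
        Long last = lastUpdated.get(key);
        if (last != null && nowMillis - last < intervalMillis) {
            return false;                 // too soon: keep serving cached data
        }
        lastUpdated.put(key, nowMillis);
        return true;                      // interval elapsed: schedule async update
    }
}
```

Passing the current time in explicitly is only a testing convenience; a real implementation would read the system clock.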
In step 306, an asynchronous update is performed on the second-level cache, and the process proceeds to step 307.
This step performs an asynchronous update on the second-level cache. If the cache were updated synchronously, the interface of the request end would wait for the cache update: data could be returned only after the synchronous update completed, and a slow update would block the response; when the cache expires, the interface speed would likewise depend on the cache synchronization speed. With asynchronous updating, when a cache exists, data is returned preferentially even if the cached entry has expired, and the cache is then updated asynchronously. Even if the asynchronous update stalls, it has no effect on the response, because the cached data is fetched first and returned to the request end while the update runs on other thread pools; a stalled update does not prevent cached data from being returned quickly to the foreground request end. System request throughput can thus be greatly improved under high concurrency; the only cost is that the first request to hit the cache after a change receives slightly older data. Normally, most service data has modest real-time requirements, and under high concurrency guaranteeing system throughput matters more, so sacrificing the freshness of a few responses is acceptable.
When the data is updated asynchronously, the original method can be invoked to obtain the data from the database. For example, the original interface method and its parameters can be obtained using the Java reflection mechanism, and the original method can then be executed to fetch the data.
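The reflection idea can be sketched as follows: record the original method name and parameters, then re-invoke that method during the asynchronous refresh. ProductService and ReflectiveLoader are illustrative names, and beyond the standard Java reflection mechanism the patent does not fix a particular API.

```java
import java.lang.reflect.Method;

// Stands in for a real service method that queries the database.
class ProductService {
    public String loadFromDatabase(String id) {
        return "data-for-" + id;
    }
}

// Illustrative sketch: re-execute the recorded method on the target object
// with the recorded arguments to fetch fresh data during the async refresh.
class ReflectiveLoader {
    static Object reload(Object target, String methodName,
                         Class<?>[] paramTypes, Object[] args) {
        try {
            Method m = target.getClass().getMethod(methodName, paramTypes);
            return m.invoke(target, args);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("reload failed", e);
        }
    }
}
```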
In step 307, the first-level cache acquires data from the asynchronously updated second-level cache for its own update.
After the second-level cache performs the asynchronous update, its key can be stored in a message queue; the first-level cache then acquires data from the second-level cache according to the key in the message queue. That is, the original method is invoked to synchronize the second-level cache first, and the first-level cache is synchronized afterwards: first-level synchronization consists of receiving the message from the queue and then fetching the data directly from the second-level cache.
Taking multiple servers as multiple nodes as an example, one node first updates the second-level cache asynchronously and then notifies the other nodes through a message queue; the message content is the key of the second-level cache, not the data itself. After receiving the notification, the other nodes fetch the data from the second-level cache according to the key in the message queue and synchronize it into their first-level caches.
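The key-only notification of step 307 can be sketched with a BlockingQueue standing in for the message broker and Maps standing in for the two cache levels. KeySyncNode and its method names are assumptions made for the example.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: a node that refreshes the shared second level publishes
// only the cache KEY; consuming nodes copy the value from the second level
// into their own first level.
class KeySyncNode {
    final Map<String, String> firstLevel = new ConcurrentHashMap<>();
    private final Map<String, String> secondLevel; // shared across nodes
    private final BlockingQueue<String> queue;     // stands in for a broker

    KeySyncNode(Map<String, String> secondLevel, BlockingQueue<String> queue) {
        this.secondLevel = secondLevel;
        this.queue = queue;
    }

    // Called on the node that performed the asynchronous update.
    void publishUpdate(String key, String freshValue) {
        secondLevel.put(key, freshValue);  // update L2 first
        queue.offer(key);                  // notify others with the key only
    }

    // Called on each node consuming the queue.
    void consumeOne() {
        String key = queue.poll();
        if (key != null) {
            firstLevel.put(key, secondLevel.get(key)); // pull data from L2
        }
    }
}
```

Sending only the key keeps the messages small and guarantees every node reads the same authoritative value from the shared second level.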
In step 308, it is determined whether the first-level cache has reached the preset update time. If so, the process proceeds to step 309; if not, it returns to step 308.
This embodiment sets the cache update mode to asynchronous update. A preset update time can be configured, for example every 30 seconds, and the time of each cache update recorded. When the request end accesses data, the cached data is fetched and returned to the request end first; it is then checked whether more than 30 seconds have elapsed since the last cache update. If so, step 309 performs the asynchronous update; if not, the process returns to step 308 and continues to monitor the update time.
In step 309, an asynchronous update is performed on the level one cache.
For the case where there is a first-level cache but no second-level cache, this step performs an asynchronous update on the first-level cache. If the cache were updated synchronously, the interface of the request end would have to wait for the cache update, and when the cache expires the interface speed would depend on the cache synchronization speed. With asynchronous updating, as long as a cache exists, data is returned first even if it has expired, and the asynchronous update is performed afterwards. This greatly improves request throughput under high concurrency; only the first request that hits an expired cache receives slightly stale data. Normally, most service data does not require strict real-time freshness, while under high concurrency guaranteeing system throughput matters more, so sacrificing the freshness of a few responses is acceptable.
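The serve-stale-then-refresh behavior might be sketched with the standard-library `ExecutorService` playing the role of the separate thread pool; the value names are illustrative only:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class StaleWhileRefresh {
    private final AtomicReference<String> cached = new AtomicReference<>("stale-value");
    private final ExecutorService refreshPool = Executors.newSingleThreadExecutor();

    // Return the (possibly expired) cached value immediately and hand the
    // reload to another thread pool, so a slow reload never blocks the caller.
    String get(Callable<String> loader) {
        String value = cached.get();
        refreshPool.submit(() -> {
            try {
                cached.set(loader.call());
            } catch (Exception e) {
                // keep the old value if the reload fails
            }
        });
        return value;
    }

    // Wait for pending refreshes and return the current value (demonstration only).
    String awaitCurrent() throws InterruptedException {
        refreshPool.shutdown();
        refreshPool.awaitTermination(5, TimeUnit.SECONDS);
        return cached.get();
    }
}
```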
In step 310, it is determined whether a second level cache is set, and if the second level cache is set, the process proceeds to step 311, and if the second level cache is not set, the process proceeds to step 312.
For the case of no first-level cache, the step continues to determine whether a second-level cache is provided, if so, step 311 is entered, and if not, step 312 is entered.
In step 311, response data corresponding to the access data request is obtained from the second-level cache and sent to the request end.
For the case where there is no first-level cache but there is a second-level cache, when the request end initiates the data access request, the response data corresponding to the request is obtained from the second-level cache and returned to the request end. It should be noted that in this case an asynchronous cache update may be unnecessary: the data can be fetched directly from the second-level cache and returned to the request end.
In step 312, response data corresponding to the access data request is obtained from the destination of the access data request and sent to the request end, and the process proceeds to step 313.
For the case where neither a first-level cache nor a second-level cache exists, the interface is called and processed normally: the response data corresponding to the access data request is obtained from the destination of the request and sent to the request end.
In step 313, a first level cache and a second level cache are created, and the acquired response data is stored in the newly created first level cache and second level cache.
For the case where neither cache exists, after the response data is returned to the request end, the returned data is also put into the cache. At this point the first-level cache and the second-level cache are established together: a copy of the data is stored in the second-level cache first, and then a copy is stored in the first-level cache.
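The full-miss path might be sketched as follows, with plain `Map`s standing in for the two cache levels and a hypothetical `origin` function standing in for the call to the request's destination:

```java
import java.util.Map;
import java.util.function.Function;

public class CacheMissPath {
    // On a full miss: fetch from the origin, seed the second-level cache first,
    // then the first-level cache, and return the data to the request end.
    static String handleMiss(String key, Map<String, String> l1,
                             Map<String, String> l2, Function<String, String> origin) {
        String data = origin.apply(key);
        l2.put(key, data); // a copy into the second-level cache first
        l1.put(key, data); // then a copy into the first-level cache
        return data;
    }
}
```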
It should be further noted that the first-level cache is generally a heap cache; for example, Caffeine is a high-performance Java 8 based cache library. A heap cache generally limits the number of cache keys to prevent memory overflow, and may use an eviction mechanism such as LRU (Least Recently Used, a common replacement algorithm that evicts the entry unused for the longest time) or LFU (Least Frequently Used). The number of cache keys can be set to 10,000, for example; the eviction mechanism is tied to this limit, e.g. when the number of cached entries exceeds 10,000 the least used entry is evicted, which is built into the open-source cache framework. The second-level cache is generally a distributed cache, for example a Redis cluster used as the second-level cache system; the number of keys is usually not limited, but an expiration time is set to keep the second-level cache from growing without bound. In the embodiment of the present disclosure, the first-level cache lifetime may be set to permanent — for example, the number of keys in the first-level cache is set to 10,000, and the least recently used entry is evicted only when that limit is exceeded. The second-level cache is given an expiration time mainly to prevent cold data from occupying Redis space for too long.
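A bounded LRU heap cache of the kind described can be sketched with the standard library alone (Caffeine would supply this in practice; an access-ordered `LinkedHashMap` gives the same least-recently-used eviction once the key limit — 10,000 in the text, a small number here for illustration — is exceeded):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Stdlib stand-in for a bounded heap cache such as Caffeine: an access-order
// LinkedHashMap evicts the least recently used entry once maxKeys is exceeded.
public class HeapCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxKeys;

    public HeapCache(int maxKeys) {
        super(16, 0.75f, true); // accessOrder = true gives LRU ordering
        this.maxKeys = maxKeys;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxKeys; // evict only when the key limit is exceeded
    }
}
```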
In addition, to maintain cache consistency, a global lock can be taken when the cache is updated asynchronously, so that for identical requests only one request performs the asynchronous update. After the cache has been updated asynchronously, each Java instance is notified via the Redis publish-subscribe function; upon receiving the message, each instance synchronizes the Redis cache entry for the relevant key into its heap cache, ensuring the consistency of all heap caches.
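The only-one-request-refreshes rule might be sketched with an in-process stand-in for the Redis global lock (`putIfAbsent` playing the role Redis `SETNX` plays in the distributed case); the real implementation would also publish a notification after releasing the lock:

```java
import java.util.concurrent.ConcurrentHashMap;

// In-process stand-in for the Redis-based global lock: for a given key, only
// the first caller wins the lock and performs the asynchronous update; all
// concurrent callers for the same key skip the refresh.
public class RefreshLock {
    private final ConcurrentHashMap<String, Boolean> locks = new ConcurrentHashMap<>();

    boolean tryLock(String key) {
        return locks.putIfAbsent(key, Boolean.TRUE) == null;
    }

    void unlock(String key) {
        locks.remove(key);
    }
}
```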
Corresponding to the embodiment of the application function implementation method, the disclosure also provides a cache updating device, electronic equipment and a corresponding embodiment.
Fig. 4 is a schematic structural diagram illustrating a cache updating apparatus according to an exemplary embodiment of the present disclosure.
Referring to fig. 4, the cache update apparatus includes: a request acquisition module 401, a request response module 402, and an asynchronous update module 403.
The request obtaining module 401 is configured to obtain a data access request initiated by a request end, for example a client.
A request response module 402, configured to obtain response data corresponding to the access data request of the request obtaining module 401 from the cache, and send the response data to the request end.
An asynchronous update module 403, configured to perform an asynchronous update on the cache after the request response module 402 sends the response data to the request end. In the case where both a first-level cache and a second-level cache are provided, the asynchronous update module 403 may perform the asynchronous update on the second-level cache after the request response module 402 first obtains the response data corresponding to the access data request from the first-level cache and sends it to the request end; the first-level cache then obtains data from the asynchronously updated second-level cache for updating. Or, alternatively,
in the case where the first-level cache is provided and the second-level cache is not provided, after the request response module 402 first obtains response data from the first-level cache and sends the response data to the request end, the asynchronous update may be performed on the first-level cache.
In the related art, updating the cache synchronously forces the interface of the request end to wait for the cache update: data can be returned to the request end only after the synchronous update completes, and a slow update causes blocking. In the embodiment of the present disclosure, the asynchronous update is performed on the cache only after the response data corresponding to the access data request has been obtained from the cache and sent to the request end. That is, when a cache exists, data is returned first even if the cache has expired, and the asynchronous update is performed afterwards in a separate thread pool; even if the asynchronous update blocks, it does not delay the return of cached data to the foreground request end. The scheme of this embodiment can therefore improve response efficiency, reduce stalls caused by cache updating, and enhance the user experience.
Fig. 5 is a schematic structural diagram illustrating a cache update apparatus according to an exemplary embodiment of the present disclosure.
Referring to fig. 5, the cache update apparatus includes: a request acquisition module 401, a request response module 402, and an asynchronous update module 403.
The functions of the request acquisition module 401, the request response module 402 and the asynchronous update module 403 can be referred to the description in fig. 4.
The asynchronous update module 403 may include: a first update sub-module 4031 or a second update sub-module 4032.
A first update sub-module 4031, configured to, in a case where a first-level cache is provided and a second-level cache is provided, after the request response module 402 first obtains response data corresponding to the access data request from the first-level cache and sends the response data to the request end, perform asynchronous update on the second-level cache; and the first-level cache acquires data from the asynchronously updated second-level cache for updating.
A second updating sub-module 4032, configured to perform an asynchronous update on the first-level cache, in the case where a first-level cache is provided and a second-level cache is not, after the request response module 402 obtains response data from the first-level cache and sends it to the request end. If the cache were updated synchronously, the interface of the request end would have to wait for the cache update, and when the cache expires the interface speed would also depend on the cache synchronization speed. With asynchronous updating, as long as a cache exists, data is returned first even if it has expired, and the update is performed afterwards. Therefore, in this case, response data is obtained from the first-level cache and sent to the request end before the asynchronous update is performed on the first-level cache.
In one embodiment, the first update sub-module 4031 stores the key of the secondary cache into a message queue after the asynchronous update of the secondary cache is performed; and the first-level cache acquires data from the second-level cache for updating according to the key of the second-level cache in the message queue.
It should be further noted that when neither a first-level cache nor a second-level cache is provided, the response data corresponding to the access data request can be obtained from the destination of the request and sent to the request end; a first-level cache and a second-level cache are then established, and the obtained response data is stored into both newly established caches. In this case, after the response data is returned to the request end, the returned data is also put into the cache: the two caches are established together, with a copy of the data stored in the second-level cache first and then a copy stored in the first-level cache.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 6 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Referring to fig. 6, the computing device 1000 includes a memory 1010 and a processor 1020.
The processor 1020 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 1010 may include various types of storage units, such as system memory, read-only memory (ROM), and permanent storage. The ROM may store static data or instructions needed by the processor 1020 or other modules of the computer. The permanent storage may be a read-write storage device, and may be a non-volatile device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, a mass storage device (e.g., a magnetic or optical disk, or flash memory) is used as the permanent storage; in other embodiments, the permanent storage may be a removable storage device (e.g., a floppy disk or optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as dynamic random access memory, and may store instructions and data that some or all of the processors require at runtime. Further, the memory 1010 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory), magnetic disks, and/or optical disks. In some embodiments, the memory 1010 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM or dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., an SD card, mini SD card, or Micro-SD card), or a magnetic floppy disk. Computer-readable storage media do not include carrier waves or transitory electronic signals transmitted by wireless or wired means.
The memory 1010 has stored thereon executable code that, when processed by the processor 1020, may cause the processor 1020 to perform some or all of the methods described above.
The aspects of the present disclosure have been described in detail above with reference to the accompanying drawings. In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. Those skilled in the art should also appreciate that the acts and modules referred to in the specification are not necessarily required by the disclosure. In addition, it can be understood that steps in the method of the embodiment of the present disclosure may be sequentially adjusted, combined, and deleted according to actual needs, and modules in the device of the embodiment of the present disclosure may be combined, divided, and deleted according to actual needs.
Furthermore, the method according to the present disclosure may also be implemented as a computer program or computer program product comprising computer program code instructions for performing some or all of the steps of the above-described method of the present disclosure.
Alternatively, the present disclosure may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) that, when executed by a processor of an electronic device (or computing device, server, or the like), causes the processor to perform some or all of the various steps of the above-described method according to the present disclosure.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A cache update method, comprising:
acquiring an access data request initiated by a request terminal;
and after response data corresponding to the access data request is acquired from the cache and sent to the request end, asynchronous updating is performed on the cache.
2. The method according to claim 1, wherein after the response data corresponding to the access data request is obtained from the cache and sent to the request end, performing asynchronous update on the cache, comprises:
under the condition that a first-level cache is arranged and a second-level cache is arranged, response data corresponding to the access data request are obtained from the first-level cache and sent to the request end, and then asynchronous updating is performed on the second-level cache;
and the first-level cache acquires data from the asynchronously updated second-level cache for updating.
3. The method of claim 2, wherein the first level cache obtaining data from the asynchronously updated second level cache for updating comprises:
after the second-level cache performs asynchronous updating, storing keys of the second-level cache into a message queue;
and the first-level cache acquires data from the second-level cache for updating according to the key of the second-level cache in the message queue.
4. The method according to claim 1, wherein after the response data corresponding to the access data request is obtained from the cache and sent to the request end, performing asynchronous update on the cache, comprises:
under the condition that a first-level cache is arranged and a second-level cache is not arranged, response data are obtained from the first-level cache and sent to the request end, and then asynchronous updating is carried out on the first-level cache.
5. The method according to claim 1, wherein after the response data corresponding to the access data request is obtained from the cache and sent to the request end, performing asynchronous update on the cache, comprises:
and acquiring response data corresponding to the access data request from a cache, sending the response data to the request terminal, and performing asynchronous updating on the cache after judging that the preset updating time is reached.
6. The method according to any one of claims 2 to 4, further comprising:
under the condition that a first-level cache is not arranged and a second-level cache is not arranged, response data corresponding to the access data request are obtained from the destination of the access data request and are sent to the request end;
and establishing a first-level cache and a second-level cache, and storing the acquired response data into the newly established first-level cache and second-level cache.
7. A cache update apparatus, comprising:
the request acquisition module is used for acquiring an access data request initiated by a request terminal;
the request response module is used for acquiring response data corresponding to the access data request of the request acquisition module from a cache and sending the response data to the request end;
and the asynchronous updating module is used for performing asynchronous updating on the cache after the request response module sends the response data to the request end.
8. The apparatus of claim 7, wherein the asynchronous update module comprises:
the first updating submodule is used for acquiring response data corresponding to the access data request from the first-level cache and sending the response data to the request end by the request response module under the condition that the first-level cache is arranged and the second-level cache is arranged, and then performing asynchronous updating on the second-level cache; the first-level cache acquires data from the asynchronously updated second-level cache for updating; or
and the second updating submodule is used for performing asynchronous updating on the first-level cache after the request response module acquires response data from the first-level cache and sends the response data to the request end under the condition that the first-level cache is arranged and the second-level cache is not arranged.
9. The apparatus of claim 8, wherein:
the first updating submodule stores keys of the second-level cache into a message queue after the second-level cache executes asynchronous updating; and the first-level cache acquires data from the second-level cache for updating according to the key of the second-level cache in the message queue.
10. An electronic device, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any one of claims 1-6.
CN202010631956.7A 2020-07-03 2020-07-03 Cache updating method and device and electronic equipment Pending CN111782698A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010631956.7A CN111782698A (en) 2020-07-03 2020-07-03 Cache updating method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN111782698A true CN111782698A (en) 2020-10-16

Family

ID=72758921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010631956.7A Pending CN111782698A (en) 2020-07-03 2020-07-03 Cache updating method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111782698A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105373369A (en) * 2014-08-25 2016-03-02 北京皮尔布莱尼软件有限公司 Asynchronous caching method, server and system
CN109446448A (en) * 2018-09-10 2019-03-08 平安科技(深圳)有限公司 Data processing method and system
CN109684358A (en) * 2017-10-18 2019-04-26 北京京东尚科信息技术有限公司 The method and apparatus of data query
CN110597739A (en) * 2019-06-03 2019-12-20 上海云盾信息技术有限公司 Configuration management method, system and equipment
CN110989939A (en) * 2019-12-16 2020-04-10 中国银行股份有限公司 Data cache processing method, device and equipment and cache component
CN111061654A (en) * 2019-11-11 2020-04-24 支付宝(杭州)信息技术有限公司 Cache refreshing processing method and device and electronic equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201016