CN113486037A - Cache data updating method, manager and cache server - Google Patents


Publication number
CN113486037A
CN113486037A
Authority
CN
China
Prior art keywords
cache, updating, expiration, data, time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110848905.4A
Other languages
Chinese (zh)
Inventor
乔瑞
胡奇
陈斌
河京哲
Current Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Priority date
Application filed by Beijing Jingdong Qianshi Technology Co Ltd filed Critical Beijing Jingdong Qianshi Technology Co Ltd
Priority claimed from CN202110848905.4A
Publication of CN113486037A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 — Information retrieval of structured data, e.g. relational data
    • G06F 16/23 — Updating
    • G06F 16/24552 — Database cache management
    • G06F 16/24578 — Query processing with adaptation to user needs using ranking

Abstract

The disclosure provides a cache data updating method, a manager, and a cache server, relating to the field of computers. The method comprises: storing the key and expiration time corresponding to each piece of cache data in a cache server into a delay queue; when an update operation time arrives, determining, based on the delay queue, whether at least one piece of target information that can initiate an update operation has been acquired; if so, triggering a cache update action for that target information, and performing the update of the cache data and its expired data according to a configured cache update policy; and storing the key and expiration time corresponding to the updated cache data back into the delay queue, to await the next cache update trigger when the next update operation time arrives. The expiration times of all cache data are thus managed centrally, invalid operations during cache updating are reduced, and the risk of concurrent traffic spikes is lowered.

Description

Cache data updating method, manager and cache server
Technical Field
The present disclosure relates to the field of computers, and in particular, to a method, a manager, and a cache server for updating cache data.
Background
Caching is a technique that stores frequently accessed data in a system closer to the user and with faster access, so as to improve data access speed. A cache server is a server that stores such frequently accessed data.
At present, a cache server stores cache data and its expiration time together in the same storage service instance, and manages the expiration time of each piece of cache data by periodically enumerating all cache entries.
Disclosure of Invention
Research shows that this management approach — storing cache data together with its expiration time and periodically enumerating all cache entries — makes the expiration times of cache data inconvenient to manage: unless all cache entries are enumerated, the expiration state of each piece of cache data is difficult to obtain comprehensively.
The embodiments of the present disclosure store the keys and expiration times corresponding to all cache data centrally in a delay queue, so that those expiration times can be managed in one place. Target information that can initiate an update operation is acquired from the delay queue without enumerating all cache entries, which reduces invalid operations; a cache update action is triggered only for that target information, which reduces the risk of concurrent traffic spikes.
Some embodiments of the present disclosure provide a method for updating cache data, comprising:
storing the key and expiration time corresponding to each piece of cache data in a cache server into a delay queue;
when an update operation time arrives, determining, based on the delay queue, whether at least one piece of target information that can initiate an update operation has been acquired;
if at least one piece of target information that can initiate an update operation is acquired, triggering a cache update action for the at least one piece of target information, and performing the update of the cache data and its expired data for the at least one piece of target information according to a configured cache update policy; and
storing the key and expiration time corresponding to the updated cache data into the delay queue, to await the next cache update trigger when the next update operation time arrives.
In some embodiments, whether the update operation time has arrived is determined according to whether a set update time interval has elapsed.
In some embodiments, whether the update operation time has arrived is determined according to whether the expiration time corresponding to at least one key in the delay queue satisfies an expiration condition.
In some embodiments, determining, based on the delay queue, whether at least one piece of target information that can initiate the update operation has been acquired comprises: determining whether the delay queue contains at least one key whose expiration time satisfies the expiration condition, and if so, taking that key or keys as the acquired target information that can initiate the update operation.
In some embodiments, the determination further comprises: determining whether the at least one key whose expiration time satisfies the expiration condition also satisfies a preset rate-limiting policy, and taking only the keys that satisfy both the expiration condition and the rate-limiting policy as the acquired target information that can initiate the update operation.
In some embodiments, it is determined whether a key whose expiration time satisfies the expiration condition belongs to the time slice currently being processed; if it does, the key is judged to satisfy the rate-limiting policy.
In some embodiments, the keys whose expiration times satisfy the expiration condition are sorted in ascending order of expiration time; in that order, each key's rank is compared against a preset counter limit, and a key satisfies the rate-limiting policy only if its rank does not exceed that limit.
In some embodiments, the keys whose expiration times satisfy the expiration condition are sorted in ascending order of expiration time and placed into a leaky bucket in that order; keys flow out of the bucket at a preset rate in the order they were placed. Keys that flow out of the bucket satisfy the rate-limiting policy; keys that overflow the bucket do not.
In some embodiments, tokens are added to a token bucket at a preset rate; the keys whose expiration times satisfy the expiration condition are sorted in ascending order of expiration time and attempt to take tokens from the bucket in that order. A key that obtains a token satisfies the rate-limiting policy; a key that cannot does not.
In some embodiments, when the update operation time arrives, it is further determined whether the current time falls outside the user-traffic peak period; only if it does are the operations of acquiring target information from the delay queue and triggering the cache update action for that target information performed.
In some embodiments, performing the update of the cache data and its expired data according to the configured cache update policy comprises: according to the access-popularity information of the cache data corresponding to a key, extending the cache data's expiration time if it is access-hotspot data, shortening or leaving the expiration time unchanged if it is not, and evicting the cache data once its expiration time indicates it has expired.
In some embodiments, there are one or more delay queues, and different delay queues have different update time intervals; the key and expiration time corresponding to each piece of cache data in the cache server are stored into the delay queue whose update time interval matches how quickly that cache data changes.
Some embodiments of the present disclosure provide a manager for updating cache data, comprising: a memory; and a processor coupled to the memory, the processor configured to execute instructions stored in the memory to perform the method for updating cache data.
Some embodiments of the present disclosure provide a manager for updating cache data, comprising:
an information storage unit configured to store the keys and expiration times corresponding to the cache data in the cache server into a delay queue;
an information acquisition unit configured to determine, when an update operation time arrives, whether at least one piece of target information that can initiate an update operation has been acquired based on the delay queue; and
an update triggering unit configured to, if at least one piece of target information that can initiate an update operation is acquired, trigger a cache update action for that target information, so that the update of the cache data and its expired data is performed according to a configured cache update policy;
wherein the information storage unit is further configured to store the key and expiration time corresponding to the updated cache data into the delay queue, to await the next cache update trigger when the next update operation time arrives.
Some embodiments of the present disclosure provide a cache server, comprising: a first storage unit for storing cache data and its expiration times; a second storage unit for storing the delay queue; and the manager described above.
Some embodiments of the present disclosure provide a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of the method for updating cache data.
Drawings
The drawings used in describing the embodiments or the related art are briefly introduced below. The present disclosure can be understood more clearly from the following detailed description taken with reference to the accompanying drawings.
It is to be understood that the drawings described below are merely examples of the disclosure; other drawings may be derived from them by one of ordinary skill in the art without inventive effort.
Fig. 1 illustrates a schematic diagram of a cache server of some embodiments of the present disclosure.
Fig. 2 illustrates a flow diagram of a method of updating cached data according to some embodiments of the present disclosure.
Fig. 3 is a flow chart illustrating a method for updating cached data according to further embodiments of the present disclosure.
FIG. 4 illustrates a schematic diagram of a manager for updating cached data, according to some embodiments of the present disclosure.
FIG. 5 illustrates a schematic diagram of a manager for updating cached data, according to further embodiments of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure.
Unless otherwise specified, terms such as "first" and "second" in the present disclosure are used only to distinguish different objects and do not denote size, order, timing, or the like.
Fig. 1 illustrates a schematic diagram of a cache server of some embodiments of the present disclosure.
As shown in fig. 1, the cache server 100 of this embodiment includes: a first storage unit 110 for storing the cache data and its expiration times; a second storage unit 120 for storing a delay queue, which holds the keys and expiration times corresponding to the cache data in the first storage unit 110; and a manager 130 for updating the cache data, configured to perform the method for updating cache data described below, so that the expiration times of the cache data in the first storage unit 110 are managed centrally through the delay queue in the second storage unit 120.
Fig. 2 illustrates a flow diagram of a method of updating cached data according to some embodiments of the present disclosure. The method may be performed, for example, by manager 130 updating the cached data.
As shown in fig. 2, the method 200 for updating cache data of this embodiment comprises steps 210 through 240.
In step 210, the corresponding key and expiration time of the cached data in the cache server are stored in the delay queue.
Because the keys and expiration times corresponding to all cache data are stored centrally in the delay queue, querying the delay queue yields the expiration times of the cache data corresponding to every key in one place.
There may be one or more delay queues. When there are several, each delay queue has a different update time interval, and the key and expiration time corresponding to each piece of cache data are stored into the delay queue whose interval matches how quickly that data changes: frequently updated cache data goes into a delay queue with a shorter update time interval, and slowly updated cache data into one with a longer interval.
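As an illustration, the routing of keys to delay queues by update speed can be sketched in Python. The `DelayQueue` class, the interval values, and the key names here are hypothetical, not taken from the patent; a min-heap ordered by expiration time stands in for the delay queue.

```python
import heapq
import time

class DelayQueue:
    """Hypothetical in-memory delay queue: a min-heap ordered by expiration
    time, associated with a fixed update time interval."""
    def __init__(self, update_interval):
        self.update_interval = update_interval  # seconds between update runs
        self.heap = []                          # (expiration_time, key) pairs

    def put(self, key, expiration_time):
        heapq.heappush(self.heap, (expiration_time, key))

# A short-interval queue for fast-changing data, a long-interval one for slow data.
fast_queue = DelayQueue(update_interval=60)
slow_queue = DelayQueue(update_interval=3600)

def store(key, ttl_seconds, changes_fast, now=None):
    """Route the key to the delay queue matching its update speed."""
    now = time.time() if now is None else now
    queue = fast_queue if changes_fast else slow_queue
    queue.put(key, now + ttl_seconds)

store("stock:sku123", ttl_seconds=30, changes_fast=True, now=0.0)
store("config:region", ttl_seconds=86400, changes_fast=False, now=0.0)
```

In a production system the delay queues would more likely live in a dedicated storage service rather than in process memory, but the routing decision is the same.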
In step 220, when the update operation time arrives, it is determined, based on the delay queue, whether at least one piece of target information that can initiate the update operation has been acquired.
The update time interval of a delay queue may be preset, in which case the update operation time is considered reached whenever the set interval has elapsed. Alternatively, the update operation time is considered reached when the expiration time corresponding to at least one key in the delay queue satisfies an expiration condition — for example, when a preset number of keys (one or more) have expiration times at or past a threshold indicating expiration or imminent expiration.
In some embodiments, the determination comprises: checking whether the delay queue contains at least one key whose expiration time satisfies the expiration condition, and if so, taking that key or keys as the acquired target information that can initiate the update operation. A delay queue has the properties of a queue plus delayed consumption: the point in time at which each message in the queue may be consumed can be specified.
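A minimal sketch of acquiring target information from such a delay queue follows. It assumes the queue is a Python min-heap of `(expiration_time, key)` pairs and that the expiration condition is simply `expiration_time <= now`; both are illustrative assumptions, not specifics from the patent.

```python
import heapq
import time

def acquire_targets(delay_queue, now=None):
    """Pop every key whose expiration time satisfies the expiration
    condition (here: expiration_time <= now). The popped keys are the
    target information that can initiate an update operation."""
    now = time.time() if now is None else now
    targets = []
    while delay_queue and delay_queue[0][0] <= now:
        _, key = heapq.heappop(delay_queue)
        targets.append(key)
    return targets

queue = []
heapq.heappush(queue, (100.0, "user:1"))
heapq.heappush(queue, (200.0, "user:2"))
heapq.heappush(queue, (300.0, "user:3"))

print(acquire_targets(queue, now=250.0))  # ['user:1', 'user:2']
```

Note that nothing here enumerates the cache itself: only the heads of the delay queue are inspected, which is the invalid-operation saving the disclosure describes.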
In some embodiments, when the update operation time arrives, it may further be checked whether the current time falls outside the user-traffic peak period; only then are the acquisition of target information from the delay queue and the subsequent cache update action performed, so that cache updates avoid traffic peaks as much as possible and do not compete with user traffic for resources.
In step 230, if at least one piece of target information that can initiate an update operation is acquired, a cache update action for that target information is triggered, so that the update of the cache data and its expired data is performed according to the configured cache update policy.
In some embodiments, this update operation comprises: according to the access-popularity information of the cache data corresponding to a key, extending the expiration time if the data is access-hotspot data, shortening or leaving the expiration time unchanged if it is not, and evicting the cache data (no longer caching it) once its expiration time indicates it has expired.
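The popularity-based policy above can be sketched as follows. The access-count threshold and the doubling/halving of the TTL are illustrative choices; the patent only specifies extend-for-hot, shorten-or-keep-for-cold, and evict-on-expiry.

```python
HOT_THRESHOLD = 100  # hypothetical access-count threshold for "hotspot" data

def update_expiration(key, access_count, current_ttl, cache):
    """Sketch of the cache update policy: extend the TTL of hot keys,
    shrink it for cold keys, and evict keys whose TTL has run out.
    Returns the new TTL, or None if the entry was evicted."""
    if access_count >= HOT_THRESHOLD:
        new_ttl = current_ttl * 2          # hot data: extend expiration
    else:
        new_ttl = current_ttl // 2         # cold data: shorten expiration
    if new_ttl <= 0:
        cache.pop(key, None)               # expired: evict from cache
        return None
    return new_ttl

cache = {"a": "v1", "b": "v2"}
print(update_expiration("a", access_count=500, current_ttl=60, cache=cache))  # 120
print(update_expiration("b", access_count=3, current_ttl=1, cache=cache))     # None (evicted)
print(cache)  # {'a': 'v1'}
```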
In step 240, the key and expiration time corresponding to the updated cache data are stored back into the delay queue, closing the loop, to await the next cache update trigger when the next update operation time arrives.
In this embodiment, the keys and expiration times of all cache data are stored centrally in the delay queue, so their expiration times are managed in one place; target information that can initiate an update operation is obtained from the delay queue without enumerating all cache entries, reducing invalid operations; the cache update action is triggered only for that target information, reducing the risk of concurrent traffic spikes; and after each update, the updated keys and expiration times are stored back into the delay queue for the next round of cache updating.
Fig. 3 is a flow chart illustrating a method for updating cached data according to further embodiments of the present disclosure. The method may be performed, for example, by manager 130 updating the cached data.
As shown in fig. 3, the method 300 for updating cache data of this embodiment comprises steps 310 through 340.
In step 310, the key and expiration time corresponding to each piece of cache data in the cache server are stored into the delay queue.
Because the keys and expiration times corresponding to all cache data are stored centrally in the delay queue, querying the delay queue yields the expiration times of the cache data corresponding to every key in one place.
There may be one or more delay queues. When there are several, each delay queue has a different update time interval, and the key and expiration time corresponding to each piece of cache data are stored into the delay queue whose interval matches how quickly that data changes: frequently updated cache data goes into a delay queue with a shorter interval, slowly updated cache data into one with a longer interval.
In step 320, when the update operation time arrives, it is determined whether the delay queue contains at least one key whose expiration time satisfies the expiration condition, and whether those keys also satisfy a preset rate-limiting policy; the keys that satisfy both are taken as the acquired target information that can initiate the update operation.
The update time interval of a delay queue may be preset, in which case the update operation time is considered reached whenever the set interval has elapsed. Alternatively, the update operation time is considered reached when the expiration time corresponding to at least one key in the delay queue satisfies an expiration condition — for example, when a preset number of keys (one or more) have expiration times at or past a threshold indicating expiration or imminent expiration.
Some example rate-limiting policies are listed below.
Rate-limiting policy 1: determine whether a key whose expiration time satisfies the expiration condition belongs to the time slice currently being processed; if it does, the key satisfies the policy. The cache refresh is thereby spread across multiple time slices.
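The time-slice check can be sketched as below; the slice length is a hypothetical parameter, and aligning slices to wall-clock multiples of that length is an assumption for illustration.

```python
SLICE_SECONDS = 10  # hypothetical length of each processing time slice

def in_current_slice(expiration_time, now):
    """A key passes the rate limit only if its expiration time falls in
    the time slice currently being processed, so one refresh run never
    handles more than one slice's worth of keys."""
    slice_start = (now // SLICE_SECONDS) * SLICE_SECONDS
    return slice_start <= expiration_time < slice_start + SLICE_SECONDS

print(in_current_slice(expiration_time=105, now=107))  # True  (both in [100, 110))
print(in_current_slice(expiration_time=95, now=107))   # False (earlier slice)
```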
Rate-limiting policy 2: sort the keys whose expiration times satisfy the expiration condition in ascending order of expiration time; in that order, compare each key's rank against the counter's preset limit. A key whose rank does not exceed the limit satisfies the policy, and the counter is incremented by 1; a key whose rank exceeds the limit does not satisfy the policy, and the counter is reset to its initial value. The number of keys updated per run therefore never exceeds the counter's preset limit.
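A sketch of the counter policy, with a hypothetical per-run cap of 3; deferred keys would simply wait in the delay queue for the next update run.

```python
MAX_PER_RUN = 3  # hypothetical preset counter limit

def limit_by_counter(keys_with_expirations):
    """Sort keys by ascending expiration time and admit at most
    MAX_PER_RUN of them this run; the rest are deferred."""
    ordered = sorted(keys_with_expirations, key=lambda kv: kv[1])
    admitted, deferred = [], []
    for rank, (key, _) in enumerate(ordered, start=1):
        (admitted if rank <= MAX_PER_RUN else deferred).append(key)
    return admitted, deferred

keys = [("d", 40), ("a", 10), ("c", 30), ("b", 20)]
admitted, deferred = limit_by_counter(keys)
print(admitted)  # ['a', 'b', 'c']
print(deferred)  # ['d']
```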
Rate-limiting policy 3: sort the keys whose expiration times satisfy the expiration condition in ascending order of expiration time and place them into a leaky bucket in that order; keys flow out of the bucket at a preset rate, in the order they were placed. Keys that flow out of the bucket satisfy the policy; keys that overflow the bucket do not. This ensures the cache update processing capacity is not exceeded.
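A minimal leaky-bucket sketch: the capacity and leak rate are hypothetical parameters, and the "tick" abstraction replaces a real timer for clarity.

```python
from collections import deque

class LeakyBucket:
    """Keys enter the bucket in expiration order and leak out at a fixed
    rate; keys that would overflow the capacity fail the rate limit."""
    def __init__(self, capacity, leak_per_tick):
        self.capacity = capacity
        self.leak_per_tick = leak_per_tick
        self._bucket = deque()

    def offer(self, key):
        if len(self._bucket) >= self.capacity:
            return False                        # overflow: fails the rate limit
        self._bucket.append(key)
        return True

    def leak(self):
        """Release up to leak_per_tick keys, FIFO; these pass the limit."""
        out = []
        for _ in range(min(self.leak_per_tick, len(self._bucket))):
            out.append(self._bucket.popleft())
        return out

bucket = LeakyBucket(capacity=2, leak_per_tick=1)
print([bucket.offer(k) for k in ["a", "b", "c"]])  # [True, True, False]
print(bucket.leak())  # ['a']
```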
Rate-limiting policy 4: add tokens to a token bucket at a preset rate; sort the keys whose expiration times satisfy the expiration condition in ascending order of expiration time and have them take tokens from the bucket in that order. A key that obtains a token satisfies the policy; a key that cannot does not. This likewise ensures the cache update processing capacity is not exceeded.
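A minimal token-bucket sketch, again with hypothetical capacity and refill rate, and a "tick" abstraction standing in for the preset refill timer. Unlike the leaky bucket, unused capacity accumulates (up to the cap), allowing short bursts.

```python
class TokenBucket:
    """Tokens accumulate at a fixed rate up to a capacity; a key passes
    the rate limit only if it can take a token."""
    def __init__(self, capacity, refill_per_tick):
        self.capacity = capacity
        self.refill_per_tick = refill_per_tick
        self.tokens = capacity                  # start full

    def tick(self):
        self.tokens = min(self.capacity, self.tokens + self.refill_per_tick)

    def try_acquire(self):
        if self.tokens > 0:
            self.tokens -= 1
            return True                         # key passes the rate limit
        return False                            # no token: key fails

bucket = TokenBucket(capacity=2, refill_per_tick=1)
print([bucket.try_acquire() for _ in range(3)])  # [True, True, False]
bucket.tick()
print(bucket.try_acquire())  # True
```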
In some embodiments, when the update operation time arrives, it may further be checked whether the current time falls outside the user-traffic peak period; only then are the acquisition of target information from the delay queue and the subsequent cache update action performed, so that cache updates avoid traffic peaks as much as possible and do not compete with user traffic for resources.
In step 330, if at least one piece of target information that can initiate an update operation is acquired, a cache update action for that target information is triggered, so that the update of the cache data and its expired data is performed according to the configured cache update policy.
In some embodiments, this update operation comprises: according to the access-popularity information of the cache data corresponding to a key, extending the expiration time if the data is access-hotspot data, shortening or leaving the expiration time unchanged if it is not, and evicting the cache data (no longer caching it) once its expiration time indicates it has expired.
In step 340, the key and expiration time corresponding to the updated cache data are stored back into the delay queue, closing the loop, to await the next cache update trigger when the next update operation time arrives.
In this embodiment, the keys and expiration times of all cache data are stored centrally in the delay queue, so their expiration times are managed in one place; target information that can initiate an update operation is obtained from the delay queue without enumerating all cache entries, reducing invalid operations; update targets are further filtered by the rate-limiting policy before the cache update action is triggered, further reducing the risk of concurrent traffic spikes; and after each update, the updated keys and expiration times are stored back into the delay queue for the next round of cache updating.
FIG. 4 illustrates a schematic diagram of a manager updating cached data, according to some embodiments of the present disclosure.
As shown in fig. 4, the manager 130 for updating cache data of this embodiment includes a memory 410 and a processor 420 coupled to the memory 410; the processor 420 is configured to execute instructions stored in the memory 410 to perform the method for updating cache data of any of the embodiments described above.
For example: store the key and expiration time corresponding to each piece of cache data in the cache server into a delay queue; when the update operation time arrives, determine, based on the delay queue, whether at least one piece of target information that can initiate an update operation has been acquired; if so, trigger a cache update action for that target information and perform the update of the cache data and its expired data according to the configured cache update policy; and store the key and expiration time corresponding to the updated cache data back into the delay queue, to await the next cache update trigger when the next update operation time arrives.
Memory 410 may include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, an application program, a Boot Loader (Boot Loader), and other programs.
FIG. 5 is a schematic diagram of a manager updating cached data according to further embodiments of the present disclosure.
As shown in fig. 5, the manager 130 for updating cache data of this embodiment includes units 510 through 530.
An information storage unit 510 configured to store keys and expiration times corresponding to the cache data in the cache server to the delay queue.
An information acquisition unit 520 configured to determine, when the update operation time arrives, whether to acquire at least one piece of target information that can initiate the update operation based on the delay queue.
An update triggering unit 530 configured to, if at least one piece of target information that may initiate an update operation is acquired, trigger a cache update behavior for the at least one piece of target information, so that an update operation of cache data and its stale data is performed on the at least one piece of target information according to a set cache update policy.
The information storage unit 510 is further configured to store the key and the expiration time corresponding to the updated cache data in a delay queue, so as to wait for a next cache update action trigger when a next update operation time arrives.
There may be one or more delay queues, and different delay queues have different update time intervals. In some embodiments, the information storage unit 510 is configured to store the key and expiration time corresponding to the cache data in the cache server into the delay queue with the matching update time interval, according to the update speed of the cache data.
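A minimal sketch of this routing, with three hypothetical update-speed tiers (the interval values are assumptions, not taken from the patent):

```python
import heapq
import time

# Hypothetical tiers: fast-changing data cycles every minute,
# slow-changing data every hour.
QUEUES = {60: [], 600: [], 3600: []}  # update time interval (s) -> min-heap

def store(key, ttl_seconds, update_interval):
    """Store the key and expiration time in the delay queue whose
    update time interval matches how fast the cached value changes."""
    heapq.heappush(QUEUES[update_interval], (time.time() + ttl_seconds, key))
```

Keeping frequently changing data in a faster-cycling queue avoids scanning slow-changing entries on every round.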
In some embodiments, the information obtaining unit 520 is further configured to determine whether the update operation time arrives according to whether a set update time interval arrives; or, determining whether the update operation time is reached according to whether the expiration time corresponding to at least one key in the delay queue meets an expiration condition.
In some embodiments, the information obtaining unit 520 is configured to determine whether at least one key whose expiration time meets the expiration condition exists in the delay queue, and if so, take the at least one key whose expiration time meets the expiration condition as the obtained at least one piece of target information that may initiate the update operation.
In some embodiments, the information obtaining unit 520 is configured to determine whether at least one key whose expiration time meets the expiration condition exists in the delay queue, determine whether at least one key whose expiration time meets the expiration condition meets a preset current limiting policy, and use the at least one key whose expiration time meets the expiration condition and meets the current limiting policy as the obtained at least one piece of target information that can initiate the update operation.
In some embodiments, the information obtaining unit 520 is configured to determine whether a key with an expiration time satisfying an expiration condition belongs to a currently processed time slice, and if the key belongs to the currently processed time slice, determine that the key with the expiration time satisfying the expiration condition satisfies a current limiting policy.
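A minimal sketch of this time-slice check; the slice length is an assumed parameter, not specified by the patent:

```python
import math

def in_current_time_slice(expiration_time, now, slice_seconds=1.0):
    """A key passes the current-limiting check only when its expiration
    time falls inside the time slice currently being processed."""
    slice_start = math.floor(now / slice_seconds) * slice_seconds
    return slice_start <= expiration_time < slice_start + slice_seconds
```

Keys that expired in an earlier slice are deferred, which spreads refresh work across slices instead of handling a backlog all at once.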
In some embodiments, the information obtaining unit 520 is configured to sort the at least one key whose expiration time meets the expiration condition in ascending order of expiration time, sequentially determine, starting from the smallest expiration time, whether the rank of each such key is not greater than a preset count value of a counter, and, if the rank is not greater than the preset count value, determine that the key meets the current limiting policy.
In some embodiments, the information obtaining unit 520 is configured to sort the at least one key whose expiration time meets the expiration condition in ascending order of expiration time and place the keys into a leaky bucket in that order. The keys placed into the leaky bucket flow out at a preset speed, in the order they were placed in; keys that flow out of the leaky bucket satisfy the current limiting policy, and keys that overflow the leaky bucket do not.
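A leaky bucket of this kind might look as follows; the capacity and leak rate are assumed parameters:

```python
from collections import deque

class LeakyBucket:
    """Keys enter in expiration order; keys that fit flow out at a fixed
    rate and satisfy the current limiting policy, while keys that
    overflow the bucket do not."""

    def __init__(self, capacity, leak_rate):
        self.capacity = capacity    # max keys waiting in the bucket
        self.leak_rate = leak_rate  # keys released per drain() call
        self._bucket = deque()

    def offer(self, key):
        if len(self._bucket) < self.capacity:
            self._bucket.append(key)
            return True   # accepted: will eventually flow out
        return False      # overflow: fails the current limiting policy

    def drain(self):
        # Keys flow out at the preset rate, in the order they were put in.
        return [self._bucket.popleft()
                for _ in range(min(self.leak_rate, len(self._bucket)))]
```

The bucket smooths a burst of simultaneously expiring keys into a steady refresh rate.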
In some embodiments, the information obtaining unit 520 is configured to place tokens into the token bucket at a preset speed, sort at least one key whose expiration time meets an expiration condition in an ascending order of expiration time, and obtain tokens from the token bucket in sequence, where keys capable of obtaining tokens meet the current limiting policy, and keys incapable of obtaining tokens do not meet the current limiting policy.
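A token bucket variant; the refill rate and capacity are assumptions, and keys are assumed to call `try_acquire` in ascending expiration order:

```python
import time

class TokenBucket:
    """Tokens are added at `rate` per second up to `capacity`; a key
    satisfies the current limiting policy only if it obtains a token."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self._last = time.monotonic()

    def try_acquire(self):
        now = time.monotonic()
        # Refill tokens accumulated since the last call.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self._last) * self.rate)
        self._last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Unlike the leaky bucket, a token bucket permits short bursts up to its capacity while bounding the average refresh rate.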
In some embodiments, the information obtaining unit 520 is further configured to determine whether the update operation time is in an off-user traffic peak time period when the update operation time arrives, and if so, perform the operation of determining whether to obtain at least one piece of target information that can initiate the update operation based on the delay queue and the operation of triggering the cache update behavior for the at least one piece of target information.
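The off-peak gate might be a simple wall-clock window check; the 02:00-06:00 window below is an assumption, as the patent does not fix the period:

```python
from datetime import datetime, time as dtime

OFF_PEAK_START = dtime(2, 0)   # assumed low-traffic window
OFF_PEAK_END = dtime(6, 0)

def is_off_peak(now: datetime) -> bool:
    """Cache refreshes are only triggered inside the low-traffic window."""
    return OFF_PEAK_START <= now.time() < OFF_PEAK_END
```

Deferring refreshes to the off-peak window keeps update load away from user-facing traffic.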
Here, performing the update operation on the cache data and its stale data for the at least one piece of target information according to the set cache update policy includes: according to the access heat information of the cache data corresponding to the key, extending the expiration time of the cache data if it is access-hotspot data; shortening the expiration time, or leaving it unchanged, if it is not access-hotspot data; and eliminating the cache data when its expiration time indicates that it has expired.
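A sketch of such a heat-aware expiration update; the extend/shorten amounts and the hotness test are illustrative assumptions:

```python
EVICT = object()  # sentinel: remove the entry from the cache

def next_expiration(is_hot, expiration, now, extend=300, shorten=60):
    """Hot data gets a longer lease; cold data's lease is shortened,
    and cold data whose shortened lease has already passed is evicted."""
    if is_hot:
        return expiration + extend      # access-hotspot data lives longer
    if expiration - shorten <= now:
        return EVICT                    # expired: eliminate the cache data
    return expiration - shorten         # cold data expires sooner
```

This way hot keys are kept warm while rarely accessed keys age out of the cache quickly.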
Some embodiments of the present disclosure provide a non-transitory computer readable storage medium having stored thereon a computer program that, when executed by a processor, performs the steps of the method of updating cached data of the embodiments.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more non-transitory computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only exemplary of the present disclosure and is not intended to limit the present disclosure, so that any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (15)

1. A method for updating cached data, comprising:
storing keys and expiration time corresponding to cache data in a cache server to a delay queue;
when the updating operation time is up, judging whether at least one piece of target information which can initiate the updating operation is acquired based on the delay queue;
if at least one piece of target information which can initiate updating operation is acquired, triggering a cache updating behavior aiming at the at least one piece of target information, and executing the updating operation of cache data and outdated data of the at least one piece of target information according to a set cache updating strategy;
and storing the corresponding key and the expiration time of the updated cache data into a delay queue so as to wait for the next cache updating action trigger when the next updating operation time arrives.
2. The method of claim 1,
determining whether the update operation time is reached according to whether the set update time interval is reached; or,
and determining whether the updating operation time is reached according to whether the corresponding expiration time of at least one key in the delay queue meets the expiration condition.
3. The method of claim 1, wherein the determining whether at least one piece of target information that can initiate an update operation is acquired based on the delay queue comprises:
and judging whether at least one key with the expiration time meeting the expiration condition exists in the delay queue, and if so, taking the at least one key with the expiration time meeting the expiration condition as the acquired at least one piece of target information capable of initiating the updating operation.
4. The method of claim 3, wherein the determining, based on the delay queue, whether at least one piece of target information that can initiate the update operation is acquired further comprises:
and judging whether at least one key with the expiration time meeting the expiration condition meets a preset current limiting strategy or not, and taking the at least one key with the expiration time meeting the expiration condition and meeting the current limiting strategy as the acquired at least one piece of target information capable of initiating the updating operation.
5. The method of claim 4,
and judging whether the key with the expiration time meeting the expiration condition belongs to the current processing time slice, and if the key with the expiration time meeting the expiration condition belongs to the current processing time slice, judging that the key with the expiration time meeting the expiration condition meets the current limiting strategy.
6. The method of claim 4,
sorting at least one key whose expiration time meets the expiration condition in ascending order of expiration time, sequentially determining, starting from the smallest expiration time, whether the rank of each such key is not greater than a preset count value of a counter, and if the rank is not greater than the preset count value, determining that the key meets the current limiting policy.
7. The method of claim 4,
sorting at least one key with the expiration time meeting the expiration condition according to the ascending order of the expiration time, sequentially putting the keys into a leaky bucket according to the sequence of the sorting result from small to large, and enabling the keys put into the leaky bucket to flow out at a preset speed according to the sequence of the key putting, wherein the keys flowing out of the leaky bucket meet a current limiting strategy, and the keys overflowing out of the leaky bucket do not meet the current limiting strategy.
8. The method of claim 4,
and placing tokens into the token bucket at a preset speed, sequencing at least one key with the expiration time meeting the expiration condition according to the ascending order of the expiration time, and acquiring the tokens from the token bucket in sequence, wherein the key capable of acquiring the token meets the current limiting strategy, and the key incapable of acquiring the token does not meet the current limiting strategy.
9. The method of claim 1,
when the update operation time arrives, determining whether the update operation time is in an off-peak period of user traffic, and if so, executing the operation of determining whether at least one piece of target information that can initiate the update operation is acquired based on the delay queue and the operation of triggering the cache update behavior for the at least one piece of target information.
10. The method according to claim 1, wherein the performing, according to the set cache update policy, an update operation of the cache data and the stale data of the at least one piece of target information comprises:
according to the access heat information of the cache data corresponding to the key, if the cache data is the access hotspot data, the corresponding expiration time of the cache data is prolonged, if the cache data is not the access hotspot data, the corresponding expiration time of the cache data is shortened or not updated, and when the expiration time indicates that the corresponding cache data is expired, the cache data is eliminated.
11. The method of claim 2,
the delay queues include one or more, different delay queues have different update time intervals,
and storing the corresponding key and the expiration time of the cache data in the cache server into a delay queue of a corresponding updating time interval according to the updating speed of the cache data.
12. A manager for updating cached data, comprising:
a memory; and
a processor coupled to the memory, the processor configured to execute instructions stored in the memory to perform the method of updating cached data as recited in any of claims 1-11.
13. A manager for updating cached data, comprising:
the information storage unit is configured to store keys and expiration times corresponding to the cache data in the cache server to the delay queue;
an information acquisition unit configured to determine, when an update operation time arrives, whether to acquire at least one piece of target information that can initiate an update operation based on the delay queue; and
the updating triggering unit is configured to trigger a cache updating behavior aiming at least one piece of target information if the at least one piece of target information which can initiate the updating operation is acquired, so that the updating operation of the cache data and the overdue data of the at least one piece of target information is executed according to a set cache updating strategy;
the information storage unit is further configured to store the key and the expiration time corresponding to the updated cache data in a delay queue so as to wait for the next cache update behavior trigger when the next update operation time arrives.
14. A cache server, comprising:
the first storage unit is used for storing the cache data and the expiration time of the cache data;
a second storage unit for storing the delay queue; and
the manager of claim 12 or 13.
15. A non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of updating cached data as claimed in any one of claims 1-11.
CN202110848905.4A 2021-07-27 2021-07-27 Cache data updating method, manager and cache server Pending CN113486037A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110848905.4A CN113486037A (en) 2021-07-27 2021-07-27 Cache data updating method, manager and cache server


Publications (1)

Publication Number Publication Date
CN113486037A true CN113486037A (en) 2021-10-08

Family

ID=77943936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110848905.4A Pending CN113486037A (en) 2021-07-27 2021-07-27 Cache data updating method, manager and cache server

Country Status (1)

Country Link
CN (1) CN113486037A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114048197A (en) * 2022-01-13 2022-02-15 浙江大华技术股份有限公司 Tree structure data processing method, electronic equipment and computer readable storage device
CN117112267A (en) * 2023-10-20 2023-11-24 成都华栖云科技有限公司 Cache maintenance method of application interface

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012043338A (en) * 2010-08-23 2012-03-01 Nippon Telegr & Teleph Corp <Ntt> Cache management apparatus, cache management program and recording medium
CN106021468A (en) * 2016-05-17 2016-10-12 上海携程商务有限公司 Updating method and system for distributed caches and local caches
US20170277638A1 (en) * 2016-03-25 2017-09-28 Home Box Office, Inc. Cache map with sequential tracking for invalidation
CN110471939A (en) * 2019-07-11 2019-11-19 平安普惠企业管理有限公司 Data access method, device, computer equipment and storage medium
CN110764796A (en) * 2018-07-27 2020-02-07 北京京东尚科信息技术有限公司 Method and device for updating cache
CN110837513A (en) * 2019-11-07 2020-02-25 腾讯科技(深圳)有限公司 Cache updating method, device, server and storage medium
CN111177165A (en) * 2019-12-23 2020-05-19 拉扎斯网络科技(上海)有限公司 Method, device and equipment for detecting data consistency
CN111221828A (en) * 2018-11-26 2020-06-02 福建省华渔教育科技有限公司 Method and terminal for improving consistency of database data and cache data
CN111813792A (en) * 2020-06-22 2020-10-23 上海悦易网络信息技术有限公司 Method and equipment for updating cache data in distributed cache system
CN112148504A (en) * 2020-09-15 2020-12-29 海尔优家智能科技(北京)有限公司 Target message processing method and device, storage medium and electronic device
CN112579652A (en) * 2020-12-28 2021-03-30 咪咕文化科技有限公司 Method and device for deleting cache data, electronic equipment and storage medium



Similar Documents

Publication Publication Date Title
EP3229142B1 (en) Read cache management method and device based on solid state drive
CA2785398C (en) Managing queries
US20110066830A1 (en) Cache prefill on thread migration
CN113486037A (en) Cache data updating method, manager and cache server
US20120179882A1 (en) Cooperative memory management
US20100223305A1 (en) Infrastructure for spilling pages to a persistent store
JP2018533122A (en) Efficient scheduling of multiversion tasks
CN110780823B (en) Small object memory management method, small object memory management device, electronic equipment and computer readable medium
US20170031812A1 (en) Scheme for determining data object usage in a memory region
CN104536813A (en) Accelerating method and device for computing equipment
CN105095495B (en) A kind of distributed file system buffer memory management method and system
US6615316B1 (en) Using hardware counters to estimate cache warmth for process/thread schedulers
CN110413545B (en) Storage management method, electronic device, and computer program product
US11561929B2 (en) Method, device and computer program product for shrinking storage space
CN108595251B (en) Dynamic graph updating method, device, storage engine interface and program medium
CN110688360A (en) Distributed file system storage management method, device, equipment and storage medium
US10606795B2 (en) Methods for managing a buffer cache and devices thereof
CN113672166A (en) Data processing method and device, electronic equipment and storage medium
CN115484167B (en) Network slice shutdown method in communication network, computer device and storage medium
WO2019206260A1 (en) Method and apparatus for reading file cache
KR101771183B1 (en) Method for managing in-memory cache
CN113742131B (en) Method, electronic device and computer program product for storage management
CN113885801A (en) Memory data processing method and device
CN111090627B (en) Log storage method and device based on pooling, computer equipment and storage medium
Zhang et al. Predicting for I/O stack optimizations on cyber–physical systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination