CN113742381B - Cache acquisition method, device and computer readable medium - Google Patents

Cache acquisition method, device and computer readable medium

Info

Publication number
CN113742381B
CN113742381B (application CN202111004002.4A)
Authority
CN
China
Prior art keywords
cache
actual
value
refreshing
interval
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111004002.4A
Other languages
Chinese (zh)
Other versions
CN113742381A (en
Inventor
李巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oudian Cloud Information Technology Jiangsu Co ltd
Original Assignee
Oudian Cloud Information Technology Jiangsu Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oudian Cloud Information Technology Jiangsu Co ltd filed Critical Oudian Cloud Information Technology Jiangsu Co ltd
Priority to CN202111004002.4A priority Critical patent/CN113742381B/en
Publication of CN113742381A publication Critical patent/CN113742381A/en
Application granted granted Critical
Publication of CN113742381B publication Critical patent/CN113742381B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2453Query optimisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The present disclosure provides a cache acquisition method, device, and computer readable medium. The method comprises the steps of: receiving incoming cache-related parameters, wherein the cache-related parameters comprise a key of a cache value and a refresh interval; acquiring an actual cache object based on the key; and, in response to the interval between the last refresh time recorded in the actual cache object and the current moment being greater than or equal to the refresh interval, triggering asynchronous cache refresh logic and returning the business cache value in the actual cache object. The method, device, and medium prevent cache breakdown: the cache is refreshed dynamically and can be acquired through a unified tool class, a set of breakdown-preventing cache-related parameters serves as the single entry for acquiring the cache, and all breakdown-prevention logic is encapsulated behind that entry, providing a more stable, real-time, and simpler breakdown-prevention strategy and improving software development efficiency.

Description

Cache acquisition method, device and computer readable medium
Technical Field
The present disclosure relates to the field of computer software technology, and in particular to a cache acquisition method, device, and computer readable medium for preventing cache breakdown.
Background
As the user base of a piece of software grows and traffic approaches the throughput bottleneck of the database, software developers often add a cache layer in front of the database; serving queries from cached data reduces the pressure on the database and improves the overall query performance and throughput of the system.
Referring to fig. 2, a system model with a cache layer mainly comprises four components: client, server, cache, and database. The client is the terminal that actually sends data requests, and the server is the program that provides the data request service. After receiving a request from the client, the server first queries the corresponding cache layer; if cached data matching the request parameters is found, it is returned directly, otherwise the database is queried and the data returned.
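The query flow of fig. 2 is the classic cache-aside pattern. A minimal Python sketch (dictionaries stand in for the real cache and database; the function name is hypothetical):

```python
def handle_request(key, cache, database):
    """Serve a data request: query the cache layer first (fig. 2 flow)."""
    value = cache.get(key)
    if value is not None:
        return value              # cache hit: return directly
    value = database[key]         # cache miss: query the database
    cache[key] = value            # populate the cache for later requests
    return value
```

Subsequent requests for the same key are then served from the cache without touching the database.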
After a cache layer is added to an application, the cache may become invalid, for example because entries expire or the system cold-starts. If a large number of concurrent requests arrive at exactly that moment, they pass straight through the cache to the back-end data store, so the pressure on the service or database spikes, the throughput and response speed of the system drop, and in severe cases the service goes down. This scenario, in which a cache miss coincides with a large number of requests, is called cache breakdown.
Currently, there are generally two main solutions to cache breakdown:
1. Adding a mutex lock. Among multiple concurrent requests, only the first thread can take the lock and execute the database query; the other threads cannot take the lock and block until the first thread has written the data into the cache, after which they read from the cache directly.
2. Making hot-spot data never expire. The cache entry is set to never expire, and a timed task asynchronously loads the data and updates the cache. This approach suits more extreme scenarios, such as very high-traffic scenarios, where every cache read must be guaranteed to find data.
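Approach 1 can be sketched as follows (an illustrative Python version; an in-process `threading.Lock` stands in for whatever lock the real system uses, and `load_from_db` is a hypothetical loader):

```python
import threading

_cache = {}
_lock = threading.Lock()

def get_with_mutex(key, load_from_db):
    """Prior-art approach 1: only one thread rebuilds a missing entry;
    concurrent requests for the same data block until it is cached."""
    value = _cache.get(key)
    if value is not None:
        return value
    with _lock:
        # Double-check after taking the lock: another thread may have
        # already written the value while this one was blocked here.
        value = _cache.get(key)
        if value is None:
            value = load_from_db(key)
            _cache[key] = value
    return value
```

The blocking inside `with _lock:` is precisely the drawback discussed below: every waiting thread is parked inside the program until the first one finishes.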
However, both solutions have significant drawbacks and cannot effectively meet the full range of needs.
The mutex approach causes many threads to block inside the program; it relieves the pressure on the database but not on the service itself. If the number of threads created reaches the operating-system limit, or the created threads exhaust memory, the program itself may crash.
The never-expire-plus-timed-refresh approach requires the developer to decide how much data staleness the business can tolerate, since refreshes happen only on a timer. It also complicates handling of abnormal situations: if one refresh fails, for example because of a network problem between the service and the cache layer, the cache serves dirty data throughout the following refresh windows.
Moreover, introducing timed tasks into the system requires a distributed timed-task solution to guarantee their reliability, and executing the update logic in a separate thread or process outside the main logic increases system complexity. In addition, the data has not necessarily changed when a timed update runs, and refreshing during periods with no visitors is simply wasted work.
In summary, existing solutions struggle to protect against the cache breakdown scenarios encountered when a cache layer is used, and they are complex to implement and inefficient.
Disclosure of Invention
A primary object of the present disclosure is to provide a cache acquisition method, device, and computer readable medium for preventing cache breakdown that remedy the above drawbacks of the prior art.
These technical problems are solved by the following technical scheme:
as an aspect of the present disclosure, there is provided a cache acquisition method, including the steps of:
receiving incoming cache-related parameters, wherein the cache-related parameters comprise a key of a cache value and a refresh interval;
acquiring an actual cache object based on the key; and,
in response to the interval between the last refresh time in the actual cache object and the current moment being greater than or equal to the refresh interval, triggering asynchronous cache refresh logic to return the business cache value in the actual cache object.
Optionally, the cache-related parameters further comprise a dataLoader function (data loader) for loading the cache value;
the step of acquiring the actual cache object based on the key comprises the following steps:
executing the dataLoader function to obtain a business cache value in response to failing to obtain an actual cache object corresponding to the key;
creating an actual cache object, setting the business cache value in it, and taking the current moment as the last refresh time;
storing the actual cache object to a cache service;
and returning the business cache value obtained by executing the dataLoader function.
Optionally, the cache-related parameters further comprise a maximum lock time for which a refresh thread holds a distributed lock;
the step of triggering asynchronous cache refresh logic to return the business cache value in the actual cache object, in response to the interval between the last refresh time in the actual cache object and the current moment being greater than or equal to the refresh interval, specifically comprises the following steps:
creating a new thread to execute the asynchronous cache refresh logic in response to the interval between the last refresh time in the actual cache object and the current moment being greater than or equal to the refresh interval;
acquiring, in the new thread, a distributed lock according to the key, with the maximum lock time as the lock's maximum hold time;
in response to obtaining the lock, executing the dataLoader function to obtain the business cache value to be cached;
creating an actual cache object, setting the business cache value in it, and taking the current moment as the last refresh time;
saving the actual cache object to the cache service and releasing the distributed lock held by the current thread.
Optionally, the step of creating a new thread to execute the asynchronous cache refresh logic comprises:
returning the business cache value in the existing actual cache object and creating a new thread to execute the asynchronous cache refresh logic.
Optionally, the step of returning the business cache value in the existing actual cache object comprises:
returning the business cache value in the existing actual cache object without blocking.
Optionally, the step of acquiring the actual cache object based on the key comprises:
taking out and returning the business cache value in the actual cache object in response to obtaining the actual cache object corresponding to the key.
Optionally, the method further comprises:
returning the business cache value in the actual cache object in response to the interval between the last refresh time in the actual cache object and the current moment being smaller than the refresh interval.
Optionally, the cache-related parameter further includes an actual expiration time of the key.
As another aspect of the disclosure, there is provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the cache acquisition method as described above when executing the computer program.
As another aspect of the disclosure, a computer readable medium is provided having stored thereon computer instructions which, when executed by a processor, implement a cache acquisition method as described above.
Other aspects of the present disclosure will be appreciated by those of skill in the art in light of the present disclosure.
The positive progress effect of the present disclosure is:
the cache acquisition method, device, and computer readable medium for preventing cache breakdown allow the cache to be refreshed dynamically and acquired through a unified tool class; a set of breakdown-preventing cache-related parameters serves as the single entry for acquiring the cache, and all breakdown-prevention logic is encapsulated behind that entry, providing a more stable, real-time, and simpler breakdown-prevention strategy and improving software development efficiency.
Drawings
The features and advantages of the present disclosure will be better understood upon reading the detailed description of embodiments of the disclosure in conjunction with the following drawings. In the drawings, the components are not necessarily to scale and components having similar related features or characteristics may have the same or similar reference numerals.
FIG. 1 is a flow chart of a cache acquisition method according to an embodiment of the disclosure.
Fig. 2 is a schematic view showing a scenario when a service is requested.
Fig. 3 is a schematic view of a scenario when acquiring a cache according to a cache acquisition method of an embodiment of the present disclosure.
Fig. 4 is a schematic structural diagram of an electronic device implementing a cache acquisition method according to another embodiment of the present disclosure.
Detailed Description
The present disclosure is further illustrated by way of examples below, but is not thereby limited to the scope of the examples described.
It should be noted that references in the specification to "one embodiment," "an alternative embodiment," "another embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the relevant art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
In the description of the present disclosure, it should be understood that the terms "center," "lateral," "upper," "lower," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientations or positional relationships illustrated in the drawings, merely to facilitate description of the present disclosure and simplify description, and do not indicate or imply that the devices or elements being referred to must have a particular orientation, be configured and operated in a particular orientation, and thus should not be construed as limiting the present disclosure. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present disclosure, unless otherwise indicated, the meaning of "a plurality" is two or more. In addition, the term "include" and any variations thereof are intended to cover a non-exclusive inclusion.
In the description of the present disclosure, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the terms in the present disclosure may be understood in detail by those of ordinary skill in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In order to overcome the above drawbacks, this embodiment provides a cache acquisition method comprising the following steps: receiving incoming cache-related parameters, wherein the cache-related parameters comprise a key of a cache value and a refresh interval; acquiring an actual cache object based on the key; and, in response to the interval between the last refresh time in the actual cache object and the current moment being greater than or equal to the refresh interval, triggering asynchronous cache refresh logic and returning the business cache value in the actual cache object.
In this embodiment, the cache can be refreshed dynamically and acquired through a unified tool class. A set of cache-related parameters for preventing cache breakdown serves as the entry for acquiring the cache, and all breakdown-prevention logic is packaged behind that entry, providing a more stable, real-time, and simpler breakdown-prevention strategy and improving software development efficiency.
Specifically, as one embodiment and referring to fig. 1, the cache acquisition method provided in this embodiment mainly comprises the following steps:
step 101, receiving the input cache related parameters.
In this step, the required cache-related parameters are passed in. As a preferred embodiment, the cache-related parameters mainly comprise the key of the cache value, the actual expiration time of the key (expireTime), the refresh interval (refreshInterval), the maximum time a refresh thread may hold the distributed lock (maxCalcTime), and a dataLoader function for loading the cache value. Of course, the embodiment does not limit the cache-related parameters to these; they may be selected and adjusted according to actual or anticipated requirements.
The cache-related parameters are described in detail below with reference to fig. 3.
The key identifies the value to be cached; the corresponding cached value can be found through the key. The cache value is the data that the business needs to cache.
In this embodiment, the actual cache object is the data model actually stored in the cache layer. It wraps a layer around the business cache value that really needs caching and additionally records the time at which the cached data was last updated. What is stored in the cache layer is this actual cache object rather than the incoming business cache value, so that on subsequent acquisitions the last update time can be examined to decide whether to trigger the asynchronous cache refresh logic.
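As an illustrative sketch of this data model (the disclosure prescribes no implementation language, and the field and method names here are hypothetical), the wrapper might look like:

```python
import time
from dataclasses import dataclass, field
from typing import Any

@dataclass
class ActualCacheObject:
    """What is actually stored in the cache layer: the business cache
    value wrapped together with the time of the last refresh."""
    value: Any                                        # business cache value
    last_refresh_time: float = field(default_factory=time.time)

    def needs_refresh(self, refresh_interval: float) -> bool:
        # The refresh logic is triggered once the elapsed time since the
        # last refresh reaches the refresh interval (logical expiration).
        return time.time() - self.last_refresh_time >= refresh_interval
```

The acquisition path stores and reads objects of this shape instead of the raw business value.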
expireTime is the actual expiration time of the cache entry: if the entry receives no update operation, it expires after the specified cache time, i.e. it is physically deleted and can no longer be accessed.
refreshInterval is the interval of the cache's triggerable refresh mechanism. When, during an acquisition, the interval between the last refresh time in the actual cache object and the current moment is greater than or equal to refreshInterval, a thread is started in the background to execute the cache refresh logic. refreshInterval may be understood as the logical expiration time of the cache.
The background thread executing the cache refresh logic uses a distributed lock mechanism to ensure that only one refresh runs at any given time. The current request still returns the cache value held in the actual cache object.
maxCalcTime is the maximum hold time of the distributed lock, and therefore also the maximum time allowed for computing the new cache value; it ensures that the lock expires naturally if it is never released because of system load, network problems, or similar causes. Once the lock has been held longer than maxCalcTime, it is released automatically, and the next request that acquires the cache triggers the cache refresh logic again.
dataLoader is the function that loads the business cache value. The calling side defines its own computation logic inside this function, and the object the function returns is the object that actually needs to be cached. It is called back when the cache is acquired by key and the refresh interval has been reached, triggering the refresh mechanism; the expiration time expireTime is reset after the refresh completes.
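Gathered together, the five parameters form the argument set of the unified entry. An illustrative sketch (field names follow the description above; the types and the ordering constraint in `__post_init__` are assumptions):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class CacheParams:
    """The cache-related parameters passed to the unified cache entry."""
    key: str                            # key of the value to be cached
    expire_time: float                  # expireTime: physical expiration (s)
    refresh_interval: float             # refreshInterval: logical expiration (s)
    max_calc_time: float                # maxCalcTime: max lock-hold time (s)
    data_loader: Callable[[], Any]      # dataLoader: computes the value

    def __post_init__(self):
        # Assumed invariant: the logical expiration must come before the
        # physical one, otherwise entries would be physically deleted
        # before the refresh mechanism ever fired.
        assert self.refresh_interval < self.expire_time
```

A caller would construct one `CacheParams` per cached value and hand it to the tool class as the single entry point.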
Step 102, based on the key, acquiring an actual cache object.
In this step, in response to obtaining the actual cache object corresponding to the key, the business cache value in it is taken out and returned.
In this step, in response to failing to obtain an actual cache object corresponding to the key, the following steps are specifically performed:
Step 1021, executing the dataLoader function to obtain the business cache value to be cached;
Step 1022, creating an actual cache object, setting the business cache value in it, and taking the current moment as the last refresh time;
Step 1023, storing the actual cache object to a cache service;
Step 1024, returning the business cache value generated in step 1021.
Step 103, judging whether the interval between the last refreshing time and the current time in the actual cache object is greater than or equal to the refreshing interval, if yes, executing step 104, and if not, executing step 105.
In this step, the last refresh time field is read from the actual cache object obtained in step 102, and the interval between the current moment and the last refresh time is computed. Step 104 is executed when that interval is greater than or equal to the refresh interval received in step 101; otherwise step 105 is executed.
Step 104, triggering asynchronous cache refreshing logic.
As an optional implementation, this step, executed in response to the interval between the last refresh time in the actual cache object and the current moment being greater than or equal to the refresh interval, specifically comprises the following steps:
step 1041, returning the service cache value in the existing actual cache object, and creating a new thread to execute the asynchronous cache refreshing logic.
In this embodiment, the service cache value in the existing actual cache object is returned without blocking, so as to ensure that the request for obtaining the cache is not blocked.
Step 1042, attempting to acquire a distributed lock by the new thread according to the key, ensures that global only one refresh logic is executing, wherein the maximum lock time is taken as the maximum lock time.
Step 1043, in response to obtaining the lock, executing the dataLoader function to obtain the business cache value to be cached.
Step 1044, creating an actual cache object, setting the business cache value in it, and taking the current moment as the last refresh time.
Step 1045, saving the actual cache object to the cache service and releasing the distributed lock held by the current thread.
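Steps 1041 to 1045 can be sketched in Python as follows. This is a single-process illustration: per-key `threading.Lock` objects stand in for the distributed lock (a real deployment might use, for example, a Redis lock whose expiry is maxCalcTime), and a dictionary stands in for the cache service.

```python
import threading
import time

cache_service = {}            # stands in for the real cache layer
_refresh_locks = {}           # per-key locks: local stand-in for the
_guard = threading.Lock()     # distributed lock keyed by the cache key

def _try_lock(key):
    """Step 1042: non-blocking acquisition of the per-key lock, so that
    only one refresh for a given key runs at a time."""
    with _guard:
        lock = _refresh_locks.setdefault(key, threading.Lock())
    return lock if lock.acquire(blocking=False) else None

def trigger_async_refresh(key, data_loader):
    """Steps 1041-1045: refresh in a new thread while the caller returns
    the existing business cache value without blocking."""
    def worker():
        lock = _try_lock(key)
        if lock is None:                 # another thread is refreshing
            return
        try:
            value = data_loader()        # 1043: recompute the value
            cache_service[key] = {       # 1044: new actual cache object,
                "value": value,          #       current moment as the
                "last_refresh_time": time.time(),  # last refresh time
            }                            # 1045: saved to the cache...
        finally:
            lock.release()               # ...and the lock released
    thread = threading.Thread(target=worker)
    thread.start()
    return thread
```

The caller never waits on `worker`; it has already returned the stale value by the time the refresh completes.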
Step 105, returning the business cache value in the actual cache object.
In this step, the business cache value in the actual cache object is returned in response to the interval between the last refresh time in the actual cache object and the current moment being smaller than the refresh interval.
This embodiment further provides a cache acquisition system that uses the above cache acquisition method to prevent cache breakdown. Business code then only needs to execute the cache acquisition logic through a single entry; cache breakdown is effectively prevented by real-time staleness checks and non-blocking asynchronous refreshes, while stability, real-time behavior, and simplicity are all preserved, improving usability and software development efficiency and reducing maintenance costs.
The cache acquisition method for preventing cache breakdown provided by this embodiment mainly has the following beneficial effects:
1. acquiring the cache and refreshing the cache are packaged in one method, so it is simple to use;
2. unless the cache is empty, cache updates are asynchronous and no attempt to acquire the cache ever blocks, which improves performance;
3. the refresh interval is checked at acquisition time to decide whether to refresh, so real-time behavior is good; the cache is never refreshed while nobody accesses it, so resource utilization is high;
4. for any one cache update, only one thread in the entire distributed system is computing, so resource consumption is low.
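Putting steps 101 through 105 together, the unified entry might be sketched as follows. This is a simplified, single-process illustration of the embodiment, not the patented implementation: the distributed lock becomes a local lock and expireTime handling is omitted.

```python
import threading
import time

cache_service = {}                  # stands in for the cache layer
_refresh_lock = threading.Lock()    # local stand-in for the distributed lock

def get_cache(key, refresh_interval, data_loader):
    """Unified entry: a miss loads synchronously; a stale hit returns the
    old value at once and refreshes in the background; a fresh hit just
    returns the cached value."""
    wrapper = cache_service.get(key)
    if wrapper is None:
        # Steps 1021-1024: build and store the actual cache object.
        value = data_loader()
        cache_service[key] = {"value": value,
                              "last_refresh_time": time.time()}
        return value
    if time.time() - wrapper["last_refresh_time"] >= refresh_interval:
        # Steps 1041-1045: non-blocking asynchronous refresh under a lock.
        def worker():
            if _refresh_lock.acquire(blocking=False):
                try:
                    cache_service[key] = {"value": data_loader(),
                                          "last_refresh_time": time.time()}
                finally:
                    _refresh_lock.release()
        threading.Thread(target=worker).start()
    return wrapper["value"]         # never blocks on a refresh
```

Callers always get an answer immediately; only the very first request for a key (or a request after physical expiration) pays the cost of computing the value.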
Fig. 4 is a schematic structural diagram of an electronic device according to the present embodiment. The electronic device includes a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the cache acquisition method in the above embodiments when executing the program. The electronic device 30 shown in fig. 4 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 4, the electronic device 30 may be embodied in the form of a general purpose computing device, which may be a server device, for example. Components of electronic device 30 may include, but are not limited to: the at least one processor 31, the at least one memory 32, a bus 33 connecting the different system components, including the memory 32 and the processor 31.
The bus 33 includes a data bus, an address bus, and a control bus.
Memory 32 may include volatile memory such as Random Access Memory (RAM) 321 and/or cache memory 322, and may further include Read Only Memory (ROM) 323.
Memory 32 may also include a program/utility 325 having a set (at least one) of program modules 324, such program modules 324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The processor 31 executes various functional applications and data processing, such as the cache acquisition method in the above embodiments of the present disclosure, by executing a computer program stored in the memory 32.
The electronic device 30 may also communicate with one or more external devices 34 (e.g., keyboard, pointing device, etc.). Such communication may take place through an input/output (I/O) interface 35. The electronic device 30 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 36. As shown in fig. 4, the network adapter 36 communicates with the other modules of the electronic device 30 via the bus 33. It should be appreciated that, although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 30, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, data backup storage systems, and the like.
It should be noted that although several units/modules or sub-units/modules of an electronic device are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more units/modules described above may be embodied in one unit/module in accordance with embodiments of the present disclosure. Conversely, the features and functions of one unit/module described above may be further divided into ones that are embodied by a plurality of units/modules.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps in the cache acquisition method in the above embodiments.
More specifically, among others, readable storage media may be employed including, but not limited to: portable disk, hard disk, random access memory, read only memory, erasable programmable read only memory, optical storage device, magnetic storage device, or any suitable combination of the foregoing.
In a possible implementation manner, the present disclosure may also be implemented in the form of a program product, which includes a program code for causing a terminal device to perform steps in implementing the cache acquisition method as in the above embodiments, when the program product is executed on the terminal device.
The program code for carrying out the present disclosure may be written in any combination of one or more programming languages, and may execute entirely on the user's device, partly on the user's device, as a stand-alone software package, partly on the user's device and partly on a remote device, or entirely on the remote device.
While specific embodiments of the present disclosure have been described above, those skilled in the art will appreciate that these are by way of example only, and that the scope of the disclosure is defined by the appended claims. Those skilled in the art may make various changes and modifications to these embodiments without departing from the principles and spirit of the disclosure, and such changes and modifications fall within the scope of the disclosure.

Claims (8)

1. A cache acquisition method, comprising the following steps:
receiving incoming cache-related parameters, wherein the cache-related parameters comprise a key of a cache value and a refresh interval;
acquiring an actual cache object based on the key; and
in response to the interval between the last refresh time in the actual cache object and the current time being greater than or equal to the refresh interval, triggering asynchronous cache refresh logic and returning the business cache value in the actual cache object;
wherein the cache-related parameters further comprise a dataLoader function for loading a cache value;
the step of acquiring the actual cache object based on the key comprises:
in response to failing to acquire an actual cache object corresponding to the key, executing the dataLoader function to obtain a business cache value;
creating an actual cache object, setting the business cache value in the actual cache object, and taking the current time as the last refresh time;
saving the actual cache object to a cache service; and
returning the business cache value obtained by executing the dataLoader function;
wherein the cache-related parameters further comprise a maximum lock time for which a refreshing thread holds a distributed lock;
the step of triggering asynchronous cache refresh logic and returning the business cache value in the actual cache object, in response to the interval between the last refresh time in the actual cache object and the current time being greater than or equal to the refresh interval, specifically comprises:
in response to the interval between the last refresh time in the actual cache object and the current time being greater than or equal to the refresh interval, creating a new thread to execute the asynchronous cache refresh logic;
acquiring, by the new thread, a distributed lock according to the key, with the maximum lock time as the lock's maximum holding time;
in response to acquiring the lock, executing the dataLoader function to obtain a business cache value to be cached;
creating an actual cache object, setting the business cache value in the actual cache object, and taking the current time as the last refresh time; and
saving the actual cache object to the cache service and releasing the distributed lock held by the current thread.
2. The cache acquisition method of claim 1, wherein the step of creating a new thread to execute the asynchronous cache refresh logic comprises:
returning the business cache value in the existing actual cache object and creating a new thread to execute the asynchronous cache refresh logic.
3. The cache acquisition method of claim 2, wherein the step of returning the business cache value in the existing actual cache object comprises:
returning the business cache value in the existing actual cache object without blocking.
4. The cache acquisition method of claim 1, wherein the step of acquiring the actual cache object based on the key comprises:
in response to acquiring the actual cache object corresponding to the key, retrieving and returning the business cache value in the actual cache object.
5. The cache acquisition method of claim 1, further comprising:
returning the business cache value in the actual cache object in response to the interval between the last refresh time in the actual cache object and the current time being less than the refresh interval.
6. The cache acquisition method of claim 1, wherein the cache-related parameters further comprise an actual expiration time of the key.
7. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the cache acquisition method according to any one of claims 1 to 6 when executing the computer program.
8. A computer readable medium having stored thereon computer instructions which, when executed by a processor, implement a cache acquisition method as claimed in any one of claims 1 to 6.
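Claim 1's "maximum lock time for which a refreshing thread holds a distributed lock" implies a lock that expires on its own, so a crashed refresh thread cannot block later refreshes forever. The sketch below is a hypothetical in-memory analogue of that behavior, modeled on the well-known Redis `SET key token NX PX` pattern (the names `acquire_lock`/`release_lock` and the table layout are illustrative assumptions, not the patent's implementation):

```python
import time
import uuid

_lock_table = {}  # lock_key -> (owner_token, expiry_timestamp)

def acquire_lock(lock_key, max_lock_time):
    """Try to take the lock; return an owner token on success, else None.

    A lock acquired here expires automatically after max_lock_time seconds,
    so an expired holder no longer blocks new acquirers.
    """
    now = time.monotonic()
    holder = _lock_table.get(lock_key)
    if holder is not None and holder[1] > now:
        return None  # held by someone else and not yet expired
    token = uuid.uuid4().hex
    _lock_table[lock_key] = (token, now + max_lock_time)
    return token

def release_lock(lock_key, token):
    """Release only if we still own the lock (compare token, then delete)."""
    holder = _lock_table.get(lock_key)
    if holder is not None and holder[0] == token:
        del _lock_table[lock_key]
        return True
    return False
```

Releasing only when the token still matches prevents a refresh thread whose lock already expired (and was retaken by another thread) from accidentally releasing the new holder's lock.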
CN202111004002.4A 2021-08-30 2021-08-30 Cache acquisition method, device and computer readable medium Active CN113742381B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111004002.4A CN113742381B (en) 2021-08-30 2021-08-30 Cache acquisition method, device and computer readable medium


Publications (2)

Publication Number Publication Date
CN113742381A CN113742381A (en) 2021-12-03
CN113742381B true CN113742381B (en) 2023-07-25

Family

ID=78733778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111004002.4A Active CN113742381B (en) 2021-08-30 2021-08-30 Cache acquisition method, device and computer readable medium

Country Status (1)

Country Link
CN (1) CN113742381B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6643289B1 (en) * 1999-12-29 2003-11-04 3Com Corporation Method of MPOA status change notification
CN105373369A (en) * 2014-08-25 2016-03-02 北京皮尔布莱尼软件有限公司 Asynchronous caching method, server and system
CN105630812A (en) * 2014-10-30 2016-06-01 阿里巴巴集团控股有限公司 Refreshing method and device of cluster application cache
CN106502589A (en) * 2016-10-21 2017-03-15 普元信息技术股份有限公司 The loading of caching or the system and method for persistence is realized based on cloud computing
CN106815287A (en) * 2016-12-06 2017-06-09 中国银联股份有限公司 A kind of buffer memory management method and device
CN106844784A (en) * 2017-03-14 2017-06-13 上海网易小额贷款有限公司 Data cache method, device and computer-readable recording medium
CN109032771A (en) * 2018-05-31 2018-12-18 深圳壹账通智能科技有限公司 Local cache method, apparatus, computer equipment and storage medium
CN110865768A (en) * 2018-08-27 2020-03-06 中兴通讯股份有限公司 Write cache resource allocation method, device, equipment and storage medium
CN111414392A (en) * 2020-03-25 2020-07-14 浩鲸云计算科技股份有限公司 Cache asynchronous refresh method, system and computer readable storage medium
CN111966719A (en) * 2020-10-21 2020-11-20 四川新网银行股份有限公司 Method for refreshing local data cache of distributed consumer credit system in real time
CN112486948A (en) * 2020-11-25 2021-03-12 福建省数字福建云计算运营有限公司 Real-time data processing method
CN113010278A (en) * 2021-02-19 2021-06-22 建信金融科技有限责任公司 Batch processing method and system for financial insurance core system
CN113312391A (en) * 2021-06-01 2021-08-27 上海万物新生环保科技集团有限公司 Method and equipment for cache asynchronous delay refreshing

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9531720B2 (en) * 2014-09-02 2016-12-27 Akamai Technologies, Inc. System and methods for leveraging an object cache to monitor network traffic
US10650475B2 (en) * 2016-05-20 2020-05-12 HomeAway.com, Inc. Hierarchical panel presentation responsive to incremental search interface
US10665210B2 (en) * 2017-12-29 2020-05-26 Intel Corporation Extending asynchronous frame updates with full frame and partial frame notifications


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Optimization of the task scheduling system for the Phase IV automated terminal of Yangshan Port; Ding Yihua et al.; Port Science & Technology (《港口科技》); 16-21 *

Also Published As

Publication number Publication date
CN113742381A (en) 2021-12-03

Similar Documents

Publication Publication Date Title
EP3796150B1 (en) Storage volume creation method and apparatus, server, and storage medium
US8909996B2 (en) Utilizing multiple storage devices to reduce write latency for database logging
US7783852B2 (en) Techniques for automated allocation of memory among a plurality of pools
US7650400B2 (en) Dynamic configuration and self-tuning on inter-nodal communication resources in a database management system
US7426735B2 (en) Threading and communication architecture for a graphical user interface
US20180060145A1 (en) Message cache management for message queues
US9354989B1 (en) Region based admission/eviction control in hybrid aggregates
US9128895B2 (en) Intelligent flood control management
EP2541423B1 (en) Replacement policy for resource container
JP2004062869A (en) Method and apparatus for selective caching of transactions in computer system
JPH02228744A (en) Data processing system
CN101137984A (en) Systems, methods, and software for distributed loading of databases
CN108614847A (en) A kind of caching method and system of data
CN113742381B (en) Cache acquisition method, device and computer readable medium
CN101626313A (en) Network management system client and performance data display method thereof
US20110302377A1 (en) Automatic Reallocation of Structured External Storage Structures
US10970175B2 (en) Flexible per-request data durability in databases and other data stores
US7827215B2 (en) Real-time operation by a diskless client computer
CN116450966A (en) Cache access method and device, equipment and storage medium
CN112948399B (en) Serial number generation method and device, computer equipment and storage medium
US20040025007A1 (en) Restricting access to a method in a component
CN115599542A (en) Method and system for realizing shared memory pool
CN109739516B (en) Cloud cache operation method and system
CN113901018A (en) Method and device for identifying file to be migrated, computer equipment and storage medium
US20060064426A1 (en) Apparatus and method for inhibiting non-critical access based on measured performance in a database system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant