CN116992180A - Method, device, terminal equipment and storage medium for caching data - Google Patents


Info

Publication number: CN116992180A
Application number: CN202311063496.2A
Authority: CN (China)
Prior art keywords: cache, front-end interface, identifier, data, policy
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 邱梦兰
Current and original assignee: Ping An Property and Casualty Insurance Company of China Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by: Ping An Property and Casualty Insurance Company of China Ltd


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/957: Browsing optimisation, e.g. caching or content distillation
    • G06F16/9574: Browsing optimisation of access to content, e.g. by caching
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The embodiment of the application provides a method, an apparatus, a terminal device, and a storage medium for caching data, belonging to the technical field of financial technology. The method comprises the following steps: determining a cache policy corresponding to a front-end interface, wherein the cache policy comprises a first policy and a second policy, the first policy obtaining the cache data of the front-end interface based on a set cache time, and the second policy obtaining the cache data of the front-end interface based on a set cache identifier; when the cache policy is the first policy, determining target cache data corresponding to the front-end interface according to the target cache time set by the front-end interface; and when the cache policy is the second policy, acquiring a first cache identifier and a second cache identifier corresponding to the front-end interface, and determining the target cache data corresponding to the front-end interface according to the first cache identifier and the second cache identifier. This solves the problem of cache data failing to be updated in time, realizes intelligent caching of front-end data, and makes the caching of data more efficient and timely.

Description

Method, device, terminal equipment and storage medium for caching data
Technical Field
The present application relates to the technical field of financial science and technology, and in particular, to a method, an apparatus, a terminal device, and a storage medium for caching data.
Background
For some pages accessed through high-concurrency interfaces, the front end adds a data caching function in order to reduce server pressure: the data obtained during a request is cached for a designated time, and on the next request the front end checks whether the interface result is cached; if so, the cached data is fetched directly, and if not, the request is issued again and the result is re-cached for the designated time. For example, in the insurance industry, a time-limited promotion may be launched when a new insurance service is rolled out, so that many users access the interface simultaneously within a short time. This places considerable pressure on the server, and if results cannot be fed back to users in time, user experience and satisfaction suffer greatly. Caching the data can alleviate this problem. However, while caching effectively relieves server pressure, it has the drawback that updated data cannot be obtained in real time: the latest data is pulled only after the cache expires. If the interface data changes while the cache is still valid, users cannot pull the latest data immediately, which may cause abnormal page display and degrade the user experience.
Disclosure of Invention
The main purpose of the embodiments of the application is to provide a method, an apparatus, a terminal device, and a storage medium for caching data, aiming to solve the following problems in existing approaches to relieving server pressure: updated data cannot be acquired in real time, so cached data cannot be updated in time, which causes abnormal display when data is fed back or shown to the user and degrades the user experience.
In a first aspect, an embodiment of the present application provides a method for caching data, including:
determining a cache policy corresponding to a front-end interface, wherein the cache policy comprises a first policy and a second policy, the first policy obtains cache data of the front-end interface and is related to a set cache time, and the second policy obtains cache data of the front-end interface and is related to a set cache identifier;
when the caching strategy is the first strategy, determining target cache data corresponding to the front-end interface according to target cache time set by the front-end interface;
and when the caching strategy is the second strategy, acquiring a first caching identifier and a second caching identifier corresponding to the front-end interface, and determining target caching data corresponding to the front-end interface according to the first caching identifier and the second caching identifier.
In a second aspect, an embodiment of the present application provides an apparatus for caching data, including:
the system comprises a determining policy module, a processing module and a processing module, wherein the determining policy module is used for determining a cache policy corresponding to a front-end interface, the cache policy comprises a first policy and a second policy, the first policy obtains cache data of the front-end interface and is related to set cache time, and the second policy obtains cache data of the front-end interface and is related to set cache identification;
the first policy execution module is used for determining target cache data corresponding to the front-end interface according to the target cache time set by the front-end interface when the cache policy is the first policy;
and the second policy execution module is used for acquiring a first cache identifier and a second cache identifier corresponding to the front-end interface when the cache policy is the second policy, and determining target cache data corresponding to the front-end interface according to the first cache identifier and the second cache identifier.
In a third aspect, embodiments of the present application further provide a terminal device comprising a processor, a memory, a computer program stored on the memory and executable by the processor, and a data bus for enabling connection and communication between the processor and the memory, wherein the computer program, when executed by the processor, implements the steps of any of the methods of caching data provided in the present specification.
In a fourth aspect, embodiments of the present application further provide a storage medium for computer readable storage, wherein the storage medium stores one or more programs executable by one or more processors to implement steps of any of the methods of caching data as provided in the present specification.
The embodiments of the application provide a method, an apparatus, a terminal device, and a storage medium for caching data. The method comprises determining the cache policy adopted for the data caching corresponding to a front-end interface, wherein the cache policy comprises a first policy and a second policy: the first policy acquires the cache data of the front-end interface based on a set cache time, and the second policy acquires the cache data of the front-end interface based on a set cache identifier; the cache data is then acquired according to the cache policy corresponding to the front-end interface. When the cache policy is the first policy, the target cache data corresponding to the front-end interface is acquired or updated when the target cache time set by the front-end interface is reached. When the cache policy is the second policy, the first cache identifier and the second cache identifier corresponding to the front-end interface are acquired, and whether to update or acquire the target cache data corresponding to the front-end interface is determined by comparing the two identifiers. This solves the problem of cache data failing to be updated in time: a matching cache policy is set according to the requirements of the front-end interface, so that cache data can be acquired in a timely and efficient manner while server pressure is relieved, and the cache is configured more flexibly, giving the user a better experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments are briefly introduced below. It is evident that the drawings in the following description show only some embodiments of the present application, and other drawings may be obtained from these drawings without inventive effort by a person skilled in the art.
FIG. 1 is a flowchart illustrating a method for caching data according to an embodiment of the present application;
FIG. 2 is a flow chart of the substeps of step S103 of the method of caching data in FIG. 1;
FIG. 3 is a schematic diagram of a scenario in which the method for caching data according to the present embodiment is implemented;
Fig. 4 is a schematic diagram of an apparatus for caching data according to an embodiment of the present application;
Fig. 5 is a schematic block diagram of a structure of a terminal device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The flow diagrams depicted in the figures are merely illustrative and not necessarily all of the elements and operations/steps are included or performed in the order described. For example, some operations/steps may be further divided, combined, or partially combined, so that the order of actual execution may be changed according to actual situations.
It is to be understood that the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The embodiment of the application provides a method, a device, terminal equipment and a storage medium for caching data. The method for caching data can be applied to terminal equipment, and the terminal equipment can be electronic equipment such as tablet computers, notebook computers, desktop computers, personal digital assistants, wearable equipment and the like. Or may be a server or a cluster of servers.
Some embodiments of the application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a flowchart of a method for caching data according to an embodiment of the present application.
As shown in fig. 1, the method for caching data includes steps S101 to S103.
Step S101, determining a cache policy corresponding to a front-end interface, wherein the cache policy comprises a first policy and a second policy, the first policy obtains cache data of the front-end interface and is related to set cache time, and the second policy obtains cache data of the front-end interface and is related to set cache identification.
Illustratively, different front-end interfaces need to adopt different caching policies according to their functions; the caching policies include a first policy related to a set cache time and a second policy related to a set cache identifier.
For example, if a front-end interface is mainly used to display fixed interface data whose content is updated every 3 months, that front-end interface can adopt the first policy and update its cache data every 3 months. If a front-end interface is mainly used to display data recommended to the user, the recommended content changes as the user uses the product; when the recommended content obtained by the back end changes, the corresponding cache data in the front-end interface must change with it, so the second policy can be adopted, and the cache data can be updated in time through the identifier-based policy agreed between the front end and the back end.
For example, when obtaining the cache policy corresponding to the front-end interface, a keyword field may be set in the front-end interface; by querying the keyword result corresponding to this field, different keyword results can represent different cache policies.
In some embodiments, the determining the cache policy corresponding to the front-end interface includes: requesting a main interface to acquire an interface cache identifier corresponding to the front-end interface; and determining a cache strategy corresponding to the front-end interface according to the interface cache identifier.
In an exemplary embodiment, a main interface (main) exists in the program, and a variable in the main interface stores the interface cache identifier corresponding to each front-end interface, so the interface cache identifier corresponding to a front-end interface is obtained by requesting the main interface. The cache policy corresponding to the front-end interface is then determined according to the mapping relation between interface cache identifiers and cache policies.
For example, a variable front2cache exists in the main interface whose data format is a dictionary, e.g. front2cache = { front-end interface 1: time2cache, front-end interface 2: label2cache }. The name of the front-end interface is sent to the main interface, and the interface cache identifier corresponding to that name is returned. If time2cache in the interface cache identifier represents the first policy and label2cache represents the second policy, the cache policy corresponding to the front-end interface can be obtained according to this mapping relation.
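The policy lookup described above can be sketched in TypeScript. The mapping name front2cache and the identifier values time2cache and label2cache follow the example in the text; the helper name getCachePolicy and the interface names are hypothetical, and the dictionary stands in for what the text describes as a request to the main interface.

```typescript
// Hypothetical sketch of the main-interface policy lookup. In the patent
// the mapping lives behind a request to the main interface; here it is a
// plain dictionary for illustration.
type CachePolicy = "time2cache" | "label2cache";

const front2cache: Record<string, CachePolicy> = {
  interfaceA: "time2cache",  // first policy: expires by cache time
  interfaceB: "label2cache", // second policy: expires by cache identifier
};

function getCachePolicy(interfaceName: string): CachePolicy | undefined {
  // Send the front-end interface name, get back its interface cache
  // identifier, which maps onto the first or second policy.
  return front2cache[interfaceName];
}
```

An unknown interface name simply yields undefined, letting the caller fall back to an uncached request.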
In some embodiments, before the requesting master interface obtains the interface cache identifier corresponding to the front-end interface, the method further includes: acquiring a first data identifier corresponding to the front-end interface and a second data identifier of cache data corresponding to the front-end interface; and when the first data identifier is inconsistent with the second data identifier, clearing the cache data corresponding to the front-end interface, and storing the first data identifier into the front-end interface.
Illustratively, the first data identifier corresponding to the current processing object of the front-end interface and the second data identifier corresponding to the cache data are obtained and compared. When the first data identifier is inconsistent with the second data identifier, the cache data corresponding to the front-end interface is cleared, and the first data identifier is stored in the front-end interface for use when the cache data is next acquired.
For example, suppose the front-end interface needs to acquire insurance data related to city A, while the cache stores insurance data related to city B. The first data identifier (city A) and the second data identifier (city B) are then inconsistent, so the cache data must be emptied and the first data identifier (city A) stored in the front-end interface; when the cache data needs to be acquired again, the corresponding cache data can be obtained according to the first data identifier (city A).
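A minimal sketch of this reconciliation step, assuming a simple in-memory Map as the cache; the names reconcileCache and CacheEntry are hypothetical and do not come from the patent.

```typescript
// Hypothetical sketch: when the identifier of the current request (the
// first data identifier, e.g. "cityA") differs from the identifier stored
// with the cache (the second data identifier, e.g. "cityB"), the stale
// cache data is cleared and the first identifier is stored for reuse.
interface CacheEntry {
  dataId: string; // identifier of the data the entry belongs to
  data: unknown;  // cached payload (null after clearing)
}

function reconcileCache(
  cache: Map<string, CacheEntry>,
  interfaceName: string,
  currentDataId: string,
): void {
  const entry = cache.get(interfaceName);
  if (entry && entry.dataId !== currentDataId) {
    // Identifiers disagree: empty the cache data, keep the new identifier.
    cache.set(interfaceName, { dataId: currentDataId, data: null });
  }
}
```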
Step S102, when the caching policy is the first policy, determining target cache data corresponding to the front-end interface according to the target cache time set by the front-end interface.
When the cache policy is the first policy, if the current time meets the target cache time, the corresponding interface is called to obtain target cache data corresponding to the front-end interface.
For example, if the validity period of the cache data is 7 days, then 7 days after the cache data is acquired, the interface for updating the cache data is called again to obtain the corresponding target cache data.
In some embodiments, the determining, according to the target cache time set by the front-end interface, target cache data corresponding to the front-end interface includes: acquiring target cache time set by the front-end interface and the cached time of the current cache data of the front-end interface; and when the target cache time and the cached time meet preset conditions, calling an instruction of the data to be cached by the front-end interface to obtain target cache data.
Illustratively, the target cache time set by the current front-end interface for its cache data and the cached time of the current cache data of the front-end interface are determined; when the cached time reaches the target cache time, the front-end interface invokes the instruction for fetching the data to be cached to obtain the target cache data.
For example, the cache time is represented by cacheTime, which may be defined as a numeric type or a function type. When cacheTime is a numeric value, say a target cache time of 10 hours, a countdown can be used: the cached time equals the target cache time minus the remaining countdown duration. When the cached time reaches the target cache time, the cache data needs to be updated, and the front-end interface calls the data-acquisition instruction for the data to be cached to obtain the target cache data.
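The expiry check of the first policy can be sketched as a pure function over timestamps; the name shouldRefresh and the millisecond units are assumptions, not names from the patent.

```typescript
// Hypothetical sketch of the first policy's expiry check: the cached time
// is the time elapsed since the data was stored; when it reaches the
// target cache time (e.g. 10 hours), the data must be refetched.
function shouldRefresh(
  cachedAtMs: number,  // timestamp at which the data was cached
  cacheTimeMs: number, // target cache time (validity period)
  nowMs: number,       // current timestamp
): boolean {
  return nowMs - cachedAtMs >= cacheTimeMs;
}
```

A caller would typically pass Date.now() as the current timestamp and refetch whenever the function returns true.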
Step S103, when the caching policy is the second policy, acquiring a first cache identifier and a second cache identifier corresponding to the front-end interface, and determining target cache data corresponding to the front-end interface according to the first cache identifier and the second cache identifier.
When the caching policy is the second policy, the first cache identifier and the second cache identifier corresponding to the cache data currently stored by the front-end interface are obtained, and the first cache identifier and the second cache identifier are compared, so as to determine whether to obtain the target cache data corresponding to the front-end interface.
In some embodiments, referring to fig. 2, step S103 of obtaining the first cache identifier and the second cache identifier corresponding to the front-end interface and determining the target cache data corresponding to the front-end interface accordingly includes substeps S1031 to S1032.
Substep S1031: obtain the second cache identifier corresponding to the front-end interface from a database, wherein the database stores the front-end interface and its corresponding second cache identifier, and the second cache identifier is used to represent whether the data corresponding to the cache data has changed.
Illustratively, whether the data of the back-end interface corresponding to the data to be cached by the front-end interface has changed is represented by the second cache identifier, which is stored in a database. When judging whether the cache data should be updated, the second cache identifier corresponding to the front-end interface is obtained from the database.
For example, after the cache data is obtained, the second cache identifier is True, and if the data in the back-end interface changes, the second cache identifier changes to False at this time.
Substep S1032: when the first cache identifier is inconsistent with the second cache identifier, the interface is called again to acquire the target cache data corresponding to the front-end interface.
Illustratively, after the cache data is acquired, the second cache identifier is assigned to the first cache identifier. If the back-end data later changes, the second cache identifier changes, so a comparison shows that the two identifiers are inconsistent; the interface for acquiring the cache data is then called again to obtain the target cache data corresponding to the front-end interface.
In some embodiments, calling the interface again to obtain the target cache data when the first cache identifier and the second cache identifier are inconsistent comprises: when the data to be cached corresponding to the front-end interface changes, incrementing the second cache identifier so that the first cache identifier and the second cache identifier become inconsistent, calling the interface again to obtain the target cache data corresponding to the front-end interface, and assigning the second cache identifier to the first cache identifier.
Illustratively, self-increment means that when the data to be cached changes, the value of the second cache identifier is increased, making the values of the first and second cache identifiers inconsistent; the interface is then called again to obtain the target cache data corresponding to the front-end interface, and once it is obtained, the second cache identifier is assigned to the first cache identifier.
For example, the second cache identifier corresponding to the interface that pulls configuration data is represented by cacheId. If the configuration is updated or changed, the back end increments cacheId, so the front-end interface determines whether the cache is invalid by checking whether the cacheId of the last cached request is the same as the cacheId of the corresponding interface obtained from the current request to the main interface. This achieves caching while still pulling the latest data.
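The cacheId comparison can be sketched as follows. The name cacheId follows the text; LabelCache, syncByCacheId, and fetchData are hypothetical names introduced for illustration.

```typescript
// Hypothetical sketch of the second policy: the back end increments
// cacheId whenever the configuration changes; the front end refetches
// when its last-seen id differs from the current one, then adopts it.
interface LabelCache {
  lastCacheId: number; // first cache identifier (last request's cacheId)
  data: unknown;       // cached payload
}

function syncByCacheId(
  cache: LabelCache,
  currentCacheId: number,   // second cache identifier from the main interface
  fetchData: () => unknown, // calls the data interface again
): unknown {
  if (cache.lastCacheId !== currentCacheId) {
    cache.data = fetchData();           // cache invalid: pull the latest data
    cache.lastCacheId = currentCacheId; // assign the second identifier to the first
  }
  return cache.data;
}
```

When the identifiers match, the cached payload is served without touching the data interface, which is the point of the scheme.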
In some embodiments, the caching policy further includes a third policy, where the third policy is used to characterize that the target cache data corresponding to the front-end interface is permanently stored, and when the caching policy is the third policy, the target cache data corresponding to the front-end interface will not change.
For example, when the cached data does not need to be changed, a corresponding third policy may be represented according to the third cache identifier, so as to represent that the target cached data corresponding to the front-end interface is permanently saved.
For example, the cache time can be set according to the data returned by the interface. If a newcomer block on the current page only displays data related to a newcomer gift pack, a permanent cache can be set according to the result returned by the request: if the newcomer flag returned by the newcomer-block interface is false, no expiration time is set for the cache, so the interface does not have to be pulled on every visit and is guaranteed to be called only for new users. This saves a large number of unnecessary requests.
Alternatively, the cache time is represented by cacheTime, which may be defined as a numeric type or a function type. If cacheTime is empty, a default cache duration is used; after the request finishes, the cache policy controls the cache duration according to cacheTime; and if cacheTime is -1, the cache never expires.
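The cacheTime convention just described (empty for a default duration, -1 for a cache that never expires) can be sketched as below; DEFAULT_CACHE_MS and isExpired are assumed names, and the default duration is an arbitrary placeholder.

```typescript
// Hypothetical sketch of the cacheTime convention: an empty (undefined)
// cacheTime falls back to a default duration, a positive number is used
// directly, and -1 marks a permanent cache that never becomes invalid.
const DEFAULT_CACHE_MS = 60_000; // assumed default cache duration

function isExpired(cacheTime: number | undefined, elapsedMs: number): boolean {
  if (cacheTime === -1) return false;          // permanent cache (third policy)
  const limit = cacheTime ?? DEFAULT_CACHE_MS; // empty means default duration
  return elapsedMs >= limit;
}
```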
Referring to fig. 3, fig. 3 is a schematic diagram of a scenario in which the method for caching data provided in this embodiment is implemented. As shown in fig. 3, whether the cached data corresponding to the front-end interface needs to be emptied is determined according to the first data identifier; if the cached data is emptied, the latest data identifier is stored in the front-end interface. The main interface is then requested to acquire the cache policy corresponding to the front-end interface. When the cache policy is the first policy, whether the cache data is updated is judged according to the cache time; when it is the second policy, the first cache identifier and the second cache identifier are acquired and compared, and the cache data is updated when they are inconsistent. This solves the problem of cache data failing to be updated in time: a matching cache policy is set according to the requirements of the front-end interface, so that cache data is acquired in a timely and efficient manner while server pressure is relieved, and the cache is configured more flexibly, giving the user a better experience.
Referring to fig. 4, fig. 4 is a schematic diagram of an apparatus 200 for caching data according to an embodiment of the present application. The apparatus 200 for caching data includes a determining policy module 201, a first policy execution module 202, and a second policy execution module 203. The determining policy module 201 is configured to determine a cache policy corresponding to a front-end interface, wherein the cache policy comprises a first policy and a second policy, the first policy obtaining cache data of the front-end interface based on a set cache time and the second policy obtaining cache data of the front-end interface based on a set cache identifier. The first policy execution module 202 is configured to determine, when the cache policy is the first policy, target cache data corresponding to the front-end interface according to the target cache time set by the front-end interface. The second policy execution module 203 is configured to obtain, when the cache policy is the second policy, a first cache identifier and a second cache identifier corresponding to the front-end interface, and determine the target cache data corresponding to the front-end interface according to the first cache identifier and the second cache identifier.
In some embodiments, the determining policy module 201 performs, in determining the cache policy corresponding to the front-end interface:
requesting a main interface to acquire an interface cache identifier corresponding to the front-end interface;
and determining a cache strategy corresponding to the front-end interface according to the interface cache identifier.
In some embodiments, before requesting the main interface to obtain the interface cache identifier corresponding to the front-end interface, the determining policy module 201 further performs:
acquiring a first data identifier corresponding to the front-end interface and a second data identifier of cache data corresponding to the front-end interface;
and when the first data identifier is inconsistent with the second data identifier, clearing the cache data corresponding to the front-end interface, and storing the first data identifier into the front-end interface.
In some embodiments, the first policy execution module 202 executes, in the process of determining, according to the target cache time set by the front-end interface, target cache data corresponding to the front-end interface:
acquiring target cache time set by the front-end interface and the cached time of the current cache data of the front-end interface;
and when the target cache time and the cached time meet preset conditions, calling an instruction of the data to be cached by the front-end interface to obtain target cache data.
In some embodiments, the second policy execution module 203 executes, in the process of obtaining the first cache identifier and the second cache identifier corresponding to the front-end interface, and determining, according to the first cache identifier and the second cache identifier, the target cache data corresponding to the front-end interface:
obtaining the second cache identifier corresponding to the front-end interface from a database, wherein the database stores the front-end interface and its corresponding second cache identifier, and the second cache identifier is used to represent whether the data corresponding to the cache data has changed;
and when the first cache identifier is inconsistent with the second cache identifier, the interface is recalled to acquire target cache data corresponding to the front-end interface.
In some embodiments, in the process of calling the interface again to obtain the target cache data corresponding to the front-end interface when the first cache identifier and the second cache identifier are inconsistent, the second policy execution module 203 executes:
when the data to be cached corresponding to the front-end interface changes, the second cache identifier is incremented, so that the first cache identifier becomes inconsistent with the second cache identifier; the interface is then called again to obtain the target cache data corresponding to the front-end interface, and the second cache identifier is assigned to the first cache identifier.
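The write side of this scheme amounts to a per-interface counter that is incremented whenever the underlying data changes; `IdentifierStore` and its methods are hypothetical names used only for illustration.

```typescript
// Illustrative store for second cache identifiers, keyed by interface.
class IdentifierStore {
  private identifiers = new Map<string, number>();

  // Current second cache identifier for an interface (0 if never changed).
  current(interfaceId: string): number {
    return this.identifiers.get(interfaceId) ?? 0;
  }

  // Called whenever the data to be cached changes: incrementing the stored
  // identifier makes it inconsistent with any first identifier held by clients,
  // which forces them to call the interface again.
  bumpOnChange(interfaceId: string): number {
    const next = this.current(interfaceId) + 1;
    this.identifiers.set(interfaceId, next);
    return next;
  }
}
```

After a client refetches, it assigns the new value to its first identifier, restoring consistency until the next change.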
In some embodiments, the caching policy in the apparatus 200 for caching data further includes a third policy, where the third policy is used to characterize that the target cache data corresponding to the front-end interface is permanently stored; when the caching policy is the third policy, the target cache data corresponding to the front-end interface will not change.
Optionally, the apparatus 200 for caching data is applied to a terminal device.
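Taken together, the three policies reduce to a single freshness decision per interface. The policy codes and option names below are assumptions for illustration, not values fixed by this application.

```typescript
// Illustrative dispatch over the three caching policies.
type CachePolicy = "time" | "identifier" | "permanent";

interface FreshnessInputs {
  ageMs?: number;     // first policy: how long the data has been cached
  targetMs?: number;  // first policy: target cache time
  firstId?: number;   // second policy: identifier held with the cache
  secondId?: number;  // second policy: identifier read from the database
}

function shouldRefetch(policy: CachePolicy, inputs: FreshnessInputs): boolean {
  switch (policy) {
    case "time": // first policy: refetch once the age reaches the target time
      return (inputs.ageMs ?? 0) >= (inputs.targetMs ?? Infinity);
    case "identifier": // second policy: refetch when the identifiers disagree
      return inputs.firstId !== inputs.secondId;
    case "permanent": // third policy: the data never changes, so never refetch
      return false;
  }
}
```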
Referring to fig. 5, fig. 5 is a schematic block diagram of a structure of a terminal device according to an embodiment of the present application.
As shown in fig. 5, the terminal device 300 includes a processor 301 and a memory 302, the processor 301 and the memory 302 being connected by a bus 303, such as an I2C (Inter-integrated Circuit) bus.
In particular, the processor 301 is used to provide computing and control capabilities, supporting the operation of the entire terminal device. The processor 301 may be a central processing unit (Central Processing Unit, CPU); the processor 301 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Specifically, the memory 302 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 5 is merely a block diagram of a portion of the structure related to the embodiment of the present application, and does not constitute a limitation of the terminal device to which the embodiment of the present application is applied, and that a specific terminal device may include more or fewer components than those shown in the drawings, or may combine some components, or have a different arrangement of components.
The processor is configured to run a computer program stored in the memory, and implement any one of the methods for caching data provided by the embodiments of the present application when the computer program is executed.
In an embodiment, the processor is configured to run a computer program stored in a memory and to implement the following steps when executing the computer program:
determining a cache policy corresponding to a front-end interface, wherein the cache policy comprises a first policy and a second policy, the first policy obtains cache data of the front-end interface and is related to a set cache time, and the second policy obtains cache data of the front-end interface and is related to a set cache identifier;
when the caching strategy is the first strategy, determining target cache data corresponding to the front-end interface according to target cache time set by the front-end interface;
and when the caching strategy is the second strategy, acquiring a first caching identifier and a second caching identifier corresponding to the front-end interface, and determining target caching data corresponding to the front-end interface according to the first caching identifier and the second caching identifier.
In some embodiments, the processor 301 performs, in the determining the cache policy corresponding to the front-end interface:
requesting a main interface to acquire an interface cache identifier corresponding to the front-end interface;
and determining a cache strategy corresponding to the front-end interface according to the interface cache identifier.
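The two steps above can be sketched as a lookup in which the interface cache identifier returned by the main interface selects the policy. The numeric codes here are assumptions; the application does not fix concrete values.

```typescript
// Illustrative mapping from interface cache identifier to caching policy.
type Policy = "first" | "second" | "third";

function policyFromIdentifier(interfaceCacheId: number): Policy {
  switch (interfaceCacheId) {
    case 1: return "first";  // time-based caching
    case 2: return "second"; // cache-identifier-based caching
    case 3: return "third";  // permanent caching
    default:
      throw new Error(`unknown interface cache identifier: ${interfaceCacheId}`);
  }
}
```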
In some embodiments, before requesting the main interface to acquire the interface cache identifier corresponding to the front-end interface, the processor 301 further performs:
acquiring a first data identifier corresponding to the front-end interface and a second data identifier of cache data corresponding to the front-end interface;
and when the first data identifier is inconsistent with the second data identifier, clearing the cache data corresponding to the front-end interface, and storing the first data identifier into the front-end interface.
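The pre-check above might look as follows: a first data identifier (for example, a front-end release version) is compared with the identifier stored alongside the cache, and a mismatch clears all cached data before any policy runs. All names are illustrative.

```typescript
// Illustrative front-end cache with a stored data identifier.
interface FrontEndCache {
  dataIdentifier: string | null;  // second data identifier kept with the cache
  entries: Map<string, unknown>;  // cached data, keyed by interface
}

// Returns true when the cache was cleared because the identifiers differed.
function syncDataIdentifier(cache: FrontEndCache, firstIdentifier: string): boolean {
  if (cache.dataIdentifier === firstIdentifier) {
    return false; // identifiers consistent: keep the cached data
  }
  cache.entries.clear();                  // clear stale cache data
  cache.dataIdentifier = firstIdentifier; // store the first data identifier
  return true;
}
```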
In some embodiments, the processor 301 performs, in the determining, according to the target cache time set by the front-end interface, the target cache data corresponding to the front-end interface:
acquiring target cache time set by the front-end interface and the cached time of the current cache data of the front-end interface;
and when the target cache time and the cached time meet a preset condition, invoking, through the front-end interface, an instruction to fetch the data to be cached, so as to obtain the target cache data.
In some embodiments, in the process of obtaining the first cache identifier and the second cache identifier corresponding to the front-end interface and determining the target cache data corresponding to the front-end interface according to the first cache identifier and the second cache identifier, the processor 301 performs:
the second cache identifier corresponding to the front-end interface is obtained from a database, wherein the database is used for storing the front-end interface and the second cache identifier corresponding to the front-end interface, and the second cache identifier is used for representing whether the underlying data of the cache data has changed;
and when the first cache identifier is inconsistent with the second cache identifier, the interface is called again to acquire the target cache data corresponding to the front-end interface.
In some embodiments, when the first cache identifier and the second cache identifier are inconsistent, the processor 301 performs, in the process of calling the interface again to acquire the target cache data corresponding to the front-end interface, the following steps:
when the data to be cached corresponding to the front-end interface changes, the second cache identifier is incremented, so that the first cache identifier becomes inconsistent with the second cache identifier; the interface is then called again to obtain the target cache data corresponding to the front-end interface, and the second cache identifier is assigned to the first cache identifier.
In some embodiments, the cache policy further includes a third policy, where the third policy is used to characterize that the target cache data corresponding to the front-end interface is permanently stored; when the cache policy is the third policy, the target cache data corresponding to the front-end interface will not change.
It should be noted that, for convenience and brevity of description, a specific working process of the terminal device described above may refer to a corresponding process in the foregoing method embodiment for caching data, which is not described herein again.
Embodiments of the present application also provide a storage medium for computer readable storage, where the storage medium stores one or more programs that can be executed by one or more processors to implement the steps of any of the methods for caching data provided in the embodiments of the present application.
The storage medium may be an internal storage unit of the terminal device according to the foregoing embodiment, for example, a hard disk or a memory of the terminal device. The storage medium may also be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware embodiment, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Discs (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.
Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
It should be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments. While the application has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and substitutions may be made therein without departing from the spirit and scope of the application as defined by the appended claims. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (10)

1. A method of caching data, the method comprising:
determining a cache policy corresponding to a front-end interface, wherein the cache policy comprises a first policy and a second policy, the first policy obtains cache data of the front-end interface and is related to a set cache time, and the second policy obtains cache data of the front-end interface and is related to a set cache identifier;
when the caching strategy is the first strategy, determining target cache data corresponding to the front-end interface according to target cache time set by the front-end interface;
and when the caching strategy is the second strategy, acquiring a first caching identifier and a second caching identifier corresponding to the front-end interface, and determining target caching data corresponding to the front-end interface according to the first caching identifier and the second caching identifier.
2. The method of claim 1, wherein the determining the cache policy corresponding to the front-end interface comprises:
requesting a main interface to acquire an interface cache identifier corresponding to the front-end interface;
and determining a cache strategy corresponding to the front-end interface according to the interface cache identifier.
3. The method of claim 2, wherein before requesting the main interface to acquire the interface cache identifier corresponding to the front-end interface, the method further comprises:
acquiring a first data identifier corresponding to the front-end interface and a second data identifier of cache data corresponding to the front-end interface;
and when the first data identifier is inconsistent with the second data identifier, clearing the cache data corresponding to the front-end interface, and storing the first data identifier into the front-end interface.
4. The method according to claim 1, wherein the determining the target cache data corresponding to the front-end interface according to the target cache time set by the front-end interface includes:
acquiring target cache time set by the front-end interface and the cached time of the current cache data of the front-end interface;
and when the target cache time and the cached time meet a preset condition, invoking, through the front-end interface, an instruction to fetch the data to be cached, so as to obtain the target cache data.
5. The method of claim 1, wherein the obtaining the first cache identifier and the second cache identifier corresponding to the front-end interface, and determining the target cache data corresponding to the front-end interface according to the first cache identifier and the second cache identifier, comprises:
the second cache identifier corresponding to the front-end interface is obtained from a database, wherein the database is used for storing the front-end interface and the second cache identifier corresponding to the front-end interface, and the second cache identifier is used for representing whether the underlying data of the cache data has changed;
and when the first cache identifier is inconsistent with the second cache identifier, the interface is called again to acquire the target cache data corresponding to the front-end interface.
6. The method of claim 5, wherein when the first cache identifier and the second cache identifier are inconsistent, calling the interface again to acquire the target cache data corresponding to the front-end interface comprises:
when the data to be cached corresponding to the front-end interface changes, the second cache identifier is incremented, so that the first cache identifier becomes inconsistent with the second cache identifier; the interface is then called again to obtain the target cache data corresponding to the front-end interface, and the second cache identifier is assigned to the first cache identifier.
7. The method of claim 1, wherein the caching policy further comprises a third policy, the third policy is used to characterize that the target cache data corresponding to the front-end interface is permanently stored, and when the caching policy is the third policy, the target cache data corresponding to the front-end interface will not change.
8. An apparatus for caching data, comprising:
a policy determining module, configured to determine a cache policy corresponding to a front-end interface, wherein the cache policy comprises a first policy and a second policy, the first policy obtains cache data of the front-end interface and is related to a set cache time, and the second policy obtains cache data of the front-end interface and is related to a set cache identifier;
the first policy execution module is used for determining target cache data corresponding to the front-end interface according to the target cache time set by the front-end interface when the cache policy is the first policy;
and the second policy execution module is used for acquiring a first cache identifier and a second cache identifier corresponding to the front-end interface when the cache policy is the second policy, and determining target cache data corresponding to the front-end interface according to the first cache identifier and the second cache identifier.
9. A terminal device, characterized in that the terminal device comprises a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and to implement the method of caching data according to any one of claims 1 to 7 when the computer program is executed.
10. A computer-readable storage medium, storing one or more programs which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of caching data according to any one of claims 1 to 7.
CN202311063496.2A 2023-08-22 2023-08-22 Method, device, terminal equipment and storage medium for caching data Pending CN116992180A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311063496.2A CN116992180A (en) 2023-08-22 2023-08-22 Method, device, terminal equipment and storage medium for caching data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311063496.2A CN116992180A (en) 2023-08-22 2023-08-22 Method, device, terminal equipment and storage medium for caching data

Publications (1)

Publication Number Publication Date
CN116992180A true CN116992180A (en) 2023-11-03

Family

ID=88526664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311063496.2A Pending CN116992180A (en) 2023-08-22 2023-08-22 Method, device, terminal equipment and storage medium for caching data

Country Status (1)

Country Link
CN (1) CN116992180A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117896440A (en) * 2024-03-15 2024-04-16 江西曼荼罗软件有限公司 Data caching acquisition method and system
CN117896440B (en) * 2024-03-15 2024-05-24 江西曼荼罗软件有限公司 Data caching acquisition method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination