CN114048409A - Cache management method and device, computing equipment and storage medium - Google Patents

Cache management method and device, computing equipment and storage medium

Info

Publication number
CN114048409A
CN114048409A (Application CN202111357841.4A)
Authority
CN
China
Prior art keywords
cache
request
data
instance metadata
management platform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111357841.4A
Other languages
Chinese (zh)
Inventor
郭煜 (Guo Yu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Wangxing Information Technology Co ltd
Original Assignee
Guangzhou Wangxing Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Wangxing Information Technology Co., Ltd.
Priority to CN202111357841.4A
Publication of CN114048409A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/957: Browsing optimisation, e.g. caching or content distillation
    • G06F 16/9574: Browsing optimisation of access to content, e.g. by caching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/958: Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a method and a device for managing a cache comprising one or more cache components. The cache management method comprises the following steps: receiving a data request from a client, the data request comprising a first parameter combination; sending a cache instance metadata retrieval request to a cache access management platform according to the first parameter combination; receiving cache instance metadata returned by the cache access management platform in response to the retrieval request, the cache instance metadata including cache component type information; and constructing a cache request according to the cache instance metadata and acquiring data from the one or more cache components based on the cache request. By means of the unified cache access service, the method dynamically discovers cache components according to their identifiers, shields cache component details, and performs unified cache management.

Description

Cache management method and device, computing equipment and storage medium
Technical Field
The present invention relates to computer technologies, and in particular, to a method and an apparatus for unified cache management, a computing device, and a storage medium.
Background
With the rapid development of computer technologies, especially internet and big data processing technologies, business scenarios involving remote data access and processing, such as online shopping, warehouse management, news, and video services, are continuously being developed and put into use. These scenarios involve large volumes of frequent data access, and the speed requirements for that access keep rising.
Due to limitations of its storage mechanism and access mode, the system's primary storage is generally unable to satisfy such frequent, high-speed data access. In a distributed system the problem is more pronounced. For example, in an online shopping system built on a distributed architecture, merchandise data may be stored on multiple primary storage devices at different locations, while users are spread across different regions or even multiple countries. Such a system may need to process thousands of data requests per minute from users in different geographic areas, this load may persist for long periods, and a large share of the requests often target the same merchandise data. If every data request were served by querying the database directly, the system's response would be very slow and its parallel processing capability severely limited. To overcome these problems, system implementations usually use a cache to dynamically store hot data and boost application performance.
However, developing cache systems presents its own difficulties. On one hand, different systems, or even different devices or functions within the same system, are implemented in different development languages, so interfacing code or frameworks with similar logic must be written separately in each language to connect to the cache components. On the other hand, cache systems, especially distributed cache systems, often employ several different cache components to implement the caching function, such as Redis, Memcached, Pika, and MongoDB. The combination of multiple cache components and multiple development languages imposes a heavy burden on system development: for example, a system using 4 development languages and 3 cache components needs 12 cache middleware clients. This greatly reduces development efficiency and increases the probability of problems during debugging and operation.
Disclosure of Invention
To solve the above problems, the present invention introduces the concept of a unified cache access service, or cache middle layer, which is development-language independent (interaction is over HTTP), shields the interfacing details of specific cache components, and allows cache components to be switched transparently, without the application perceiving the change.
One aspect of the present invention provides a cache management method, where the cache includes one or more cache components, and the cache management method includes:
receiving a data request from a client, wherein the data request comprises a first parameter combination;
sending a cache instance metadata retrieval request to a cache access management platform according to the first parameter combination;
receiving cache instance metadata returned by the cache access management platform in response to the cache instance metadata retrieval request, the cache instance metadata including cache component type information;
and constructing a cache request according to the cache instance metadata, and acquiring data from the one or more cache components based on the cache request.
Another aspect of the present invention provides a cache management apparatus, including:
a data request receiving module configured to receive a data request from a client, the data request including a first parameter combination;
the metadata retrieval request module is configured to send a cache instance metadata retrieval request to the cache access management platform according to the first parameter combination;
a metadata receiving module configured to receive cache instance metadata returned by the cache access management platform in response to the cache instance metadata retrieval request, the cache instance metadata including cache component type information;
and the cache request module is configured to construct a cache request according to the cache instance metadata and acquire data from the one or more cache components based on the cache request.
Specifically, the functions of the above modules may be implemented by one or more caching agents included in the unified cache access service, where the one or more caching agents are configured to:
receiving the data request from the client;
sending a cache instance metadata retrieval request to a cache access management platform;
receiving cache instance metadata returned by the cache access management platform in response to the cache instance metadata retrieval request; and
and constructing a cache request according to the cache instance metadata, and acquiring data from the one or more cache components based on the cache request.
Yet another aspect of the invention provides a computing device comprising:
a memory for storing computer executable instructions; and
a processor for executing the computer-executable instructions so as to perform the cache management method provided by the invention.
Yet another aspect of the present invention provides a non-transitory computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a computer, cause the computer to perform a cache management method provided by the present invention.
With the unified cache access service provided by the invention, a development team does not need to know the usage details of specific cache components or bear the cost of learning new ones; it can simply handle HTTP requests and responses using its familiar technology stack, making the cache convenient to use.
In addition, once introduced, the unified cache access service dynamically discovers cache components according to their identifiers, shields cache component details, and performs unified cache management. Specifically, the unified cache access service of the present invention has the following advantages:
each application service component uses the cache with a simple HTTP request, without concern for cache component details;
all cache components are managed uniformly: cache avalanche and cache penetration are avoided, cache components can be switched promptly when a downstream component becomes unavailable, and the relevant components can be degraded if necessary;
the unified access layer reports tracking events (buried points) uniformly, so the system can monitor the health of the cache service and provide a data basis for big data analysis.
Drawings
FIG. 1 is a schematic diagram of a distributed caching system architecture;
FIG. 2 is a flow chart of a cache management method of the present invention;
FIG. 3 is a schematic diagram of a cache management architecture of the present invention suitable for an e-commerce application scenario;
FIG. 4 is a schematic diagram of a cache management system according to the present invention;
fig. 5 is a schematic diagram of a cache access mode of the cache management system according to the present invention.
Detailed Description
So that the principles and operation of the present invention may be readily understood, the invention is described below with reference to specific embodiments illustrated in the appended drawings. It will be understood by those skilled in the art, however, that the invention is not limited to these exemplary embodiments, and that equivalent modifications achieving substantially the same technical purpose and effect can be made without inventive effort.
The unified cache access method of the invention is applicable to various application scenarios requiring fast data access through caching, including online shopping, warehouse management, news, and video services. The principles and implementation of the invention are described below mainly in the context of an online shopping (e-commerce) scenario.
As shown in the exemplary embodiments in the present application, the unified cache access method of the present invention is particularly suitable for distributed cache systems, but does not exclude the application of the present invention in other types of cache systems.
FIG. 1 illustrates a typical distributed cache system architecture. Such distributed caching systems are common in e-commerce scenarios, and other application scenarios where a large number of users need to be supported to access data from multiple different geographic locations.
In a distributed cache system, data and files are typically stored in databases spread over primary storage devices in different geographic locations, including database servers and file servers. Corresponding to these primary storage devices, cache devices distributed across regions store frequently used data and data recently called by applications. On receiving a data request, the application first queries the local cache; on a local hit it reads the data from the local cache and returns it to the user. On a local miss the application requests the data from the corresponding distributed cache server, which generally requires cache middleware developed for the cache component on that server. On a distributed cache hit the data is read from the distributed cache and returned to the user. On a distributed cache miss the application must fall back to the configured back-source interface and request the data from the corresponding primary storage device so it can be returned to the user.
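The tiered lookup described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the three tiers are stood in for by in-memory dicts, and the keys and values are invented for the example.

```python
# Tiered lookup: local cache -> distributed cache -> back-source to primary
# storage. All three stores are in-memory dicts standing in for the real
# components; keys/values are illustrative only.
local_cache = {}
distributed_cache = {"sku:42": "widget"}
primary_store = {"sku:42": "widget", "sku:7": "gadget"}

def lookup(key):
    if key in local_cache:                    # local cache hit
        return local_cache[key]
    if key in distributed_cache:              # distributed cache hit
        local_cache[key] = distributed_cache[key]
        return local_cache[key]
    value = primary_store.get(key)            # miss: back-source to primary
    if value is not None:                     # populate caches for next time
        distributed_cache[key] = value
        local_cache[key] = value
    return value

v1 = lookup("sku:7")   # misses both caches, back-sources from primary storage
v2 = lookup("sku:7")   # now served from the local cache
```

A real deployment would replace the dicts with a process-local cache, a distributed cache server, and the registered back-source interface, but the control flow is the same.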
Developing and using such distributed cache systems involves certain difficulties. On one hand, different systems, or even different devices or functions within the same system, are implemented in different development languages, so interfacing code or frameworks with similar logic must be written separately per language. On the other hand, such systems often employ a variety of cache components, and the combination of multiple components and multiple languages burdens development. Rapid iteration of computing technology and explosive growth in data demand mean the cache system may need upgrading on ever shorter cycles, for example adding new cache components on demand. Growing demand and a changing cache system can leave the existing cache management service unable to exploit the extra storage and processing capacity an upgrade provides, so user demand goes unmet and user experience suffers.
In view of this, the present invention introduces a Unified Cache access Service (UCS) into the existing cache management system to shield the interfacing details of specific cache components during development and use, so that specific cache components can be switched without the application perceiving it.
Fig. 2 is a flow chart of the operation of the unified cache access service of the present invention. As shown in fig. 2, the cache management system first receives, at the unified cache access service, a data request containing a first parameter combination from a client (step S1). The data request may come directly from the client, or may be generated by an application service processing a client instruction or request. After receiving the data request, the unified cache access service obtains the first parameter combination by parsing the request; the parsing may also be performed by an application service. The first parameter combination may include an application identifier and an application sub-module identifier corresponding to the data request, and uniquely corresponds to a set of cache instance metadata stored in a unified Cache access management Platform (UCP). Each set of cache instance metadata includes cache component type information used to determine which cache component the data request corresponds to. Having obtained the first parameter combination, the unified cache access service sends a cache instance metadata retrieval request to the unified cache access management platform according to that combination (step S2). In response, the platform selects the corresponding cache instance metadata from those it stores, according to the first parameter combination, and returns it to the unified cache access service.
After receiving the cache instance metadata returned by the unified cache access management platform (step S3), the unified cache access service constructs a cache request according to the metadata and obtains data from one or more cache components based on that request (step S4).
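Steps S1 through S4 can be sketched as below. This is a hedged, self-contained illustration: the UCP is stubbed as an in-memory dict, the cache component is a dict standing in for a Redis cluster, and the field names (`cacheType`, `backSource`) are assumptions, not taken from the patent.

```python
# Stub UCP: maps the first parameter combination (service id, prefix) to a
# set of cache instance metadata. Field names are illustrative assumptions.
UCP_METADATA = {
    ("cart", "order"): {
        "cacheType": "redis",
        "backSource": "http://192.168.10.125:8080/cart/order/getOrderId{}",
    },
}

# Stub cache component (stands in for a Redis cluster).
REDIS_STUB = {"cart:order:20210101888888": {"items": 3}}

def fetch_instance_metadata(service_id, prefix):
    """Steps S2/S3: retrieve cache instance metadata from the UCP by the
    first parameter combination (service id + prefix)."""
    meta = UCP_METADATA.get((service_id, prefix))
    if meta is None:
        raise KeyError("no metadata registered for this combination")
    return meta

def handle_data_request(service_id, prefix, key):
    """Step S1: receive the parsed data request; step S4: construct a cache
    request from the metadata and read the named cache component."""
    meta = fetch_instance_metadata(service_id, prefix)
    cache_key = f"{service_id}:{prefix}:{key}"
    if meta["cacheType"] == "redis":
        return REDIS_STUB.get(cache_key)  # miss -> None -> back-source path
    raise NotImplementedError(meta["cacheType"])

result = handle_data_request("cart", "order", "20210101888888")
```

In a real deployment the metadata lookup would be an HTTP call to the UCP and the read would go through the cache component's own client, but the dispatch on cache component type is the essential point.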
In a specific application, before performing cache reads, the unified cache access service may first send a registration statement to the unified cache access management platform; the registration statement includes the first parameter combination, the back-source interface, and the cache component type information. When cache instance metadata retrieval fails or a cache request misses, the application service can fetch the data from origin via the back-source interface. Before back-sourcing, the cache management method may further acquire a distributed lock component, to prevent parallel back-source requests from putting instantaneous high-concurrency pressure on the back-source service and causing a service avalanche.
Fig. 3 shows the method of the invention applied to an e-commerce scenario. In e-commerce, a user typically queries goods and places purchase orders through an external request initiator, i.e. a client, such as an e-commerce website, a mobile single-page website (e.g. an HTML5-based web page), or an e-commerce App. After receiving the user's order request, the background system performs several functional processes, including risk control, network security control, user information query, order processing, and coupon application. These processes are typically completed by, or based on data within, the corresponding subsystems. For example: the risk control subsystem supplies data on whether the user placing the order is blacklisted, shows malicious behavior, or is subject to ordering restrictions; the network security control subsystem supplies data on whether the request is from a normal user or carries a risk of network attack; the user information query subsystem supplies information such as whether the user is properly logged in and the account's available balance; the order subsystem supplies data such as the user's repeated-ordering limits; and the coupon subsystem supplies information such as the validity of the coupon attached to the order.
Each subsystem's data resides on one or more primary storage devices, with frequently and recently used data cached by the corresponding cache components. If the application service queried each cache component directly, it would rely on several different pieces of cache middleware, each developed separately per cache component; and depending on the application's development languages, multiple language versions of the middleware might be needed for each component. Development cost for this part of the system therefore grows in proportion to the number of cache middleware clients and development languages, the system's storage and processing capacity cannot be fully utilized, response speed drops, and the user's shopping experience suffers.
To reduce system development cost and improve user experience, the invention inserts a cache middle layer between the application service and the cache components; through it, the application service can selectively query the cache component of the relevant subsystem for each processing task, without resorting to multiple different cache middleware clients.
The following explains a specific implementation manner of the cache middle layer in conjunction with the structural diagram of the cache management system of the present invention shown in fig. 4.
The cache management system consists of a client, an application service, and a background data processing subsystem. In the e-commerce scenario, the client may be an e-commerce website for querying goods and placing orders, a mobile single-page website (e.g. an HTML5-based web page), or an e-commerce App, as shown in fig. 3. The application service is a functional module, or combination of modules, that receives and processes client requests; in particular embodiments it may be, for example, a shopping cart settlement program. The background data processing subsystem comprises a primary storage database (not shown in the figure), a data cache structure, and a query platform/report system/data analysis system.
Wherein the data caching structure comprises a plurality of caching components, such as Redis clusters, Memcached, Pika, and other types of caching components. Each cache component is used for caching data of functional subsystems such as risk control, network security control, order processing, user information query, coupon application and the like. The cache component and each functional subsystem may have any corresponding relationship, for example, data of any one or more functional subsystems may be cached by using a Redis cluster, and data of any one functional subsystem may also be cached in any one or more cache components.
The access mechanism of the data cache structure, i.e. the cache middle layer, comprises the unified cache access service UCS and the unified cache access management platform UCP. The UCS receives requests from the application service, fetches the corresponding data from the corresponding cache component, and returns it to the application service. The UCS comprises one or more caching agents; each caching agent is an application service instance that proxies an application service's cache requests: it queries the UCP based on the received cache request, selects the corresponding cache component based on the UCP's reply, reads the data from that component, and returns it to the application service. The UCP manages cache access services: before an application service goes live it records the service registration statement the application submits, establishing the correspondence between application requests in various formats and cache components, and it answers UCS queries with the cache component information for the related request.
Before going live and receiving concrete service requests from clients, the application service first sends a service registration statement to the UCP (unified cache access management platform). The registration statement comprises a service id, a prefix, a back-source interface, and a cache component type. The service id identifies the accessing application; the prefix identifies the accessing application's sub-module; the back-source interface is the accessing application's cache back-source interface, used to fetch data from origin on a cache miss; and the cache component type names the specific cache component serving the request, such as Redis, Pika, or Memcached.
In a specific application example of the e-commerce scenario, the sending of the registration statement by the application service to the UCP may be in the following format:
service id: cart
Prefix: order
Back-source interface: http://192.168.10.125:8080/cart/order/getOrderId{}
Type of cache component: redis cluster
The service id indicates that the service type of the data request is shopping cart settlement; the prefix indicates the order module the request belongs to; the back-source interface gives the database query path used to fetch the data from the system's primary storage when the cache query misses; and the cache component type indicates the cache component serving the request.
The content of the declaration, i.e. the metadata, is then registered: after receiving the registration statement, the unified cache access management platform UCP persists the statement content to a storage component through its cache metadata management module, registering the service id, prefix, back-source interface, and cache component type.
The combination of service id and prefix is globally unique within the unified cache access management platform UCP, and uniquely determines the cache component and binds the corresponding back-source interface. For example, service id cart + prefix order denotes the shopping cart system's ordering module; service id sales + prefix TMS (transport management system) denotes a logistics module; service id DN + prefix WMS (warehouse management system) denotes a warehousing module; and so on. The back-source interface and cache component type are set according to where the related data is stored and cached. According to the cache component type in the statement, a caching agent in the unified cache access service UCS is set up with the corresponding pre-built access function; for example, if a Redis component type is declared, a Redis access function must be pre-built into the UCS.
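The registration step can be sketched as below. This is an illustrative stand-in for the UCP's metadata store, assuming an in-memory register keyed by the globally unique (service id, prefix) combination; the function name and field names are invented for the example.

```python
# Hypothetical UCP-side register of service registration statements,
# keyed by the globally unique (service id, prefix) combination.
registry = {}

def register_service(service_id, prefix, back_source, cache_type):
    """Persist one registration statement; reject duplicate combinations,
    since (service id, prefix) must be globally unique in the UCP."""
    key = (service_id, prefix)
    if key in registry:
        raise ValueError(f"{key} already registered")
    registry[key] = {"backSource": back_source, "cacheType": cache_type}

# The shopping-cart example from the description above.
register_service(
    "cart", "order",
    "http://192.168.10.125:8080/cart/order/getOrderId{}",
    "redis-cluster",
)
```

The real UCP persists this metadata through its cache metadata management module rather than a dict, but the uniqueness constraint on the combination is the same.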
After registration is complete, the client, in response to a user operation, generates and sends a request in a preset format to the application service; the application service parses the request and forwards the parsed request to the UCS; and the UCS queries the UCP to determine the corresponding cache component and obtains the data through it. The specific operation process is shown in fig. 5.
Referring again to FIG. 4, the user queries and selects goods via a client, such as an e-commerce website, a mobile single-page website (e.g. an HTML5-based web page), or an e-commerce App, and either adds the selected goods to the shopping cart to purchase later together with other items, or chooses to purchase them directly. After receiving the user's settlement or direct-purchase request, the client generates an HTTP order request and sends it to the application service. The order request may take a form such as /ucs?bizId=cart&prefix=order&key=20210101888888.
As shown in figs. 4 and 5, after receiving the HTTP shopping cart order request from the client, the application service parses it, determines the service id, prefix, and key, and forwards the parsed request to the unified cache access service UCS. The parsing of the client request may also be performed by the UCS itself. Once the cache middle layer is in place, the caller obtains the corresponding cache entry simply by carrying the service id, prefix, and key in the HTTP request parameters, with no knowledge of the actual cache storage details. In the example shown in FIG. 5, the shopping cart order request carries service id cart, prefix order, and key 20210101888888.
After receiving the service request, the UCS fetches the metadata from the UCP using the service id and prefix parameter values. The metadata content is: service id: cart; prefix: order; back-source interface: http://192.168.10.125:8080/cart/order/getOrderId{}; cache component type: Redis cluster information. With this metadata, the corresponding caching agent in the UCS constructs a Redis GET request against the Redis cluster named in the cache component type, using the key cart:order:20210101888888, to obtain the cached data.
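The key construction for that Redis GET can be sketched as follows. The "serviceId:prefix:key" layout follows the example key cart:order:20210101888888 in the description; the cluster address in the comment is the illustrative one from the metadata, and the redis-py call is shown only as a comment, not executed here.

```python
def build_cache_key(service_id, prefix, key):
    """Key layout 'serviceId:prefix:key', as in cart:order:20210101888888."""
    return f"{service_id}:{prefix}:{key}"

cache_key = build_cache_key("cart", "order", "20210101888888")

# Against a real cluster, with the redis-py client (illustrative, not run):
#   import redis
#   r = redis.Redis(host="192.168.10.125", port=6379)
#   value = r.get(cache_key)
```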
If the cache lookup misses, the application service must perform a back-source operation to obtain the required data from the system's primary storage. The back-source operation may be initiated by the application service that issued the cache query, or by another application service in response to the cache-miss event. Before back-sourcing, the unified cache access service UCS must acquire the distributed lock component's lock cart:order:20210101888888, to prevent parallel back-source requests from putting instantaneous high-concurrency pressure on the back-source service and causing a service avalanche.
Once the distributed lock is acquired, the unified cache access service UCS constructs the back-source request by filling the key 20210101888888 into the registered back-source interface http://192.168.10.125:8080/cart/order/getOrderId{}, so the application service can obtain the required data from the corresponding primary storage device and write it into the Redis cluster, speeding up subsequent access to that data.
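The lock-then-back-source miss path can be sketched as below. The lock here is an in-memory stand-in for a Redis SET NX style distributed lock, and the origin response is a placeholder dict; both are assumptions made for the illustration, not the patent's implementation.

```python
import threading

# In-memory stand-in for a distributed lock service (e.g. Redis SET key NX EX).
_locks = set()
_guard = threading.Lock()

def try_acquire(lock_name):
    """Stand-in for: SET lock_name token NX EX ttl. True if we got the lock."""
    with _guard:
        if lock_name in _locks:
            return False
        _locks.add(lock_name)
        return True

def release(lock_name):
    with _guard:
        _locks.discard(lock_name)

def back_source(service_id, prefix, key):
    """Serialize back-source requests so concurrent misses on the same key
    do not stampede the origin service (avoiding a service avalanche)."""
    lock_name = f"{service_id}:{prefix}:{key}"
    if not try_acquire(lock_name):
        return None  # another worker is already back-sourcing; wait/retry
    try:
        # Here a real UCS would call the registered back-source interface,
        # e.g. http://192.168.10.125:8080/cart/order/getOrderId{} with the
        # key substituted, then write the result into the Redis cluster.
        data = {"orderId": key}  # placeholder for the origin response
        return data
    finally:
        release(lock_name)
```

One reasonable design choice, mirrored here, is to release the lock in a `finally` block so a failed origin call cannot leave the key permanently locked; a real Redis lock would rely on the TTL for the same guarantee.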
Another aspect of the invention also includes a data analysis function based on the caching service.
Still referring to fig. 4, after a query passes through the unified cache access service, the UCS may report the cache and back-to-source query information. After being consumed, cleaned, and stored via a message queue, the reported information may be called by a query platform, a report system, or a data analysis system responsible for monitoring the working state of the cache. It may be statistically analyzed with big-data analysis tools, or used to generate reports for analyzing cache hit rates, monitoring the response time of the back-to-source interface, monitoring the health of the cache components, and the like. The message-queue consumption module may be developed with Spark or Flink components, or with any other component capable of consuming the message queue. The buried-point data may be stored in Hive, or in any other distributed storage component. The reporting here may occur in real time after each query, at fixed intervals, or when triggered by a preset event. An analysis tool may render the big-data analysis results into a unified data view for convenient inspection by system administrators. The specific operations of analysis, report generation, data-view generation, and the like may be executed by a third-party platform or by the UCP. As can be seen from the above description, the present invention unifies the cache access layer and the reporting burying point, so that the UCP can perform data analysis conveniently and in a timely manner.
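The reporting step can be sketched as follows: after each query the UCS emits a small event (hit or miss, latency) onto a message queue, from which hit rates and latency statistics can later be computed. A list stands in for the queue, and the event field names are assumptions for illustration.

```python
# Hypothetical sketch of per-query reporting to a message queue.
import json
import time

REPORT_QUEUE = []  # in-memory stand-in for the message queue

def report_query(service_id, prefix, key, hit, latency_ms):
    """Emit one buried-point event describing a cache query."""
    event = {
        "service_id": service_id,
        "prefix": prefix,
        "key": key,
        "hit": hit,
        "latency_ms": latency_ms,
        "ts": int(time.time()),
    }
    REPORT_QUEUE.append(json.dumps(event))

# One miss event, as in the back-to-source example above.
report_query("cart", "order", "20210101888888", hit=False, latency_ms=42)

# A downstream consumer (e.g. a Spark/Flink job in a real system) could
# compute aggregate statistics such as the cache hit rate:
hit_rate = sum(json.loads(e)["hit"] for e in REPORT_QUEUE) / len(REPORT_QUEUE)
```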
As can be seen from the above description, after the cache middle layer is introduced, on one hand the unified cache access service can dynamically discover the cache component according to the identifier, thereby reading data from the cache. This enables individual systems to use the cache with a simple http request, without concern for cache component details, and to automatically go back to the source on a cache miss. Since the access service's dynamic discovery allows cache components to be replaced uniformly, there is no need to suspend service in order to upgrade a cache component. On another hand, functions such as uniformly managing all cache components, throttling back-to-source requests with distributed locks to avoid cache avalanche or cache penetration, and switching or even degrading cache components when a downstream cache component is unavailable can all be realized. On yet another hand, after the cache middle layer is introduced, the work of repeatedly developing corresponding cache component clients for different cache components in different development languages can be omitted, reducing system development cost. Finally, unifying the reporting of data burying points makes the global execution situation easy to grasp.
Although the present application describes the above exemplary methods as a series of steps or acts, it should be understood that the present invention is not limited by the order of the steps or acts shown herein. One or more of the above steps may be performed in a different order, and some steps may be performed concurrently with others. For example, the plurality of cache agents under the UCS may be set up after the UCS completes its registration declaration, or may be set up in advance. Moreover, not all illustrated steps are necessarily required to implement a method in accordance with the present invention. Some of the operations may be performed in advance, or when the present invention is implemented. The steps of buried-point reporting and data analysis are not essential to the present invention and can be omitted where appropriate. The cache component type is likewise not limited to Redis, Memcached, Pika, or MongoDB, and may be any of a variety of components usable for distributed caching.
According to another aspect of the present invention, there is also provided a cache management apparatus, including a data request receiving module, a metadata retrieval request module, a metadata receiving module, and a cache request module. The respective modules may be implemented in the form of software codes, or may be implemented in the form of a combination of hardware and software. Wherein the data request receiving module is configured to receive a data request from a client, the data request containing a first parameter combination; the metadata retrieval request module is configured to send a cache instance metadata retrieval request to a cache access management platform according to the first parameter combination; the metadata receiving module is configured to receive cache instance metadata returned by the cache access management platform in response to the cache instance metadata retrieval request, wherein the cache instance metadata comprises cache component type information; the cache request module is configured to construct a cache request from the cache instance metadata and retrieve data from the one or more cache components based on the cache request.
Specifically, the functions of the above modules may be implemented by one or more cache agents included in the UCS, where each cache agent is an application service instance used for proxying the cache requests of an application service. The one or more cache agents are configured to implement the functions of the modules, namely: receiving the data request from the client; sending a cache instance metadata retrieval request to the cache access management platform; receiving the cache instance metadata returned by the cache access management platform in response to the cache instance metadata retrieval request; constructing a cache request according to the cache instance metadata; and acquiring data from the one or more cache components based on the cache request.
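The four module functions listed above can be sketched together as one caching agent. This is a minimal in-memory illustration: the class name, the dict-based stand-ins for the UCP and the cache component, and the key format are all assumptions, not the actual implementation.

```python
# Hypothetical sketch tying the four modules together in one cache agent:
# receive request -> retrieve metadata -> build cache request -> read cache.
class CacheAgent:
    def __init__(self, metadata_platform: dict, cache: dict):
        self._ucp = metadata_platform   # stands in for the UCP
        self._cache = cache             # stands in for the cache component

    def handle(self, service_id: str, prefix: str, key: str):
        """Proxy one data request on behalf of an application service."""
        meta = self._ucp[(service_id, prefix)]       # metadata retrieval
        cache_key = f"{service_id}:{prefix}:{key}"   # cache request construction
        return meta, self._cache.get(cache_key)      # cache read

# Illustrative wiring with the fig. 5 example values.
ucp = {("cart", "order"): {"cache_type": "redis"}}
cache = {"cart:order:20210101888888": '{"items": 3}'}
agent = CacheAgent(ucp, cache)
meta, value = agent.handle("cart", "order", "20210101888888")
```

Injecting the metadata platform and cache as collaborators mirrors the description's point that the modules may be realized in software, hardware, or a combination.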
According to yet another aspect of the invention, a computing device is also provided. The computing device may include one or more processors and a memory. The memory stores computer-executable instructions that, when executed by the processor, cause the computing device to perform any of the embodiments of the unified cache access method described above. The processor may be any suitable processing device, such as a microprocessor, a microcontroller, an integrated circuit, or other suitable processing device. The memory may include any suitable computer-readable medium, including but not limited to a non-transitory computer-readable medium, random access memory (RAM), read-only memory (ROM), a hard disk, flash memory, or other memory devices. The memory may store computer-executable instructions executable by the processor to cause the computing device to perform any of the embodiments of the unified cache access method described above. The memory may also store data. In embodiments of the present invention, the processor may execute the various modules included in the instructions to implement any of the embodiments of the unified cache access method described above.
According to yet another aspect of the present invention, there is also provided a non-transitory computer-readable storage medium. The storage medium has stored thereon computer-executable instructions that, when executed by a computer, cause the computer to perform any of the embodiments of the unified cache access method described above.
It should be understood that the term "module" refers to computer logic for providing the desired functionality. Accordingly, a module may be implemented by hardware, dedicated circuitry, firmware and/or software, and combinations thereof. In one embodiment, a module is a program code file stored on a storage device, loaded into memory, and executed by a processor, or a computer program product (e.g., computer executable instructions) that may be stored in a tangible computer readable storage medium such as RAM, hard disk, or optical or magnetic media.
While certain embodiments and features of the present invention have been described above, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation to the teachings of the invention without departing from the essential scope thereof. Therefore, the invention is not to be limited by the specific embodiments disclosed herein.

Claims (11)

1. A method of cache management, the cache including one or more cache components, the method comprising:
receiving a data request from a client, wherein the data request comprises a first parameter combination;
sending a cache instance metadata retrieval request to a cache access management platform according to the first parameter combination;
receiving cache instance metadata returned by the cache access management platform in response to the cache instance metadata retrieval request, the cache instance metadata including cache component type information;
and constructing a cache request according to the cache instance metadata, and acquiring data from the one or more cache components based on the cache request.
2. The method of claim 1, wherein the first combination of parameters uniquely corresponds to a set of cache instance metadata in the cache access management platform.
3. The method of claim 2, wherein the first combination of parameters includes an application identification corresponding to the data request and a corresponding application sub-module identification.
4. The method of claim 3, comprising sending a registration statement to the cache access management platform, the registration statement comprising the first parameter combination, a back-to-source interface, and a cache component type.
5. The method of claim 4, comprising:
and if acquiring data from the one or more cache components based on the cache request is unsuccessful, acquiring the data back to the source according to the back-to-source interface.
6. The method of claim 5, further comprising, prior to said acquiring data back to the source according to the back-to-source interface, acquiring and using a distributed lock component to avoid concurrent back-to-source requests.
7. The method of claim 6, wherein prior to said issuing a cache instance metadata retrieval request to a cache access management platform in accordance with said first combination of parameters, said cache management method further comprises:
and analyzing the data request from the client to obtain the first parameter combination.
8. A cache management apparatus, characterized in that the cache management apparatus comprises:
a data request receiving module configured to receive a data request from a client, the data request including a first parameter combination;
the metadata retrieval request module is configured to send a cache instance metadata retrieval request to the cache access management platform according to the first parameter combination;
a metadata receiving module configured to receive cache instance metadata returned by the cache access management platform in response to the cache instance metadata retrieval request, the cache instance metadata including cache component type information;
and the cache request module is configured to construct a cache request according to the cache instance metadata and acquire data from the one or more cache components based on the cache request.
9. The apparatus of claim 8, the apparatus comprising one or more caching agents configured to:
receiving the data request from the client;
sending a cache instance metadata retrieval request to a cache access management platform;
receiving cache instance metadata returned by the cache access management platform in response to the cache instance metadata retrieval request; and
and constructing a cache request according to the cache instance metadata, and acquiring data from the one or more cache components based on the cache request.
10. A computing device, comprising:
a memory for storing computer executable instructions; and
a processor for executing the computer-executable instructions to perform the method of any of claims 1 to 7.
11. A non-transitory computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a computer, cause the computer to perform the method of any of claims 1 to 7.
CN202111357841.4A 2021-11-16 2021-11-16 Cache management method and device, computing equipment and storage medium Pending CN114048409A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111357841.4A CN114048409A (en) 2021-11-16 2021-11-16 Cache management method and device, computing equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114048409A true CN114048409A (en) 2022-02-15


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115277838A (en) * 2022-07-28 2022-11-01 天翼云科技有限公司 Cloud cache database service method, device, equipment and readable storage medium
CN115277838B (en) * 2022-07-28 2024-01-02 天翼云科技有限公司 Cloud cache database service method, device, equipment and readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination