CN113360528A - Data query method and device based on multi-level cache

Publication number: CN113360528A
Application number: CN202010152491.7A
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Pending
Inventors: 张亚琴, 李世杰, 刘强, 孟强
Assignees: Beijing Jingdong Century Trading Co Ltd; Beijing Wodong Tianjun Information Technology Co Ltd
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Wodong Tianjun Information Technology Co Ltd
Prior art keywords: inventory, cache, state data, inventory state, query

Classifications

    • G06F 16/24552 — Database cache management (G Physics; G06 Computing, calculating or counting; G06F Electric digital data processing; G06F 16/00 Information retrieval and database structures; G06F 16/24 Querying; G06F 16/2455 Query execution)
    • G06F 16/2471 — Distributed queries (G06F 16/2458 Special types of queries, e.g. statistical, fuzzy or distributed queries)


Abstract

The invention discloses a data query method and device based on a multi-level cache, relating to the field of computer technology. One embodiment of the method comprises: receiving an inventory query request; determining whether inventory state data corresponding to the inventory query request exists in a cache, the cache comprising a first cache, a second cache, and a third cache, where the first cache holds the inventory state data of preset callers, the second cache holds the inventory state data of out-of-stock items, and the third cache holds the inventory state data of hot items; if such data exists, using the inventory state data as result data; otherwise, acquiring basic data corresponding to the inventory query request from a database, determining the inventory state data from the basic data, and using it as the result data. The method can improve single-machine throughput and system concurrency under high-concurrency, high-traffic access, and is stable and reliable, thereby improving the user experience and saving cost.

Description

Data query method and device based on multi-level cache
Technical Field
The invention relates to the field of computer technology, and in particular to a data query method and apparatus based on a multi-level cache.
Background
Under high-concurrency, high-traffic access, a system may suffer from long response times or even brief unavailability, which degrades the user experience. The prior art generally addresses these defects through hardware scaling, distribution, or hierarchical caching, but these methods suffer from problems such as high cost, poor stability, and weak single-machine throughput and system concurrency under high-concurrency, high-traffic access.
Disclosure of Invention
In view of this, embodiments of the present invention provide a data query method and apparatus based on a multi-level cache, which can improve single-machine throughput and system concurrency under high-concurrency, high-traffic access, and are stable and reliable, thereby improving the user experience and saving cost.
In order to achieve the above object, according to an aspect of the embodiments of the present invention, there is provided a data query method based on a multi-level cache, including:
receiving an inventory query request;
determining whether inventory state data corresponding to the inventory query request exists in a cache, the cache comprising a first cache, a second cache, and a third cache, where the first cache is used to cache the inventory state data of preset callers, the second cache is used to cache the inventory state data of out-of-stock items, and the third cache is used to cache the inventory state data of hot items;
if so, using the inventory state data as result data; otherwise, acquiring basic data corresponding to the inventory query request from a database, determining the inventory state data from the basic data, and using the inventory state data as the result data.
Optionally, determining whether the inventory state data corresponding to the inventory query request exists in the cache comprises:
determining whether inventory state data corresponding to the inventory query request exists in the first cache; if so, using the inventory state data as the result data; otherwise, determining whether it exists in the second cache;
if so, using the inventory state data as the result data; otherwise, determining whether it exists in the third cache;
if so, using the inventory state data as the result data; otherwise, concluding that the inventory state data corresponding to the inventory query request does not exist in the cache.
Optionally, the first cache comprises a first local cache unit and a distributed cache unit;
determining whether the inventory state data corresponding to the inventory query request exists in the first cache comprises: determining whether the data exists in the first local cache unit; if so, using it as the result data; otherwise, determining whether it exists in the distributed cache unit; if so, writing the inventory state data into the first local cache unit and using it as the result data; otherwise, concluding that the inventory state data corresponding to the inventory query request does not exist in the first cache.
Optionally, the first local cache unit comprises a preset caller directory and cache records of the query objects corresponding to the preset callers; the cache records are stored as key-value pairs, where the key is a query object identifier and the value is the inventory state data corresponding to that query object;
determining whether the inventory state data corresponding to the inventory query request exists in the first local cache unit comprises: determining whether the caller identifier of the inventory query request exists in the preset caller directory; if so, acquiring from the cache records the inventory state data corresponding to the query object identifier of the inventory query request; otherwise, concluding that the inventory state data corresponding to the inventory query request does not exist in the first local cache unit.
Optionally, the third cache comprises a second local cache unit used to store hot item identifiers and a third local cache unit used to store the inventory state data of hot items;
determining whether the inventory state data corresponding to the inventory query request exists in the third cache comprises: determining whether the item identifier of the inventory query request exists in the second local cache unit; if so, acquiring from the third local cache unit the inventory state data corresponding to that item identifier; otherwise, concluding that the inventory state data corresponding to the inventory query request does not exist in the hot-spot cache.
Optionally, the method of the embodiment of the present invention further comprises: if the third local cache unit does not contain inventory state data corresponding to the item identifier of the inventory query request, writing a placeholder into the third local cache unit, then loading the inventory state data corresponding to the inventory query request from a database using an asynchronous thread, using the obtained inventory state data as the result data, and writing it back to the third local cache unit.
Optionally, the method of the embodiment of the present invention further comprises: monitoring the hot items in the second local cache unit, and deleting any hot item from the second local cache unit when it no longer satisfies the hot-spot condition.
Optionally, after the inventory state data corresponding to the inventory query request is acquired from the database, the method further comprises: writing the acquired inventory state data into the first cache, and, when the acquired inventory state data indicates that the requested item is out of stock, writing it into the second cache as well.
Optionally, the query object identifier comprises the item identifier, channel identifier, and address identifier of the inventory query request.
According to a second aspect of the embodiments of the present invention, there is provided a data query apparatus based on a multi-level cache, including:
a request receiving module, used to receive an inventory query request;
a cache query module, used to determine whether inventory state data corresponding to the inventory query request exists in a cache, the cache comprising a first cache, a second cache, and a third cache, where the first cache is used to cache the inventory state data of preset callers, the second cache is used to cache the inventory state data of out-of-stock items, and the third cache is used to cache the inventory state data of hot items;
a result determining module, which uses the inventory state data as the result data if such data exists; otherwise, it acquires basic data corresponding to the inventory query request from a database, determines the inventory state data from the basic data, and uses it as the result data.
Optionally, the cache query module determining whether inventory state data corresponding to the inventory query request exists in a cache comprises:
determining whether the data exists in the first cache; if so, using the inventory state data as the result data; otherwise, determining whether it exists in the second cache;
if so, using the inventory state data as the result data; otherwise, determining whether it exists in the third cache;
if so, using the inventory state data as the result data; otherwise, concluding that the inventory state data corresponding to the inventory query request does not exist in the cache.
Optionally, the first cache comprises a first local cache unit and a distributed cache unit;
the cache query module determining whether the inventory state data exists in the first cache comprises: determining whether it exists in the first local cache unit; if so, using it as the result data; otherwise, determining whether it exists in the distributed cache unit; if so, writing the inventory state data into the first local cache unit and using it as the result data; otherwise, concluding that the inventory state data corresponding to the inventory query request does not exist in the first cache.
Optionally, the first local cache unit comprises a preset caller directory and cache records of the query objects corresponding to the preset callers; the cache records are stored as key-value pairs, where the key is a query object identifier and the value is the inventory state data corresponding to that query object;
the cache query module determining whether the inventory state data exists in the first local cache unit comprises: determining whether the caller identifier of the inventory query request exists in the preset caller directory; if so, acquiring from the cache records the inventory state data corresponding to the query object identifier of the inventory query request; otherwise, concluding that the data does not exist in the first local cache unit.
Optionally, the third cache comprises a second local cache unit used to store hot item identifiers and a third local cache unit used to store the inventory state data of hot items;
the cache query module determining whether the inventory state data exists in the third cache comprises: determining whether the item identifier of the inventory query request exists in the second local cache unit; if so, acquiring from the third local cache unit the inventory state data corresponding to that item identifier; otherwise, concluding that the data does not exist in the hot-spot cache.
Optionally, the cache query module is further configured to: if the third local cache unit does not contain inventory state data corresponding to the item identifier of the inventory query request, write a placeholder into the third local cache unit, then load the inventory state data corresponding to the inventory query request from a database using an asynchronous thread, use the obtained inventory state data as the result data, and write it back to the third local cache unit.
Optionally, the cache query module is further configured to: monitor the hot items in the second local cache unit, and delete any hot item from the second local cache unit when it no longer satisfies the hot-spot condition.
Optionally, the result determining module is further configured to: after acquiring the inventory state data corresponding to the inventory query request from a database, write the acquired inventory state data into the first cache, and, when the acquired inventory state data indicates that the requested item is out of stock, write it into the second cache as well.
Optionally, the query object identifier comprises the item identifier, channel identifier, and address identifier of the inventory query request.
Optionally, the number of levels of the cache is four or more.
According to a third aspect of the embodiments of the present invention, there is provided an electronic device for data query based on multi-level cache, including:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method provided by the first aspect of the embodiments of the present invention.
According to a fourth aspect of embodiments of the present invention, there is provided a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the method provided by the first aspect of embodiments of the present invention.
One embodiment of the above invention has the following advantage or beneficial effect: by adopting a multi-level cache mechanism comprising a first cache, a second cache, and a third cache, single-machine throughput and system concurrency can be improved under high-concurrency, high-traffic access, and the scheme is stable and reliable, thereby improving the user experience and saving cost.
Further effects of the above optional embodiments will be described below in connection with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram illustrating a multi-level cache-based data query method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a main flow of a data query method based on multi-level cache according to an alternative embodiment of the present invention;
FIG. 3 is a diagram illustrating a main process of querying inventory status data in a first cache according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a main process flow of querying inventory status data in the second cache according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a main process flow of querying inventory status data in a third cache according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the main modules of a data query device based on multi-level cache according to an embodiment of the present invention;
FIG. 7 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
FIG. 8 is a schematic structural diagram of a computer system suitable for implementing a terminal device or server according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
According to an aspect of the embodiments of the present invention, a data query method based on a multi-level cache is provided.
Fig. 1 is a schematic diagram of a data query method based on a multi-level cache according to an embodiment of the present invention. As shown in fig. 1, the present invention employs a multi-level cache mechanism comprising a first cache, a second cache, and a third cache. The first cache is used to cache the inventory state data of preset callers, the second cache is used to cache the inventory state data of out-of-stock items, and the third cache is used to cache the inventory state data of hot items. The data query process based on the caching mechanism shown in fig. 1 comprises: receiving an inventory query request; determining whether inventory state data corresponding to the inventory query request exists in a cache; if so, using the inventory state data as the result data; otherwise, acquiring basic data corresponding to the inventory query request from a database, determining the inventory state data from the basic data, and using it as the result data.
Fig. 2 is a schematic main flow chart of a data query method based on multi-level cache according to an alternative embodiment of the present invention, and as shown in fig. 2, the data query method includes:
step S201: receiving an inventory query request;
step S202: acquiring inventory state data corresponding to the inventory query request from the first cache;
step S203: determining whether a result was found in the first cache; if so, jumping to step S209; otherwise, proceeding to step S204;
step S204: acquiring inventory state data corresponding to the inventory query request from the second cache;
step S205: determining whether a result was found in the second cache; if so, jumping to step S209; otherwise, proceeding to step S206;
step S206: acquiring inventory state data corresponding to the inventory query request from the third cache;
step S207: determining whether a result was found in the third cache; if so, jumping to step S209; otherwise, proceeding to step S208;
step S208: acquiring basic data corresponding to the inventory query request from the database and determining the inventory state data from the basic data;
step S209: using the queried inventory state data as the result data.
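The S201-S209 flow above can be sketched as a chain of cache tiers consulted in order, with the database as the last resort. This is an illustrative sketch, not the patented implementation; all names (`CacheTier`, `lookup`, `load_from_db`) are hypothetical.

```python
class CacheTier:
    """A single cache level, backed here by a plain dict for illustration."""
    def __init__(self, name):
        self.name = name
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def put(self, key, value):
        self.store[key] = value


def lookup(key, tiers, load_from_db):
    """Consult the tiers in order (steps S202-S207); fall back to the database (S208)."""
    for tier in tiers:
        value = tier.get(key)
        if value is not None:
            return value, tier.name          # step S209: cache hit becomes the result
    return load_from_db(key), "database"     # step S208: compute from base data


# Usage: first cache for preset callers, second for out-of-stock items, third for hot items.
first, second, third = CacheTier("first"), CacheTier("second"), CacheTier("third")
second.put(("sku42", "web", "addr1"), "out_of_stock")

hit, source = lookup(("sku42", "web", "addr1"), [first, second, third],
                     lambda k: "in_stock")
miss, miss_source = lookup(("sku99", "web", "addr1"), [first, second, third],
                           lambda k: "in_stock")
```

A request for the out-of-stock item is intercepted by the second tier; an unknown item falls through all tiers to the database.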
Inventory state data indicates whether an item is in stock or out of stock. The first cache holds the inventory state data of preset callers, such as the callers behind personalized recommendation and advertising. These callers are characterized by a large interface call volume, a wide range of query objects, and a tolerance for some delay in the result data of their query objects. By caching the inventory state data of query objects such as commodities with the caller as the dimension, the first cache can intercept a large number of inventory query requests, reducing the resource consumption caused by many requests simultaneously calling the database query interface under high-concurrency, high-traffic access. This improves single-machine throughput and system concurrency, is stable and reliable, improves the user experience, and saves cost.
The second cache holds the inventory state data of out-of-stock items and can intercept inventory query requests for them. Taking a flash-sale or panic-buying scenario as an example, hot items have limited inventory and a large purchase volume; they can sell out within seconds and usually cannot be restocked immediately. Caching the inventory state data of out-of-stock items in the second cache avoids repeated, futile basic-information queries and computations, achieving a peak-shaving effect (i.e., reducing database access under high-concurrency, high-traffic access). In practice, a cache expiry time can be set according to the actual business scenario, ensuring both that out-of-stock items do not penetrate through to subsequent queries and computations in the short term, and that the cached data is updated promptly after an item's inventory state changes.
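The expiring out-of-stock cache just described might look like the following minimal sketch, with the TTL checked lazily on read. The class name, TTL value, and `now` parameter (used instead of wall-clock time for testability) are assumptions for illustration.

```python
import time


class OutOfStockCache:
    """Second-cache sketch: out-of-stock markers that expire after a short TTL."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> expiry timestamp

    def mark_out_of_stock(self, key, now=None):
        now = time.time() if now is None else now
        self.store[key] = now + self.ttl

    def is_out_of_stock(self, key, now=None):
        now = time.time() if now is None else now
        expiry = self.store.get(key)
        if expiry is None:
            return False
        if now >= expiry:        # TTL elapsed: the item may have been restocked
            del self.store[key]
            return False
        return True


cache = OutOfStockCache(ttl_seconds=5)
cache.mark_out_of_stock("sku42", now=100.0)
hit_before = cache.is_out_of_stock("sku42", now=103.0)  # within TTL: request intercepted
hit_after = cache.is_out_of_stock("sku42", now=106.0)   # TTL elapsed: re-query the database
```

The short TTL is the peak-shaving/freshness trade-off the text describes: long enough to absorb the burst, short enough that a restock is picked up quickly.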
The third cache holds the inventory state data of hot items. Hot items see high concurrency; caching their inventory state data prevents a large number of requests from simultaneously calling the database query interface and reduces resource consumption.
Although fig. 1 shows only three levels, those skilled in the art should understand that the multi-level cache mechanism may comprise four or more levels of cache, and the data cached at the fourth and further levels may be chosen according to the actual situation, for example the inventory data of member-exclusive items, of newly released items, and so on.
When the inventory state data corresponding to the inventory query request does not exist in the cache, basic data corresponding to the request, such as commodity information, inventory quantity, and warehouse attributes, is acquired from the database; the warehouses covering the address are matched according to the order's delivery address, and the state is computed from the inventory quantity to obtain the inventory state data. The database may employ Redis (an open-source storage system written in ANSI C) or the like. In practice, the database can be queried through an RPC (Remote Procedure Call) interface. Querying basic data depends on network interaction and the RPC interface, which heavily affects interface performance, and the subsequent computation consumes significant system resources; under high-concurrency, high-traffic access this can lead to long response times or even brief unavailability, affecting the user experience. The present invention adopts a multi-level cache mechanism comprising the first, second, and third caches, which intercepts a large number of inventory query requests at the caches, avoids unnecessary resource consumption, improves single-machine throughput and system concurrency under high-concurrency, high-traffic access, and is stable and reliable, thereby improving the user experience and saving cost.
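The database fallback step, matching warehouses that cover the delivery address and computing a state from inventory quantity, might be sketched as follows. The field names (`covered_addresses`, `quantity`) and the simple sum-over-warehouses rule are assumptions for the sketch, not the patent's actual schema or state calculation.

```python
def compute_inventory_state(address, warehouses):
    """Return 'in_stock' if any warehouse covering the address has stock, else 'out_of_stock'."""
    total = sum(w["quantity"] for w in warehouses
                if address in w["covered_addresses"])  # match warehouses by coverage
    return "in_stock" if total > 0 else "out_of_stock"


# Hypothetical base data as it might be assembled from the database query.
warehouses = [
    {"covered_addresses": {"beijing"}, "quantity": 3},
    {"covered_addresses": {"shanghai"}, "quantity": 0},
]
state_bj = compute_inventory_state("beijing", warehouses)
state_sh = compute_inventory_state("shanghai", warehouses)
```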
Fig. 3 is a schematic diagram of the main process of querying the inventory status data in the first cache according to an alternative embodiment of the present invention. In this example, the first cache includes: the system comprises a first local cache unit and a distributed cache unit. The data query process comprises the following steps:
step S301: receiving an inventory query request;
step S302: querying the first local cache unit for inventory state data corresponding to the inventory query request;
step S303: determining whether a result was found in the first local cache unit; if so, jumping to step S307; otherwise, proceeding to step S304;
step S304: querying the distributed cache unit for inventory state data corresponding to the inventory query request;
step S305: determining whether a result was found in the distributed cache unit; if so, jumping to step S307; otherwise, proceeding to step S306;
step S306: querying the second cache for inventory state data corresponding to the inventory query request, and writing the queried inventory state data into the first local cache unit;
step S307: using the queried inventory state data as the result data.
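The local-plus-distributed lookup with synchronous write-back (steps S302-S305) can be sketched as below. Plain dicts stand in for Guava/Caffeine and Redis; the class and attribute names are illustrative assumptions.

```python
class TwoLevelCache:
    """Sketch of the first cache: a local tier in front of a distributed tier."""
    def __init__(self):
        self.local = {}        # first local cache unit (per-instance memory)
        self.distributed = {}  # distributed cache unit (e.g. Redis in practice)

    def get(self, key):
        if key in self.local:            # S302-S303: local hit
            return self.local[key]
        if key in self.distributed:      # S304-S305: distributed hit
            value = self.distributed[key]
            self.local[key] = value      # synchronous write-back to the local tier
            return value
        return None                      # miss: fall through to the next cache level


cache = TwoLevelCache()
cache.distributed["sku42"] = "in_stock"
first_read = cache.get("sku42")          # served from the distributed tier
second_read = cache.get("sku42")         # now served from the local tier
locally_cached = "sku42" in cache.local
```

After the first read, the entry sits in local memory, so subsequent reads no longer contribute to the distributed cache's QPS, which is the shielding effect described below.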
Preset callers are characterized by a large interface call volume, a wide range of covered items, and a tolerance for some delay in the in-stock/out-of-stock status of commodities. Because the range of query objects is large and local memory is limited, locally caching the inventory state data of all query objects often cannot meet the requirement. This example therefore adopts a two-level mechanism of "local cache unit + distributed cache unit", which avoids the query-performance degradation that a large query volume would impose on the distributed cache alone.
The first local cache unit may employ Guava (an open-source Java library), Caffeine (a rewrite of the Guava cache using Java 8), or the like, and the distributed cache unit may employ Redis, Memcached (a distributed memory object caching system), or the like. The inventory state data in the first local cache unit and the distributed cache unit may be set to expire after a preset time. When no result is found in the first local cache unit, the distributed cache unit is queried next, and a result obtained from the distributed cache unit is written back to the first local cache unit. The write-back may be synchronous, to improve the freshness of the cached data. The first local cache unit is added, first, to relieve the query pressure on the distributed cache unit: caching some data for a short time prevents the distributed cache unit from being broken through under high-concurrency, high-traffic access, and avoids the query-performance problems caused by an excessive QPS (queries per second) against the distributed cache; and second, to improve query speed by trading space for time.
Optionally, the first local cache unit comprises a preset caller directory and cache records of the query objects corresponding to the preset callers; the cache records are stored as key-value pairs, where the key is a query object identifier and the value is the inventory state data corresponding to that query object. Determining whether the inventory state data corresponding to the inventory query request exists in the first local cache unit comprises: determining whether the caller identifier of the inventory query request exists in the preset caller directory; if so, acquiring from the cache records the inventory state data corresponding to the query object identifier of the inventory query request; otherwise, concluding that the inventory state data corresponding to the inventory query request does not exist in the first local cache unit.
The query object may be defined according to the actual situation: for example, each item is a query object, or the items in a certain administrative area are a query object, or the items in each channel are a query object, and so on. The query object identifier is the unique representation of the query object. Optionally, the query object identifier comprises an item identifier, a channel identifier (e.g., of an item procurement channel or sales channel), and the address identifier of the inventory query request. Illustratively, "Sku (Stock Keeping Unit) + channel + four-level administrative address" is used as the query object identifier.
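A key following the illustrative "Sku + channel + four-level administrative address" scheme might be built as below; the separator, field order, and sample values are assumptions for the sketch, not specified by the patent.

```python
def make_query_key(sku, channel, province, city, county, town):
    """Join the SKU, sales channel, and four-level address into one cache key."""
    return ":".join([sku, channel, province, city, county, town])


# Hypothetical SKU and address values.
key = make_query_key("100012043978", "app", "beijing", "beijing",
                     "haidian", "zhongguancun")
```

The same key format is used for both the local and distributed cache units, so a value written back from the distributed tier lands under the identical key locally.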
By caching the preset calling party directory, the inventory inquiry requests of the cached calling parties can be effectively intercepted at this layer. In this example, operations such as adding, deleting, and modifying calling parties can be performed on the preset calling party directory according to the actual situation, providing extensibility.
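A minimal sketch of this caller-directory lookup follows. The key layout `sku + "_" + channel + "_" + fourLevelAddress` follows the "Sku + channel + four-level address" example in the text; the class, field, and method names are illustrative assumptions, and plain collections stand in for the real cache structures.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the first local cache unit: a preset caller directory plus
// key-value cache records keyed by query-object identifier.
class CallerCache {
    final Set<String> callerDirectory = ConcurrentHashMap.newKeySet(); // preset calling parties
    final Map<String, String> records = new ConcurrentHashMap<>();     // object key -> inventory state

    // Builds the "Sku + channel + four-level address" query-object identifier.
    static String objectKey(String sku, String channel, String fourLevelAddress) {
        return sku + "_" + channel + "_" + fourLevelAddress;
    }

    /** Null when the caller is not preset or no record exists for the object. */
    String lookup(String callerId, String objectKey) {
        if (!callerDirectory.contains(callerId)) {
            return null; // caller not in the preset directory: treated as a miss
        }
        return records.get(objectKey);
    }
}
```

Adding or removing entries from `callerDirectory` at runtime corresponds to the add/delete/modify operations on the preset caller directory mentioned above.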
Optionally, the third cache comprises: a second local cache unit for storing hot spot item identifiers, and a third local cache unit for storing the inventory state data of the hot spot items. Judging whether the inventory state data corresponding to the inventory inquiry request exists in the third cache includes the following steps: judging whether the item identifier of the inventory inquiry request exists in the second local cache unit; if so, acquiring the inventory state data corresponding to the item identifier of the inventory inquiry request from the third local cache unit; otherwise, judging that the inventory state data corresponding to the inventory inquiry request does not exist in the third cache.
The hot spot cache stores the status results (i.e., the inventory state data) of relatively hot items. The criterion for judging a hot spot item can be set according to the actual situation. The item identifier of the inquiry request is a unique representation of the item requested in the inquiry request. Illustratively, as shown in fig. 5, the second local cache unit is the first Guava layer and the third local cache unit is the second Guava layer. An item that has existed in the second local cache unit for longer than a preset duration n (i.e., the time from entering the cache until just before being removed) without expiring is taken as a hot spot item. The second layer caches the status results of the hot spot items, for example in dimensions such as Sku and channel, and the cached status results may cover the stock status of all regions.
Local caching can hold some data for a short time, preventing the database from being penetrated in hot spot scenarios under high concurrency and heavy traffic. By adopting the double-layer cache mechanism of "second local cache unit + third local cache unit", the query efficiency of the third cache can be improved.
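The double-layer lookup can be sketched as follows; a Set and a Map stand in for the two Guava layers described above (an assumption for illustration only), and the names are hypothetical.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the third (hot-spot) cache: the second local cache unit holds
// hot-spot item identifiers, the third holds their inventory state data.
class ThirdCache {
    final Set<String> hotItemIds = ConcurrentHashMap.newKeySet();      // second local cache unit
    final Map<String, String> hotItemData = new ConcurrentHashMap<>(); // third local cache unit

    /** Null when the item is not hot or its state data is absent. */
    String get(String itemId) {
        if (!hotItemIds.contains(itemId)) {
            return null; // not a hot-spot item: the third cache reports a miss
        }
        return hotItemData.get(itemId); // may still be absent, e.g. pending a load
    }
}
```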
Optionally, the method of the embodiment of the present invention further includes: and if the third local cache unit does not have the inventory state data corresponding to the item identifier of the inventory query request, writing a placeholder in the third local cache unit, then loading the inventory state data corresponding to the inventory query request from a database by adopting an asynchronous thread, taking the obtained inventory state data as the result data, and writing back the obtained inventory state data to the third local cache unit.
By writing a placeholder, multiple inquiry requests for the same query object can be prevented from each querying the database, reducing the resource consumption of the system. As shown in fig. 5, if the current hot Sku does not exist in the second Guava layer, a placeholder entry is written before the inventory state data is loaded via the Load method (a remote calling method), and an asynchronous thread is then started to call the all-region inventory state interfaces. The all-region interfaces refer to the stock status interfaces for all addresses of an item. The Load method has a locking mechanism that ensures only one of the concurrent requests enters the method call. Other calls for the same Sku read the placeholder, find that it is an empty result set, and continue with the subsequent computation. This avoids the system resource consumption that would be caused by simultaneous requests for the same Sku each starting an asynchronous thread to call the all-region interfaces.
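A sketch of the placeholder mechanism, under the assumption that a `putIfAbsent` on a concurrent map plays the role of the Load method's lock: only the request that installs the placeholder triggers the asynchronous load, and `loadAllRegionsFromDb` is a hypothetical stand-in for the real all-region interface call.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: the first request for a hot Sku writes a placeholder and starts
// one asynchronous load; concurrent requests for the same Sku see the
// placeholder and skip the backend call.
class PlaceholderLoader {
    static final String PLACEHOLDER = "__LOADING__";
    final Map<String, String> thirdUnit = new ConcurrentHashMap<>();
    volatile int dbCalls = 0; // counts how often the backend is actually hit

    // Stand-in for calling the all-region inventory state interfaces.
    String loadAllRegionsFromDb(String sku) { dbCalls++; return "IN_STOCK"; }

    CompletableFuture<String> get(String sku) {
        String cached = thirdUnit.get(sku);
        if (cached != null && !PLACEHOLDER.equals(cached)) {
            return CompletableFuture.completedFuture(cached); // real data cached
        }
        // putIfAbsent succeeds for exactly one concurrent caller.
        if (thirdUnit.putIfAbsent(sku, PLACEHOLDER) == null) {
            return CompletableFuture.supplyAsync(() -> {
                String status = loadAllRegionsFromDb(sku);
                thirdUnit.put(sku, status); // write back to the third unit
                return status;
            });
        }
        // Placeholder already present: treat as an empty result and move on.
        return CompletableFuture.completedFuture(null);
    }
}
```

Callers that receive null would continue with the subsequent computation (or fall back to the database path), exactly as the text describes for requests that observe the placeholder.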
Optionally, the method of the embodiment of the present invention further includes: monitoring the hot spot items in the second local cache unit; and for any hot spot item in the second local cache unit, deleting that hot spot item when it no longer meets the hot spot condition.
The hot spot condition can be set according to the actual situation. For example, the cache duration of each hot spot item in the second local cache unit is monitored, and any hot spot item whose cache duration exceeds a preset duration threshold is deleted. For another example, the total number of hot spot items in the second local cache unit is monitored, and when the total number exceeds a preset number threshold, at least some of the hot spot items in the second local cache unit are deleted. For another example, the number of requests for an item is monitored: when the number of requests is greater than or equal to a preset count threshold, the item is treated as a hot spot item, and when it falls below the threshold, the item is deleted from the second local cache unit.
Illustratively, each cache entry in the hot spot item queue of the second local cache unit has a fixed expiration time. When an inventory inquiry request arrives for a certain Sku, the current time is recorded in the second local cache unit, and the duration for which the Sku has existed in the second local cache (before being removed due to cache expiration) is calculated; the cache expiration time is greater than or equal to the time the current Sku has existed in the cache. A Sku that is not accessed within the expiration time is removed from the cache queue; moreover, under high concurrency the length of the cache queue is limited, so when more Skus enter the cache queue, some Skus in the queue are evicted.
By updating the hot spot articles in the second local cache unit in real time, the query efficiency can be improved, and the memory consumption can be reduced.
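The two eviction rules above (age threshold and bounded queue length) can be sketched together. This is a single-threaded illustration with hypothetical names and thresholds; a real implementation would rely on the cache library's own expiry and size limits.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;

// Sketch of hot-spot maintenance for the second local cache unit: entries
// record their insertion time; entries older than maxAgeMillis no longer
// meet the hot-spot condition and are deleted, and the queue length is
// capped so the oldest Skus are evicted first. Not thread-safe.
class HotSpotMonitor {
    final LinkedHashMap<String, Long> hotItems = new LinkedHashMap<>(); // id -> insert time
    final long maxAgeMillis;
    final int maxSize;

    HotSpotMonitor(long maxAgeMillis, int maxSize) {
        this.maxAgeMillis = maxAgeMillis;
        this.maxSize = maxSize;
    }

    void put(String itemId, long now) {
        hotItems.put(itemId, now);
        sweep(now);
    }

    void sweep(long now) {
        // Rule 1: drop entries that exceeded the age threshold.
        hotItems.values().removeIf(t -> now - t > maxAgeMillis);
        // Rule 2: cap the queue length, evicting oldest entries first.
        Iterator<String> it = hotItems.keySet().iterator();
        while (hotItems.size() > maxSize && it.hasNext()) {
            it.next();
            it.remove();
        }
    }
}
```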
Optionally, after acquiring the inventory state data corresponding to the inventory inquiry request from the database, the method further includes: writing the obtained inventory state data into the first cache, as shown in fig. 3, and writing the obtained inventory state data into the second cache when it indicates that the item requested by the inventory inquiry is out of stock.
The second cache is a local cache, for example Caffeine or the like is adopted. Local caching can hold some data for a short time, preventing the database from being penetrated in hot spot scenarios under high concurrency and heavy traffic. Fig. 4 is a schematic diagram of the main flow of querying the inventory state data in the second cache according to the embodiment of the present invention. As shown in fig. 4, an inventory inquiry request is received in step S401, and the inventory state data corresponding to the inventory inquiry request is queried in the second cache in step S402. Step S403 judges whether a result is found; if so, go to step S405, otherwise go to step S404. Step S404 loads the inventory state data from the database and writes it into the second cache. Step S405 takes the queried inventory state data as the result data.
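The S401–S405 flow, combined with the policy of caching only out-of-stock results in the second cache, can be sketched as follows. The Map and the loader function are stand-ins for Caffeine and the real database (illustrative assumptions), and the status strings are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of the second-cache flow: query the local cache first (S402/S403);
// on a miss, load from the database (S404) and cache the result only when
// it says the item is out of stock; return the result (S405).
class OutOfStockCache {
    final Map<String, String> secondCache = new ConcurrentHashMap<>();
    final Function<String, String> database;

    OutOfStockCache(Function<String, String> database) {
        this.database = database;
    }

    String query(String key) {
        String status = secondCache.get(key);   // S402: query the second cache
        if (status != null) {
            return status;                      // S403 hit -> S405
        }
        status = database.apply(key);           // S404: load from the database
        if ("OUT_OF_STOCK".equals(status)) {
            secondCache.put(key, status);       // only out-of-stock results are cached here
        }
        return status;                          // S405: result data
    }
}
```

Caching only the out-of-stock results keeps this layer small while still absorbing the repeated queries that sold-out items tend to attract.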
When writing back, either a synchronous or an asynchronous write-back mode can be adopted. Synchronous and asynchronous here refer to the message communication mechanism. A synchronous call does not return until a result is obtained; once the call returns, the return value is available. In other words, the caller actively waits for the result of the call. Asynchronous is the opposite: the call returns immediately after being issued, without a result. In other words, when an asynchronous call is issued, the caller does not get the result immediately; instead, after the call is issued, the callee notifies the caller through a status or notification, or handles the call through a callback function. Illustratively, a synchronous write-back mode is adopted when writing data into the local cache, and an asynchronous write-back mode is adopted when writing data into the distributed cache, so as to reduce resource consumption.
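The contrast between the two write-back modes can be sketched with `CompletableFuture`; the maps are hypothetical stand-ins for the local and distributed caches.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the two write-back modes: local writes complete on the calling
// thread; distributed writes are handed to another thread and the caller
// returns immediately with a future it may later inspect.
class WriteBack {
    final Map<String, String> localCache = new ConcurrentHashMap<>();
    final Map<String, String> distributedCache = new ConcurrentHashMap<>();

    void writeBackSync(String key, String status) {
        localCache.put(key, status); // caller waits until the write is done
    }

    CompletableFuture<Void> writeBackAsync(String key, String status) {
        // Caller gets the future back immediately; completion is signalled later.
        return CompletableFuture.runAsync(() -> distributedCache.put(key, status));
    }
}
```

Using the asynchronous mode for the (slower, networked) distributed cache keeps the request thread from blocking on remote I/O, which matches the resource-saving rationale given above.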
After the inventory state data are acquired from the database, they are written into the first cache or the second cache, so that subsequent identical inventory inquiry requests can be answered from the first cache or the second cache, reducing the consumption of system resources.
According to a second aspect of the embodiments of the present invention, there is provided an apparatus for implementing the above method.
Fig. 6 is a schematic diagram of main modules of a data query apparatus based on a multi-level cache according to an embodiment of the present invention, and as shown in fig. 6, the data query apparatus 600 based on a multi-level cache includes:
a request receiving module 601, used for receiving an inventory query request;
a cache query module 602, configured to determine whether inventory state data corresponding to the inventory query request exists in a cache; the cache includes: a first cache, a second cache, and a third cache; the first cache is used for caching the inventory state data of a preset calling party, and the second cache is used for caching the inventory state data of out-of-stock items; the third cache is used for caching the inventory state data of hot spot items;
a result determining module 603, used for, if so, taking the inventory state data as result data; otherwise, acquiring basic data corresponding to the inventory query request from a database, determining the inventory state data according to the basic data, and taking the inventory state data as the result data.
Optionally, the determining, by the cache querying module, whether inventory status data corresponding to the inventory querying request exists in a cache includes:
judging whether inventory state data corresponding to the inventory inquiry request exists in a first cache or not; if so, taking the inventory state data as the result data; otherwise, judging whether inventory state data corresponding to the inventory inquiry request exists in a second cache or not;
if so, taking the inventory state data as the result data; otherwise, judging whether inventory state data corresponding to the inventory inquiry request exists in a third cache or not;
if so, taking the inventory state data as the result data; otherwise, judging that the inventory state data corresponding to the inventory inquiry request does not exist in the cache.
Optionally, the first cache includes: the system comprises a first local cache unit and a distributed cache unit;
the cache query module judges whether the first cache has inventory state data corresponding to the inventory query request, and the judgment includes: judging whether inventory state data corresponding to the inventory inquiry request exists in a first local cache unit or not; if so, taking the inventory state data as the result data; otherwise, judging whether inventory state data corresponding to the inventory inquiry request exists in the distributed cache unit; if so, writing the stock state data into a first local cache unit, and taking the stock state data as the result data; otherwise, judging that the inventory state data corresponding to the inventory inquiry request does not exist in the first cache.
Optionally, the first local cache unit includes: a preset calling party directory and cache records of the query objects corresponding to the preset calling parties; the cache records are stored in a key-value pair mode, wherein the key is a query object identifier, and the value is inventory state data corresponding to the query object;
the cache query module judges whether the first local cache unit has inventory state data corresponding to the inventory query request, and the judgment includes: judging whether a calling party identifier of the inventory inquiry request exists in the preset calling party directory or not; if yes, acquiring inventory state data corresponding to the query object identifier of the inventory query request from the cache record; otherwise, judging that the inventory state data corresponding to the inventory inquiry request does not exist in the first local cache unit.
Optionally, the third cache comprises: the second local cache unit is used for storing the hot spot article identifier, and the third local cache unit is used for storing the inventory state data of the hot spot article;
the cache query module judges whether the third cache has inventory state data corresponding to the inventory query request, and the judgment includes: judging whether the item identifier of the inventory inquiry request exists in the second local cache unit; if so, acquiring the inventory state data corresponding to the item identifier of the inventory inquiry request from the third local cache unit; otherwise, judging that the inventory state data corresponding to the inventory inquiry request does not exist in the third cache.
Optionally, the cache query module is further configured to: and if the third local cache unit does not have the inventory state data corresponding to the item identifier of the inventory query request, writing a placeholder in the third local cache unit, then loading the inventory state data corresponding to the inventory query request from a database by adopting an asynchronous thread, taking the obtained inventory state data as the result data, and writing back the obtained inventory state data to the third local cache unit.
Optionally, the cache query module is further configured to: monitoring hot spot articles in the second local cache unit; and for any hot spot item in the second local cache unit, deleting the any hot spot item when the any hot spot item does not meet the hot spot condition.
Optionally, the result determination module is further configured to: after acquiring the inventory state data corresponding to the inventory inquiry request from a database, write the acquired inventory state data into the first cache, and, when the acquired inventory state data indicates that the item requested by the inventory inquiry request is out of stock, write the acquired inventory state data into the second cache.
Optionally, the query object identification includes: and the item identifier, the channel identifier and the address identifier of the inventory inquiry request.
According to a third aspect of the embodiments of the present invention, there is provided an electronic device for data query based on multi-level cache, including:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method provided by the first aspect of the embodiments of the present invention.
According to a fourth aspect of embodiments of the present invention, there is provided a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the method provided by the first aspect of embodiments of the present invention.
Fig. 7 illustrates an exemplary system architecture 700 of a multi-level cache based data query method or a multi-level cache based data query apparatus to which an embodiment of the present invention may be applied.
As shown in fig. 7, the system architecture 700 may include terminal devices 701, 702, 703, a network 704, and a server 705. The network 704 serves to provide a medium for communication links between the terminal devices 701, 702, 703 and the server 705. Network 704 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 701, 702, 703 to interact with a server 705 over a network 704, to receive or send messages or the like. The terminal devices 701, 702, 703 may have installed thereon various communication client applications, such as a shopping-like application, a web browser application, a search-like application, an instant messaging tool, a mailbox client, social platform software, etc. (by way of example only).
The terminal devices 701, 702, 703 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 705 may be a server providing various services, such as a background management server (for example only) providing support for shopping websites browsed by users using the terminal devices 701, 702, 703. The background management server may analyze and otherwise process received data such as a product information query request, and feed back a processing result (for example, target push information or product information, merely as an example) to the terminal device.
It should be noted that the data query method based on multi-level cache provided by the embodiment of the present invention is generally executed by the server 705, and accordingly, the data query apparatus based on multi-level cache is generally disposed in the server 705.
It should be understood that the number of terminal devices, networks, and servers in fig. 7 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 8, shown is a block diagram of a computer system 800 suitable for use with a terminal device implementing an embodiment of the present invention. The terminal device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 8, the computer system 800 includes a Central Processing Unit (CPU)801 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the system 800 are also stored. The CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
The following components are connected to the I/O interface 805: an input portion 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), a speaker, and the like; a storage portion 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 810 as necessary, so that a computer program read out therefrom is installed into the storage section 808 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. The computer program executes the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 801.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes a request receiving module used for receiving an inventory query request; a cache query module used for judging whether the inventory state data corresponding to the inventory query request exists in a cache, the cache including a first cache, a second cache, and a third cache, wherein the first cache is used for caching the inventory state data of a preset calling party, the second cache is used for caching the inventory state data of out-of-stock items, and the third cache is used for caching the inventory state data of hot spot items; and a result determining module used for, if so, taking the inventory state data as result data, and otherwise acquiring basic data corresponding to the inventory query request from a database, determining the inventory state data according to the basic data, and taking the inventory state data as the result data. The names of these modules do not in some cases limit the modules themselves; for example, the request receiving module may also be described as "a module that receives an inventory query request".
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to: receive an inventory query request; judge whether inventory state data corresponding to the inventory query request exists in a cache, the cache including a first cache, a second cache, and a third cache, wherein the first cache is used for caching the inventory state data of a preset calling party, the second cache is used for caching the inventory state data of out-of-stock items, and the third cache is used for caching the inventory state data of hot spot items; if so, take the inventory state data as result data; otherwise, acquire basic data corresponding to the inventory query request from a database, determine the inventory state data according to the basic data, and take the inventory state data as the result data.
According to the technical scheme of the embodiment of the invention, a multi-level cache mechanism comprising the first cache, the second cache and the third cache is adopted, so that the single-machine throughput and the system concurrency capability can be improved under the conditions of high concurrency and large-flow access, the operation is stable and reliable, the user experience is improved, and the cost is saved.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (12)

1. A data query method based on multi-level cache is characterized by comprising the following steps:
receiving an inventory query request;
judging whether inventory state data corresponding to the inventory inquiry request exists in a cache; the cache includes: a first cache, a second cache, and a third cache; the first cache is used for caching the inventory state data of a preset calling party, and the second cache is used for caching the inventory state data of out-of-stock items; the third cache is used for caching the inventory state data of hot spot items;
if yes, taking the inventory state data as result data; otherwise, acquiring basic data corresponding to the inventory query request from a database, determining the inventory state data according to the basic data, and taking the inventory state data as result data.
2. The method of claim 1, wherein determining whether inventory status data corresponding to the inventory query request exists in a cache comprises:
judging whether inventory state data corresponding to the inventory inquiry request exists in a first cache or not; if so, taking the inventory state data as the result data; otherwise, judging whether inventory state data corresponding to the inventory inquiry request exists in a second cache or not;
if so, taking the inventory state data as the result data; otherwise, judging whether inventory state data corresponding to the inventory inquiry request exists in a third cache or not;
if so, taking the inventory state data as the result data; otherwise, judging that the inventory state data corresponding to the inventory inquiry request does not exist in the cache.
3. The method of claim 2, wherein the first caching comprises: the system comprises a first local cache unit and a distributed cache unit;
judging whether the inventory state data corresponding to the inventory inquiry request exists in the first cache or not, wherein the judging step comprises the following steps: judging whether inventory state data corresponding to the inventory inquiry request exists in a first local cache unit or not; if so, taking the inventory state data as the result data; otherwise, judging whether inventory state data corresponding to the inventory inquiry request exists in the distributed cache unit; if so, writing the stock state data into a first local cache unit, and taking the stock state data as the result data; otherwise, judging that the inventory state data corresponding to the inventory inquiry request does not exist in the first cache.
4. The method of claim 3, wherein the first local cache unit comprises: a preset calling party directory and cache records of the query objects corresponding to the preset calling parties; the cache records are stored in a key-value pair mode, wherein the key is a query object identifier, and the value is inventory state data corresponding to the query object;
judging whether the inventory state data corresponding to the inventory inquiry request exists in the first local cache unit or not, wherein the judging step comprises the following steps: judging whether a calling party identifier of the inventory inquiry request exists in the preset calling party directory or not; if yes, acquiring inventory state data corresponding to the query object identifier of the inventory query request from the cache record; otherwise, judging that the inventory state data corresponding to the inventory inquiry request does not exist in the first local cache unit.
5. The method of claim 2, wherein the third cache comprises: the second local cache unit is used for storing the hot spot article identifier, and the third local cache unit is used for storing the inventory state data of the hot spot article;
judging whether the third cache has inventory state data corresponding to the inventory inquiry request or not, wherein the judging step comprises the following steps: judging whether the item identifier of the inventory inquiry request exists in the second local cache unit; if so, acquiring the inventory state data corresponding to the item identifier of the inventory inquiry request from the third local cache unit; otherwise, judging that the inventory state data corresponding to the inventory inquiry request does not exist in the third cache.
6. The method of claim 5, further comprising: and if the third local cache unit does not have the inventory state data corresponding to the item identifier of the inventory query request, writing a placeholder in the third local cache unit, then loading the inventory state data corresponding to the inventory query request from a database by adopting an asynchronous thread, taking the obtained inventory state data as the result data, and writing back the obtained inventory state data to the third local cache unit.
7. The method of claim 5, further comprising: monitoring hot spot articles in the second local cache unit; and for any hot spot item in the second local cache unit, deleting the any hot spot item when the any hot spot item does not meet the hot spot condition.
8. The method of claim 1, further comprising, after acquiring the inventory state data corresponding to the inventory query request from a database: writing the acquired inventory state data into the first cache; and, when the acquired inventory state data indicates that the item targeted by the inventory query request is out of stock, writing the acquired inventory state data into the second cache.
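The write-back policy of claim 8 can be sketched as follows; the out-of-stock encoding and the dict-shaped caches are assumptions for illustration:

```python
OUT_OF_STOCK = 0  # assumed encoding: state 0 means the item is out of stock

def write_back(first_cache, second_cache, query_object_id, state):
    # Always refresh the first (per-caller) cache with the database result;
    # additionally record the entry in the second cache when out of stock,
    # so later queries for the same unavailable item skip the database.
    first_cache[query_object_id] = state
    if state == OUT_OF_STOCK:
        second_cache[query_object_id] = state
```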
9. The method of claim 4, wherein the query object identifier comprises the item identifier, the channel identifier, and the address identifier of the inventory query request.
10. A data query device based on a multi-level cache, comprising:
a request receiving module for receiving an inventory query request;
a cache query module for determining whether inventory state data corresponding to the inventory query request exists in a cache, the cache comprising a first cache, a second cache, and a third cache, wherein the first cache is used for caching the inventory state data of preset callers, the second cache is used for caching the inventory state data of out-of-stock items, and the third cache is used for caching the inventory state data of hot spot items; and
a result determining module for, if the inventory state data exists in the cache, taking the inventory state data as result data, and otherwise acquiring inventory state data corresponding to the inventory query request from a database as the result data.
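Taken together, the claimed device amounts to checking each cache level in order before falling back to the database; a minimal sketch, with all names hypothetical and each level simplified to a dict:

```python
class MultiLevelCacheQuery:
    """Sketch of claim 10: three cache levels consulted before the database."""

    def __init__(self, first, second, third, db):
        self.caches = [first, second, third]  # first, second, third cache
        self.db = db

    def query(self, query_object_id):
        # Cache query module: probe each level in order.
        for cache in self.caches:
            data = cache.get(query_object_id)
            if data is not None:
                return data  # result determining module: cache hit
        # Result determining module: fall back to the database on a full miss.
        return self.db[query_object_id]
```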
11. An electronic device for data query based on multi-level cache, comprising:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-9.
12. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-9.
CN202010152491.7A 2020-03-06 2020-03-06 Data query method and device based on multi-level cache Pending CN113360528A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010152491.7A CN113360528A (en) 2020-03-06 2020-03-06 Data query method and device based on multi-level cache


Publications (1)

Publication Number Publication Date
CN113360528A true CN113360528A (en) 2021-09-07

Family

ID=77524126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010152491.7A Pending CN113360528A (en) 2020-03-06 2020-03-06 Data query method and device based on multi-level cache

Country Status (1)

Country Link
CN (1) CN113360528A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115913646A (en) * 2022-10-21 2023-04-04 网易(杭州)网络有限公司 Method and device for intercepting blacklist object, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017050014A1 (en) * 2015-09-21 2017-03-30 北京奇虎科技有限公司 Data storage processing method and device
CN108132958A (en) * 2016-12-01 2018-06-08 阿里巴巴集团控股有限公司 A kind of multi-level buffer data storage, inquiry, scheduling and processing method and processing device
CN109977129A (en) * 2019-03-28 2019-07-05 中国联合网络通信集团有限公司 Multi-stage data caching method and equipment


Similar Documents

Publication Publication Date Title
CN110598138A (en) Cache-based processing method and device
CN103607312A (en) Data request processing method and system for server system
CN110909022A (en) Data query method and device
CN112783887A (en) Data processing method and device based on data warehouse
CN112445988A (en) Data loading method and device
CN109165078B (en) Virtual distributed server and access method thereof
CN113360528A (en) Data query method and device based on multi-level cache
CN112948498A (en) Method and device for generating global identification of distributed system
CN113138943B (en) Method and device for processing request
CN112688982B (en) User request processing method and device
CN109213815B (en) Method, device, server terminal and readable medium for controlling execution times
CN110807040B (en) Method, device, equipment and storage medium for managing data
CN112711572B (en) Online capacity expansion method and device suitable for database and table division
CN113127416A (en) Data query method and device
CN112699116A (en) Data processing method and system
CN113626176A (en) Service request processing method and device
CN113220981A (en) Method and device for optimizing cache
CN113704242A (en) Data processing method and device
CN112783914A (en) Statement optimization method and device
CN113535768A (en) Production monitoring method and device
CN111240810A (en) Transaction management method, device, equipment and storage medium
CN112860739A (en) Hotspot data processing method and device, service processing system and storage medium
CN111698273B (en) Method and device for processing request
US20240089339A1 (en) Caching across multiple cloud environments
CN113760967A (en) Data query method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination