CN114201466B - Anti-cache breakdown method, device, equipment and readable storage medium - Google Patents


Info

Publication number
CN114201466B
CN114201466B (application CN202111533275.8A)
Authority
CN
China
Prior art keywords
query
data
queried
requests
query requests
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111533275.8A
Other languages
Chinese (zh)
Other versions
CN114201466A (en)
Inventor
庄志辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202111533275.8A priority Critical patent/CN114201466B/en
Publication of CN114201466A publication Critical patent/CN114201466A/en
Application granted granted Critical
Publication of CN114201466B publication Critical patent/CN114201466B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/176Support for shared access to files; File sharing support
    • G06F16/1767Concurrency control, e.g. optimistic or pessimistic approaches
    • G06F16/1774Locking methods, e.g. locking methods for file systems allowing shared and concurrent access to files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/2228Indexing structures
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to the field of data storage, and discloses a method for preventing cache breakdown, which comprises the following steps: when query requests are received, screening homologous query requests from the query requests, sending the first of the homologous query requests to a local cache, and locking the remaining homologous query requests; screening heterogeneous query requests from the query requests, and judging whether query data corresponding to the heterogeneous query requests exist in the local cache; when the query data corresponding to the heterogeneous query requests do not exist in the local cache, sending the first of the query requests corresponding to each group of identical data tags to be queried to a background database, and locking the remaining query requests corresponding to those tags. The invention also provides a device, equipment and storage medium for preventing cache breakdown. The invention can reduce the probability of cache breakdown of the background database.

Description

Anti-cache breakdown method, device, equipment and readable storage medium
Technical Field
The present invention relates to the field of data storage, and in particular, to a method and apparatus for preventing cache breakdown, an electronic device, and a readable storage medium.
Background
Under normal conditions, when a user requests data, the IO unit first checks whether the required data exists in the cache; only when it does not, the IO unit goes to the database to acquire the data.
When a large number of users simultaneously request data that is absent from the cache but present in the database, the concurrent users all miss the cache and go to the database at the same time, so the pressure on the database rises instantaneously; this is cache breakdown. Cache breakdown puts enormous pressure on the database, slows the speed at which users acquire data, and may even bring the server down.
Disclosure of Invention
The invention provides a method and a device for preventing cache breakdown, electronic equipment and a computer readable storage medium, and aims to reduce the occurrence probability of cache breakdown of a background database.
In order to achieve the above object, the present invention provides a method for preventing cache breakdown, including:
when more than a preset number of inquiry requests are received within a preset time period, acquiring the sources of the inquiry requests;
screening query requests with the same sources from the query requests to obtain homologous query requests, selecting a first query request from the homologous query requests to send the first query request to a local cache according to the receiving time of the query requests, and locking the rest query requests in the homologous query requests by using a preset anti-replay lock;
screening query requests with different sources from the query requests to obtain heterogeneous query requests, and extracting the data tags to be queried from the heterogeneous query requests;
judging whether query data corresponding to the data tag to be queried exists in a local cache;
if the query data corresponding to the data tag to be queried exists in the local cache, sending the query request corresponding to the data tag to be queried to the local cache;
if the query data corresponding to the data tag to be queried does not exist in the local cache, screening the same data tag to be queried and different data tags to be queried from the data tags to be queried;
according to the receiving time of the query requests corresponding to the same data tags to be queried, a first query request in the query requests corresponding to the same data tags to be queried is sent to a background database, and the query requests corresponding to the rest of the same data tags to be queried are locked by using a preset cache refreshing lock;
and placing the query requests corresponding to the different data tags to be queried into a preset message queue, and sending the query requests corresponding to the different data tags to be queried to a background database according to the message queue.
Optionally, the screening the query requests with the same source from the query requests to obtain the homologous query request includes:
analyzing the query request to obtain an address index label corresponding to the query request;
extracting the IP address of the query request from the address index tag;
and selecting the query requests with the same IP addresses from the query requests to obtain homologous query requests.
Optionally, the parsing the query request to obtain an address index tag corresponding to the query request includes:
analyzing the query request by using a preset service connector to obtain a request head of the query request;
and extracting the uniform resource locator in the request header, and translating the uniform resource locator to obtain the address index label corresponding to the query request.
Optionally, the selecting the query requests with different sources from the query requests to obtain the heterogeneous query requests includes:
analyzing the query request to obtain an address index label corresponding to the query request;
extracting the IP address of the query request from the address index tag;
and selecting the query requests with different IP addresses from the query requests to obtain heterogeneous query requests.
Optionally, according to the receiving time of the query requests corresponding to the same data tags to be queried, a first query request in the query requests corresponding to the same data tags to be queried is sent to a background database, and the query requests corresponding to the remaining same data tags to be queried are locked by using a preset cache refreshing lock, including:
acquiring a time stamp of the query request, selecting a first query request from query requests corresponding to the same data tags to be queried according to the time stamp, locking other query requests except the first query request in the query requests corresponding to the same data tags to be queried by using a preset cache refreshing lock, and sending the first query request to the background database for reading operation;
writing the query data read from the background database into a local cache;
and sending the rest inquiry requests except the first inquiry request in the inquiry requests corresponding to the same data label to be inquired to the local cache.
Optionally, the determining whether the query data corresponding to the to-be-queried data tag exists in the local cache includes:
extracting the index codes from the data tags to be queried;
inquiring whether the local cache contains query data corresponding to the index code according to the index code;
when the query data corresponding to the data tag to be queried exists in the local cache, sending the query request corresponding to the data tag to be queried to the local cache;
when the query data corresponding to the data tag to be queried does not exist in the local cache, the same data tag to be queried and different data tags to be queried are screened out from the data tags to be queried.
Optionally, the sending, according to the message queue, the query requests corresponding to the different data tags to be queried to a background database for reading operations includes:
analyzing the query request to obtain time stamps corresponding to the query requests corresponding to the different data tags to be queried;
according to the time stamps, sequentially sending the query requests corresponding to the different data tags to be queried to the background database, and continuously monitoring whether the number of query requests in the background database exceeds a preset threshold;
when the number of query requests in the background database exceeds the preset threshold, stopping sending query requests from the message queue to the background database until the number falls below the preset threshold again.
In order to solve the above problems, the present invention further provides an anti-cache breakdown device, which includes:
the query request receiving module is used for acquiring the sources of the query requests when more than a preset number of query requests are received within a preset time period;
the homologous query request processing module is used for screening query requests with the same sources from the query requests to obtain homologous query requests, selecting a first query request from the homologous query requests to send the first query request to a local cache according to the receiving time of the query requests, and locking the rest query requests in the homologous query requests by utilizing a preset anti-replay lock;
the heterogeneous query request processing module is used for screening query requests with different sources from the query requests to obtain heterogeneous query requests, extracting the data tags to be queried from the heterogeneous query requests, sending the query requests corresponding to a data tag to be queried to a local cache when query data corresponding to that tag exist in the local cache, and, when the query data do not exist in the local cache, screening the same data tags to be queried and different data tags to be queried from the data tags to be queried, sending the first of the query requests corresponding to the same data tags to be queried to a background database according to the receiving time of those query requests, locking the remaining query requests corresponding to the same data tags to be queried by using a preset cache refreshing lock, placing the query requests corresponding to the different data tags to be queried into a preset message queue, and sending them to the background database according to the message queue.
In order to solve the above-mentioned problems, the present invention also provides an electronic apparatus including:
a memory storing at least one computer program; and
and the processor executes the computer program stored in the memory to implement the anti-cache breakdown method.
In order to solve the above-mentioned problems, the present invention also provides a computer readable storage medium having stored therein at least one computer program that is executed by a processor in an electronic device to implement the above-mentioned anti-cache breakdown method.
According to the anti-cache breakdown method, device, electronic equipment and readable storage medium, query requests from the same source are first locked, which limits a single user from performing a large number of operations in a short time and reduces the resource occupation and back-end service pressure caused by repeated requests; secondly, query requests under high concurrency are locked, which reduces the database pressure caused by repeated requests and prevents cache breakdown caused by excessive database pressure; finally, the message queue is used to process the highly concurrent, distinct query requests asynchronously, which reduces the pressure on the database and prevents it from being blocked.
Drawings
Fig. 1 is a flow chart of a method for preventing cache breakdown according to an embodiment of the present invention;
fig. 2 is a schematic block diagram of an anti-cache breakdown device according to an embodiment of the invention;
fig. 3 is a schematic diagram of an internal structure of an electronic device for implementing a method for preventing cache breakdown according to an embodiment of the present invention;
the achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment of the invention provides a method for preventing cache breakdown. The execution body of the anti-cache breakdown method includes, but is not limited to, at least one of a server, a terminal, and the like that can be configured to execute the method provided by the embodiment of the application. In other words, the anti-cache breakdown method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server may be an independent server, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms.
Referring to fig. 1, which is a schematic flow chart of a method for preventing cache breakdown according to an embodiment of the present invention, in an embodiment of the present invention, the method for preventing cache breakdown includes:
s1, when more than a preset number of inquiry requests are received in a preset time period, acquiring sources of the inquiry requests.
In the embodiment of the present invention, the preset time period may be, for example, 5s, and the preset number may be set according to the maximum access load the background database can bear, for example, 20.
Further, the query request may be a request action sent by a user to acquire related data from the database; for example, in the QQ message interface, a user may pull down the page to obtain the number of unread messages.
In the embodiment of the invention, the source of the query request refers to the IP address for sending the query request.
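The trigger condition of S1 can be sketched as a sliding-window counter. The window length (5s) and request limit (20) are the example values from this embodiment; the class name and API below are illustrative, not part of the patent.

```python
import time
from collections import deque

class RequestWindow:
    """Flag when more than a preset number of query requests arrive
    within a preset time period (e.g. 20 requests in 5 seconds)."""

    def __init__(self, max_requests=20, window_seconds=5.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.timestamps = deque()

    def record(self, now=None):
        """Record one request; return True when the window is exceeded."""
        now = time.monotonic() if now is None else now
        self.timestamps.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_requests
```

Only when `record` returns True would the source-acquisition and screening steps below be triggered.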
S2, screening out query requests with the same sources from the query requests to obtain homologous query requests, selecting a first query request from the homologous query requests according to the receiving time of the query requests, sending the first query request to a local cache, and locking the rest query requests in the homologous query requests by using a preset anti-replay lock.
In the embodiment of the present invention, a homologous query request refers to a query request from the same IP address, that is, a query request sent repeatedly by the same IP address within the preset time period, for example, repeatedly pulling down the QQ message interface to obtain the number of unread messages. According to the receiving time of the query requests, the embodiment of the invention selects the first query request from the homologous query requests to send to the local cache, and locks the remaining homologous query requests with a preset anti-replay lock.
It should be appreciated that within the preset time, for example 5s, query requests from the same IP address are usually sent multiple times. Therefore, to reduce the pressure on the local cache and the background database, the embodiment of the present invention only sends the first of the homologous query requests to the local cache and locks the remaining homologous query requests with the preset anti-replay lock, thereby preventing the resource waste caused by obtaining the same query data multiple times.
In detail, the step of screening the query requests with the same sources from the query requests to obtain homologous query requests includes:
analyzing the query request to obtain an address index label corresponding to the query request;
extracting the IP address of the query request from the address index tag;
and selecting the query requests with the same IP addresses from the query requests to obtain homologous query requests.
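The screening steps above can be sketched as grouping by source IP, assuming each parsed request is reduced to its IP address and receiving time (the dict keys below are illustrative):

```python
from collections import defaultdict

def split_homologous(requests):
    """Group query requests by source IP; for each group, the earliest
    request is forwarded and the rest are marked as locked.
    `requests` is a list of dicts with 'ip' and 'recv_time' keys,
    a simplified stand-in for the parsed address index tag."""
    by_ip = defaultdict(list)
    for req in requests:
        by_ip[req["ip"]].append(req)
    forwarded, locked = [], []
    for group in by_ip.values():
        group.sort(key=lambda r: r["recv_time"])
        forwarded.append(group[0])   # first request goes to the local cache
        locked.extend(group[1:])     # the rest are held by the anti-replay lock
    return forwarded, locked
```

Groups of size one (each IP seen only once) simply forward their single request, which matches the heterogeneous path described next.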
Further, the parsing the query request to obtain an address index tag corresponding to the query request includes:
analyzing the query request by using a preset service connector to obtain a request head of the query request;
and extracting the uniform resource locator in the request header, and translating the uniform resource locator to obtain the address index label corresponding to the query request.
In the embodiment of the present invention, the uniform resource locator may be a network address of the user. The address index tag may be an index of a network address corresponding to the query request.
S3, screening query requests with different sources from the query requests to obtain heterogeneous query requests, and extracting the data tags to be queried from the heterogeneous query requests.
In the embodiment of the present invention, heterogeneous query requests refer to query requests from different IP addresses, that is, query requests sent by multiple IP addresses within the preset time period, for example, requests from multiple users who want to read the same WeChat article in the same time period.
In the embodiment of the present invention, the data tag to be queried may be an index of query data corresponding to the query request. For example, if the query data corresponding to the query request of the user is an article, the data tag to be queried may be an ID of the article, or a character such as a title, which can identify the query data.
In the embodiment of the present invention, the step of screening query requests with different sources from the query requests to obtain heterogeneous query requests includes:
analyzing the query request to obtain an address index label corresponding to the query request;
extracting the IP address of the query request from the address index tag;
and selecting the query requests with different IP addresses from the query requests to obtain heterogeneous query requests.
S4, judging whether query data corresponding to the data tag to be queried exists in the local cache.
In the embodiment of the invention, the query data, i.e., the data required by the user, is generally stored in both the local cache and the background database. The data in the local cache is generally time-limited, i.e., automatically deleted after being stored for a period of time.
In detail, the determining whether the query data corresponding to the to-be-queried data tag exists in the local cache includes:
extracting the index codes from the data tags to be queried;
inquiring whether the local cache contains query data corresponding to the index code according to the index code;
and when the query data corresponding to the data tag to be queried exists in the local cache, entering S5, and sending the query request corresponding to the data tag to be queried to the local cache.
In the embodiment of the invention, when the local cache has query data corresponding to the data tag to be queried, the corresponding query data can be directly obtained from the local cache, so that all query requests corresponding to the data tag to be queried are sent to the local cache.
And when the query data corresponding to the data label to be queried does not exist in the local cache, entering S6, and screening the same data label to be queried and different data labels to be queried from the data labels to be queried.
In the embodiment of the invention, the data tags to be queried corresponding to the query requests for acquiring the same query data are used as the same data tags to be queried, and the data tags to be queried corresponding to the query requests for acquiring different query data are used as different data tags to be queried.
In the embodiment of the invention, when the query data corresponding to the data tag to be queried does not exist in the local cache, the query request is indicated to need to acquire the query data from the background database, and the query request is required to be classified in order to prevent the occurrence of cache breakdown, so that the pressure of the background database is reduced, and the possibility of occurrence of the cache breakdown phenomenon is reduced.
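Steps S4–S6 amount to a cache-hit check followed by grouping the misses by tag. A minimal sketch, under the assumption that a data tag to be queried is a plain string and the local cache is a mapping from tag to data (all names are illustrative):

```python
from collections import Counter, defaultdict

def route_misses(requests, cache):
    """Split heterogeneous requests into cache hits, groups of requests
    sharing the same to-be-queried tag, and requests with unique tags.
    `requests` is a list of (request_id, tag) pairs; `cache` maps
    tag -> data, a simplified local cache."""
    hits, misses = [], []
    for req_id, tag in requests:
        (hits if tag in cache else misses).append((req_id, tag))
    tag_counts = Counter(tag for _, tag in misses)
    same = defaultdict(list)   # tag -> requests sharing that tag (S7 path)
    different = []             # requests with a unique tag (S8 message queue)
    for req_id, tag in misses:
        if tag_counts[tag] > 1:
            same[tag].append(req_id)
        else:
            different.append((req_id, tag))
    return hits, dict(same), different
```

The `same` groups then go through the cache-refresh lock of S7, while `different` goes to the message queue of S8.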
And S7, according to the receiving time of the query requests corresponding to the same data tags to be queried, sending a first query request in the query requests corresponding to the same data tags to be queried to a background database, and locking the query requests corresponding to the rest of the same data tags to be queried by using a preset cache refreshing lock.
In the embodiment of the invention, the cache refreshing lock is used for locking the query requests corresponding to the rest of the same data tags to be queried in the local cache so as to prevent the background database from simultaneously receiving a large number of query requests for acquiring the same query data, thereby causing cache breakdown.
In the embodiment of the invention, the background database can write the read data into the local cache, so that the corresponding query request corresponding to the rest data tags to be queried can directly obtain the corresponding query data from the local cache, thereby preventing the occurrence of cache breakdown and reducing the pressure of the background database.
In detail, the S7 includes:
acquiring a time stamp of the query request, selecting a first query request from query requests corresponding to the same data tags to be queried according to the time stamp, locking other query requests except the first query request in the query requests corresponding to the same data tags to be queried by using a preset cache refreshing lock, and sending the first query request to the background database for reading operation;
writing the query data read from the background database into a local cache;
and sending the rest inquiry requests except the first inquiry request in the inquiry requests corresponding to the same data label to be inquired to the local cache.
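The cache-refresh lock of S7 behaves like a per-tag single-flight lock: one request refreshes the cache from the background database while the rest wait and are then served from the cache. A hedged sketch, with `load_from_db` standing in for the background-database read (not an API defined by the patent):

```python
import threading

class CacheRefreshLock:
    """Per-tag single-flight sketch: for requests sharing a to-be-queried
    tag, only the first reaches the background database; it writes the
    result into the local cache, and the remaining requests are served
    from the cache after the lock is released."""

    def __init__(self, load_from_db):
        self.cache = {}
        self.load_from_db = load_from_db
        self.locks = {}
        self.guard = threading.Lock()

    def get(self, tag):
        with self.guard:
            lock = self.locks.setdefault(tag, threading.Lock())
        with lock:                      # later requests for the tag block here
            if tag not in self.cache:   # only the first request misses
                self.cache[tag] = self.load_from_db(tag)
            return self.cache[tag]      # the rest read the refreshed cache
```

However many requests share a tag, the database sees exactly one read, which is the point of the lock.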
S8, placing the query requests corresponding to the different data tags to be queried into a preset message queue, and sending them to the background database according to the message queue.
In the embodiment of the invention, the message queue is used for asynchronously processing the query requests corresponding to different data tags to be queried. And due to asynchronous processing, the pressure of reading data by the background database is effectively reduced, so that the occurrence of cache breakdown is reduced.
In detail, the sending, according to the message queue, the query requests corresponding to the different data tags to be queried to a background database for reading operation includes:
analyzing the query request to obtain time stamps corresponding to the query requests corresponding to the different data tags to be queried;
according to the time stamps, sequentially sending the query requests corresponding to the different data tags to be queried to the background database, and continuously monitoring whether the number of query requests in the background database exceeds a preset threshold;
when the number of query requests in the background database exceeds the preset threshold, stopping sending query requests from the message queue to the background database until the number falls below the preset threshold again.
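The message-queue step can be sketched as draining a timestamp-ordered queue while watching a load threshold. `inflight` is an assumed monitoring hook returning the number of requests currently in the background database; all names are illustrative:

```python
from collections import deque

def drain_queue(queue, send_to_db, inflight, threshold):
    """Send queued requests to the background database in timestamp
    order, pausing whenever the in-flight count reaches the preset
    threshold. Returns the requests sent and those still pending."""
    pending = deque(sorted(queue, key=lambda r: r["timestamp"]))
    sent = []
    while pending:
        if inflight() >= threshold:
            break                   # stop until database load falls again
        req = pending.popleft()
        send_to_db(req)
        sent.append(req)
    return sent, list(pending)
```

A real implementation would resume draining once `inflight()` drops below the threshold; the sketch returns the remainder so a caller can retry.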
The anti-cache breakdown method provided by the embodiment of the invention first locks query requests from the same source, which limits a single user from performing a large number of operations in a short time and reduces the resource occupation and back-end service pressure caused by repeated requests; secondly, it locks query requests under high concurrency, which reduces the database pressure caused by repeated requests and prevents cache breakdown caused by excessive database pressure; finally, it uses the message queue to process the highly concurrent, distinct query requests asynchronously, which reduces the pressure on the database and prevents it from being blocked.
FIG. 2 is a functional block diagram of the anti-cache breakdown device of the present invention.
The anti-cache breakdown device 100 of the present invention may be installed in an electronic apparatus. Depending on the implementation, the anti-cache breakdown device may include a query request receiving module 101, a homologous query request processing module 102, and a heterogeneous query request processing module 103. The modules, which may also be referred to as units, are a series of computer program segments that can be executed by a processor of the electronic device to perform fixed functions, and are stored in a memory of the electronic device.
In the present embodiment, the functions concerning the respective modules/units are as follows:
the query request receiving module 101 is configured to, when more than a preset number of query requests are received in a preset period of time, obtain a source of the query requests;
in the embodiment of the present invention, the preset time period may be, for example, 5s; and the preset number can be set according to the maximum access amount which can be born by the background database, such as 20 pieces.
Further, the query request may be a request action sent by a user to acquire related data from the database; for example, in the QQ message interface, a user may pull down the page to obtain the number of unread messages.
In the embodiment of the invention, the source of the query request refers to the IP address for sending the query request.
The homologous query request processing module 102 is configured to screen query requests with the same source from the query requests, obtain homologous query requests, select a first query request from the homologous query requests according to the receiving time of the query requests, send the first query request to a local cache, and lock the remaining query requests in the homologous query requests by using a preset anti-replay lock.
In the embodiment of the present invention, a homologous query request refers to a query request from the same IP address, that is, a query request sent repeatedly by the same IP address within the preset time period, for example, repeatedly pulling down the QQ message interface to obtain the number of unread messages. According to the receiving time of the query requests, the embodiment of the invention selects the first query request from the homologous query requests to send to the local cache, and locks the remaining homologous query requests with a preset anti-replay lock.
It should be appreciated that within the preset time, for example 5s, query requests from the same IP address are usually sent multiple times. Therefore, to reduce the pressure on the local cache and the background database, the embodiment of the present invention only sends the first of the homologous query requests to the local cache and locks the remaining homologous query requests with the preset anti-replay lock, thereby preventing the resource waste caused by obtaining the same query data multiple times.
In detail, the step of screening the query requests with the same sources from the query requests to obtain homologous query requests includes:
analyzing the query request to obtain an address index label corresponding to the query request;
extracting the IP address of the query request from the address index tag;
and selecting the query requests with the same IP addresses from the query requests to obtain homologous query requests.
Further, the parsing the query request to obtain an address index tag corresponding to the query request includes:
analyzing the query request by using a preset service connector to obtain a request header of the query request;
and extracting the uniform resource locator in the request header, and translating the uniform resource locator to obtain the address index label corresponding to the query request.
In the embodiment of the present invention, the uniform resource locator may be a network address of the user. The address index tag may be an index of a network address corresponding to the query request.
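A sketch of this parsing step, under the assumption that the request header exposes the uniform resource locator in a `url` field and that the address index tag is simply the (IP, path) pair translated from it:

```python
from urllib.parse import urlsplit


def address_index_tag(request_header):
    """Derive an address index tag from a request header: extract the
    uniform resource locator and 'translate' it into an (ip, path)
    index. The header layout and tag format are assumptions."""
    parts = urlsplit(request_header["url"])
    return {"ip": parts.hostname, "path": parts.path}


def group_by_source(requests):
    """Group query requests whose address index tags share an IP:
    groups with more than one request are homologous, the rest are
    heterogeneous (one request per distinct source)."""
    groups = {}
    for req in requests:
        tag = address_index_tag(req["header"])
        groups.setdefault(tag["ip"], []).append(req)
    homologous = {ip: g for ip, g in groups.items() if len(g) > 1}
    heterogeneous = [g[0] for g in groups.values() if len(g) == 1]
    return homologous, heterogeneous
```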
The heterogeneous query request processing module 103 is configured to screen out query requests with different sources from the query requests to obtain heterogeneous query requests and extract the data tags to be queried from the heterogeneous query requests; when query data corresponding to a data tag to be queried exists in the local cache, the query requests corresponding to that data tag are sent to the local cache; when no query data corresponding to the data tag to be queried exists in the local cache, the same data tags to be queried and different data tags to be queried are screened out from the data tags to be queried, the first query request among the query requests corresponding to the same data tags to be queried is sent to the background database according to the receiving time of those requests, the remaining query requests corresponding to the same data tags to be queried are locked with a preset cache refresh lock, and the query requests corresponding to the different data tags to be queried are placed into a preset message queue and sent to the background database according to the message queue.
In the embodiment of the present invention, heterogeneous query requests are query requests from different IP addresses, that is, query requests sent by a plurality of IP addresses within the preset time period; for example, in the WeChat interface, query requests from multiple users to acquire the same WeChat article within the same time period.
In the embodiment of the present invention, the data tag to be queried may be an index of the query data corresponding to the query request. For example, if the query data corresponding to a user's query request is an article, the data tag to be queried may be the ID of the article, or a character string such as its title, that can identify the query data.
In the embodiment of the present invention, the step of screening query requests with different sources from the query requests to obtain heterogeneous query requests includes:
analyzing the query request to obtain an address index label corresponding to the query request;
extracting the IP address of the query request from the address index tag;
and selecting the query requests with different IP addresses from the query requests to obtain heterogeneous query requests.
In the embodiment of the invention, the query data, namely the data required by the user, are generally stored in a local cache and a background database. The data in the local cache are generally time-limited, that is, automatically deleted after being stored for a period of time.
In detail, the determining whether the query data corresponding to the to-be-queried data tag exists in the local cache includes:
extracting index codes in the data labels to be queried;
inquiring whether the local cache contains query data corresponding to the index code according to the index code;
when the query data corresponding to the data tag to be queried exists in the local cache, the query request corresponding to the data tag to be queried is sent to the local cache.
In the embodiment of the invention, when the local cache has query data corresponding to the data tag to be queried, the corresponding query data can be directly obtained from the local cache, so that all query requests corresponding to the data tag to be queried are sent to the local cache.
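The lookup described above can be illustrated as follows; the `index_code` field of the data tag to be queried is an assumed name:

```python
class LocalCache:
    """Sketch of the local cache: query data is stored under the
    index code carried by its data tag to be queried."""

    def __init__(self, entries=None):
        self._store = dict(entries or {})  # index code -> query data

    def contains(self, index_code):
        return index_code in self._store

    def get(self, index_code):
        return self._store.get(index_code)


def route_request(cache, data_tag):
    """Return ('cache', data) on a hit, so every request for this tag
    can be served locally; return ('database', None) on a miss."""
    code = data_tag["index_code"]
    if cache.contains(code):
        return ("cache", cache.get(code))
    return ("database", None)
```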
When the query data corresponding to the data tag to be queried does not exist in the local cache, the same data tag to be queried and different data tags to be queried are screened out from the data tags to be queried.
In the embodiment of the invention, the data tags to be queried corresponding to the query requests for acquiring the same query data are used as the same data tags to be queried, and the data tags to be queried corresponding to the query requests for acquiring different query data are used as different data tags to be queried.
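Classifying the data tags to be queried into same and different tags amounts to counting how many query requests carry each tag; a minimal sketch:

```python
from collections import Counter


def classify_tags(data_tags):
    """Split the data tags to be queried into 'same' tags (carried by
    more than one query request, i.e. requesting the same query data)
    and 'different' tags (carried by exactly one request)."""
    counts = Counter(data_tags)
    same = [tag for tag, n in counts.items() if n > 1]
    different = [tag for tag, n in counts.items() if n == 1]
    return same, different
```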
In the embodiment of the invention, when the query data corresponding to the data tag to be queried does not exist in the local cache, the query request must acquire the query data from the background database. To prevent cache breakdown, the query requests are therefore classified, which reduces the pressure on the background database and the possibility of the cache breakdown phenomenon occurring.
In the embodiment of the invention, the cache refreshing lock is used for locking the query requests corresponding to the rest of the same data tags to be queried in the local cache so as to prevent the background database from simultaneously receiving a large number of query requests for acquiring the same query data, thereby causing cache breakdown.
In the embodiment of the invention, the background database can write the read data into the local cache, so that the corresponding query request corresponding to the rest data tags to be queried can directly obtain the corresponding query data from the local cache, thereby preventing the occurrence of cache breakdown and reducing the pressure of the background database.
In detail, according to the receiving time of the query requests corresponding to the same data tags to be queried, sending a first query request in the query requests corresponding to the same data tags to be queried to a background database, and locking the query requests corresponding to the rest of the same data tags to be queried by using a preset cache refreshing lock, including:
Acquiring a time stamp of the query request, selecting a first query request from query requests corresponding to the same data tags to be queried according to the time stamp, locking other query requests except the first query request in the query requests corresponding to the same data tags to be queried by using a preset cache refreshing lock, and sending the first query request to the background database for reading operation;
writing the query data read from the background database into a local cache;
and sending the remaining query requests, other than the first query request, among the query requests corresponding to the same data tag to be queried to the local cache.
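The three steps above can be sketched as follows. The cache refresh lock is modelled implicitly: only the earliest request performs the database read, and the held-back requests are then answered from the refreshed cache. The field names (`id`, `tag`, `timestamp`) and the `read_db` callable are illustrative assumptions.

```python
def refresh_through_lock(requests, read_db, cache):
    """For query requests sharing one data tag to be queried: the
    earliest request (by timestamp) reads the background database,
    the result is written into the local cache, and every request,
    including the previously locked ones, is served from the cache."""
    requests = sorted(requests, key=lambda r: r["timestamp"])
    first = requests[0]

    data = read_db(first["tag"])  # only the first request hits the database
    cache[first["tag"]] = data    # write the read data into the local cache

    # the locked requests are released and answered from the cache
    return [(r["id"], cache[r["tag"]]) for r in requests]
```

Because only one database read occurs per data tag, a burst of requests for the same expired cache entry cannot break through to the background database.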
In the embodiment of the invention, the message queue is used to process the query requests corresponding to the different data tags to be queried asynchronously. This asynchronous processing effectively reduces the read pressure on the background database and thereby reduces the occurrence of cache breakdown.
In detail, the sending, according to the message queue, the query requests corresponding to the different data tags to be queried to a background database for reading operation includes:
analyzing the query request to obtain time stamps corresponding to the query requests corresponding to the different data tags to be queried;
According to the time stamps, sequentially sending the query requests corresponding to the different data tags to be queried to the background database, and continuously monitoring whether the number of query requests in the background database exceeds a preset threshold;
and stopping sending the query request from the message queue to the background database when the query request in the background database exceeds a preset threshold value until the query request for reading the query data in the background database is smaller than the preset threshold value again.
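A sketch of this throttled draining of the message queue; the `in_flight` callable, which reports how many query requests the background database is currently handling, is an assumption about how the monitoring is exposed:

```python
from collections import deque


def drain_queue(queue, send_to_db, in_flight, threshold):
    """Drain the message queue in arrival (timestamp) order, pausing
    whenever the number of query requests already in the background
    database reaches the preset threshold."""
    sent = []
    while queue:
        if in_flight() >= threshold:
            break  # stop sending until the database drops below threshold
        req = queue.popleft()
        send_to_db(req)
        sent.append(req)
    return sent
```

In practice the loop would be re-entered once the monitor reports that the count has fallen back below the threshold; the sketch shows a single draining pass.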
Fig. 3 is a schematic structural diagram of an electronic device for implementing the anti-cache breakdown method according to the present invention.
The electronic device may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program, such as a cache breakdown prevention program, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments the memory 11 may be an internal storage unit of the electronic device, such as a hard disk of the electronic device. In other embodiments the memory 11 may also be an external storage device of the electronic device, such as a plug-in removable hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash memory Card (Flash Card) provided on the electronic device. Further, the memory 11 may include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various data, such as the code of the anti-cache breakdown program, but also to temporarily store data that has been output or is to be output.
In some embodiments the processor 10 may be composed of integrated circuits, for example a single packaged integrated circuit, or of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing Unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 10 is the Control Unit of the electronic device: it connects the various components of the entire electronic device using various interfaces and lines, and executes the functions of the electronic device and processes data by running or executing the programs or modules stored in the memory 11 (e.g., the anti-cache breakdown program) and calling the data stored in the memory 11.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. The communication bus 12 is arranged to enable connection and communication between the memory 11, the at least one processor 10, and other components. For ease of illustration only one bold line is drawn in the figure, but this does not mean that there is only one bus or only one type of bus.
Fig. 3 shows only an electronic device with components, and it will be understood by those skilled in the art that the structure shown in fig. 3 is not limiting of the electronic device and may include fewer or more components than shown, or may combine certain components, or a different arrangement of components.
For example, although not shown, the electronic device may further include a power source (such as a battery) for supplying power to the respective components, and preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device may further include various sensors, bluetooth modules, wi-Fi modules, etc., which are not described herein.
Optionally, the communication interface 13 may comprise a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), typically used to establish a communication connection between the electronic device and other electronic devices.
Optionally, the communication interface 13 may further comprise a user interface, which may be a display or an input unit such as a keyboard (Keyboard), and may be a standard wired interface or a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display may also be referred to as a display screen or display unit, and is used to display the information processed in the electronic device and a visual user interface.
It should be understood that the embodiments described are for illustration only, and the scope of the patent application is not limited to this configuration.
The anti-cache breakdown program stored by the memory 11 in the electronic device is a combination of a plurality of computer programs, which when run in the processor 10, can implement:
when more than a preset number of query requests are received within a preset time period, acquiring the sources of the query requests;
screening query requests with the same sources from the query requests to obtain homologous query requests, selecting a first query request from the homologous query requests to send the first query request to a local cache according to the receiving time of the query requests, and locking the rest query requests in the homologous query requests by using a preset anti-replay lock;
Screening query requests with different sources from the query requests to obtain heterogeneous query requests, and extracting the data tags to be queried in the heterogeneous query requests;
judging whether query data corresponding to the data tag to be queried exists in a local cache;
if the query data corresponding to the data tag to be queried exists in the local cache, sending the query request corresponding to the data tag to be queried to the local cache;
if the query data corresponding to the data tag to be queried does not exist in the local cache, screening the same data tag to be queried and different data tags to be queried from the data tags to be queried;
according to the receiving time of the query requests corresponding to the same data tags to be queried, a first query request in the query requests corresponding to the same data tags to be queried is sent to a background database, and the query requests corresponding to the rest of the same data tags to be queried are locked by using a preset cache refreshing lock;
and placing the query requests corresponding to the different data tags to be queried into a preset message queue, and sending the query requests corresponding to the different data tags to be queried to a background database according to the message queue.
In particular, the specific implementation method of the processor 10 on the computer program may refer to the description of the relevant steps in the corresponding embodiment of fig. 1, which is not repeated herein.
Further, if the integrated modules/units of the electronic device are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer readable storage medium. The computer readable medium may be non-volatile or volatile, and may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, and a Read-Only Memory (ROM).
Embodiments of the present invention may also provide a computer readable storage medium storing a computer program which, when executed by a processor of an electronic device, may implement:
when more than a preset number of query requests are received within a preset time period, acquiring the sources of the query requests;
screening query requests with the same sources from the query requests to obtain homologous query requests, selecting a first query request from the homologous query requests to send the first query request to a local cache according to the receiving time of the query requests, and locking the rest query requests in the homologous query requests by using a preset anti-replay lock;
Screening query requests with different sources from the query requests to obtain heterogeneous query requests, and extracting the data tags to be queried in the heterogeneous query requests;
judging whether query data corresponding to the data tag to be queried exists in a local cache;
if the query data corresponding to the data tag to be queried exists in the local cache, sending the query request corresponding to the data tag to be queried to the local cache;
if the query data corresponding to the data tag to be queried does not exist in the local cache, screening the same data tag to be queried and different data tags to be queried from the data tags to be queried;
according to the receiving time of the query requests corresponding to the same data tags to be queried, a first query request in the query requests corresponding to the same data tags to be queried is sent to a background database, and the query requests corresponding to the rest of the same data tags to be queried are locked by using a preset cache refreshing lock;
and placing the query requests corresponding to the different data tags to be queried into a preset message queue, and sending the query requests corresponding to the different data tags to be queried to a background database according to the message queue.
Further, the computer-usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from the use of blockchain nodes, and the like.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A Blockchain is essentially a decentralised database: a chain of data blocks generated and linked by cryptographic means, each block containing a batch of network transaction information used to verify the validity of that information (anti-counterfeiting) and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names, not any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (9)

1. The method for preventing cache breakdown is characterized by comprising the following steps:
when more than a preset number of query requests are received within a preset time period, acquiring the sources of the query requests;
screening query requests with the same sources from the query requests to obtain homologous query requests, selecting a first query request from the homologous query requests to send the first query request to a local cache according to the receiving time of the query requests, and locking the rest query requests in the homologous query requests by using a preset anti-replay lock;
Screening query requests with different sources from the query requests to obtain a heterogeneous query request, and extracting a data tag to be queried in the heterogeneous query request;
judging whether query data corresponding to the data tag to be queried exists in a local cache;
if the query data corresponding to the data tag to be queried exists in the local cache, sending the query request corresponding to the data tag to be queried to the local cache;
if the query data corresponding to the data tag to be queried does not exist in the local cache, screening the same data tag to be queried and different data tags to be queried from the data tags to be queried;
according to the receiving time of the query requests corresponding to the same data tags to be queried, a first query request in the query requests corresponding to the same data tags to be queried is sent to a background database, and the query requests corresponding to the rest of the same data tags to be queried are locked by using a preset cache refreshing lock;
the query requests corresponding to the different data tags to be queried are put into a preset message queue, and the query requests corresponding to the different data tags to be queried are sent to a background database according to the message queue;
The step of sending a first query request in the query requests corresponding to the same data tags to be queried to a background database according to the receiving time of the query requests corresponding to the same data tags to be queried, and locking the query requests corresponding to the rest of the same data tags to be queried by using a preset cache refreshing lock comprises the following steps: acquiring a time stamp of the query request, selecting a first query request from query requests corresponding to the same data tags to be queried according to the time stamp, locking other query requests except the first query request in the query requests corresponding to the same data tags to be queried by using a preset cache refreshing lock, and sending the first query request to the background database for reading operation; writing the query data read from the background database into a local cache; and sending the rest inquiry requests except the first inquiry request in the inquiry requests corresponding to the same data label to be inquired to the local cache.
2. The method for preventing cache breakdown according to claim 1, wherein the step of screening out the query requests with the same source from the query requests to obtain the homologous query requests includes:
Analyzing the query request to obtain an address index label corresponding to the query request;
extracting the IP address of the query request from the address index tag;
and selecting the query requests with the same IP addresses from the query requests to obtain homologous query requests.
3. The method of preventing cache breakdown according to claim 2, wherein the parsing the query request to obtain the address index tag corresponding to the query request includes:
analyzing the query request by using a preset service connector to obtain a request header of the query request;
and extracting the uniform resource locator in the request header, and translating the uniform resource locator to obtain the address index label corresponding to the query request.
4. The method for preventing cache breakdown according to claim 1, wherein the step of screening out query requests with different sources from the query requests to obtain a heterogeneous query request includes:
analyzing the query request to obtain an address index label corresponding to the query request;
extracting the IP address of the query request from the address index tag;
and selecting the query requests with different IP addresses from the query requests to obtain heterogeneous query requests.
5. The method of claim 1, wherein the determining whether the query data corresponding to the data tag to be queried exists in the local cache comprises:
extracting index codes in the data labels to be queried;
inquiring whether the local cache contains query data corresponding to the index code according to the index code;
when the query data corresponding to the data tag to be queried exists in the local cache, sending the query request corresponding to the data tag to be queried to the local cache;
when the query data corresponding to the data tag to be queried does not exist in the local cache, the same data tag to be queried and different data tags to be queried are screened out from the data tags to be queried.
6. The method for preventing cache breakdown according to claim 1, wherein the sending, according to the message queue, the query request corresponding to the different data tags to be queried to the background database includes:
analyzing the query request to obtain time stamps corresponding to the query requests corresponding to the different data tags to be queried;
according to the time stamps, sequentially sending the query requests corresponding to the different data tags to be queried to the background database, and continuously monitoring whether the number of query requests in the background database exceeds a preset threshold;
And stopping sending the query request from the message queue to the background database when the query request in the background database exceeds a preset threshold value until the query request for reading the query data in the background database is smaller than the preset threshold value again.
7. A buffer breakdown prevention apparatus for implementing the buffer breakdown prevention method according to any one of claims 1 to 6, comprising:
the query request receiving module is used for acquiring the sources of the query requests when the query requests exceeding the preset number are received within the preset time period;
the homologous query request processing module is used for screening query requests with the same sources from the query requests to obtain homologous query requests, selecting a first query request from the homologous query requests to send the first query request to a local cache according to the receiving time of the query requests, and locking the rest query requests in the homologous query requests by utilizing a preset anti-replay lock;
the heterogeneous query request processing module is used for screening query requests with different sources from the query requests to obtain heterogeneous query requests, extracting the data tags to be queried in the heterogeneous query requests, sending the query requests corresponding to the data tags to be queried to a local cache when query data corresponding to the data tags to be queried exist in the local cache, screening the same data tags to be queried and different data tags to be queried from the data tags to be queried when the query data corresponding to the data tags to be queried do not exist in the local cache, sending a first query request among the query requests corresponding to the same data tags to be queried to a background database according to the receiving time of the query requests corresponding to the same data tags to be queried, locking the query requests corresponding to the rest of the same data tags to be queried by utilizing a preset cache refreshing lock, placing the query requests corresponding to the different data tags to be queried into a preset message queue, and sending the query requests corresponding to the different data tags to be queried to the background database according to the message queue.
8. An electronic device, the electronic device comprising:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores computer program instructions executable by the at least one processor to enable the at least one processor to perform the anti-cache breakdown method of any one of claims 1 to 6.
9. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the anti-cache breakdown method according to any one of claims 1 to 6.
CN202111533275.8A 2021-12-15 2021-12-15 Anti-cache breakdown method, device, equipment and readable storage medium Active CN114201466B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111533275.8A CN114201466B (en) 2021-12-15 2021-12-15 Anti-cache breakdown method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN114201466A CN114201466A (en) 2022-03-18
CN114201466B true CN114201466B (en) 2024-02-23

Family

ID=80653869

Country Status (1)

Country Link
CN (1) CN114201466B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114661718B (en) * 2022-03-28 2023-04-25 北京海量数据技术股份有限公司 Method and system for online creation of local partition index under Opengauss platform

Citations (4)

Publication number Priority date Publication date Assignee Title
CN108132958A (en) * 2016-12-01 2018-06-08 阿里巴巴集团控股有限公司 Multi-level cache data storage, query, scheduling and processing method and device
CN110928904A (en) * 2019-10-31 2020-03-27 北京浪潮数据技术有限公司 Data query method and device and related components
CN111049882A (en) * 2019-11-11 2020-04-21 支付宝(杭州)信息技术有限公司 Cache state processing system, method, device and computer readable storage medium
CN111339148A (en) * 2020-03-13 2020-06-26 深圳前海环融联易信息科技服务有限公司 Method and device for preventing cache breakdown service, computer equipment and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10817524B2 (en) * 2017-04-10 2020-10-27 Servicenow, Inc. Systems and methods for querying time series data

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN108132958A (en) * 2016-12-01 2018-06-08 阿里巴巴集团控股有限公司 Multi-level cache data storage, query, scheduling and processing method and device
CN110928904A (en) * 2019-10-31 2020-03-27 北京浪潮数据技术有限公司 Data query method and device and related components
CN111049882A (en) * 2019-11-11 2020-04-21 支付宝(杭州)信息技术有限公司 Cache state processing system, method, device and computer readable storage medium
CN111339148A (en) * 2020-03-13 2020-06-26 深圳前海环融联易信息科技服务有限公司 Method and device for preventing cache breakdown service, computer equipment and storage medium

Non-Patent Citations (1)

Title
Research on the Hibernate query cache mechanism based on semantic caching technology; Xu Kaisheng; Wang Rongcun; 交通与计算机 (Computer and Communications); 2006-08-30 (Issue 04); full text *

Also Published As

Publication number Publication date
CN114201466A (en) 2022-03-18

Similar Documents

Publication Publication Date Title
CN112541745B (en) User behavior data analysis method and device, electronic equipment and readable storage medium
CN112653760B (en) Cross-server file transmission method and device, electronic equipment and storage medium
CN112329419A (en) Document editing method, device, server, terminal and storage medium
CN112015815B (en) Data synchronization method, device and computer readable storage medium
CN112671921A (en) Data transmission method and device, electronic equipment and readable storage medium
CN112702228B (en) Service flow limit response method, device, electronic equipment and readable storage medium
CN112084486A (en) User information verification method and device, electronic equipment and storage medium
CN113688923A (en) Intelligent order abnormity detection method and device, electronic equipment and storage medium
CN113868528A (en) Information recommendation method and device, electronic equipment and readable storage medium
CN113419856A (en) Intelligent current limiting method and device, electronic equipment and storage medium
CN114201466B (en) Anti-cache breakdown method, device, equipment and readable storage medium
CN114640707A (en) Message asynchronous processing method and device, electronic equipment and storage medium
CN114491646A (en) Data desensitization method and device, electronic equipment and storage medium
CN112464619B (en) Big data processing method, device and equipment and computer readable storage medium
CN111858604B (en) Data storage method and device, electronic equipment and storage medium
CN115002062B (en) Message processing method, device, equipment and readable storage medium
CN112540839B (en) Information changing method, device, electronic equipment and storage medium
CN114448930A (en) Short address generation method and device, electronic equipment and computer readable storage medium
CN114611046A (en) Data loading method, device, equipment and medium
CN114911479A (en) Interface generation method, device, equipment and storage medium based on configuration
CN113065086A (en) Webpage text extraction method and device, electronic equipment and storage medium
CN113364848A (en) File caching method and device, electronic equipment and storage medium
CN115002100B (en) File transmission method and device, electronic equipment and storage medium
CN115174698B (en) Market data decoding method, device, equipment and medium based on table entry index
CN113810414B (en) Mobile client domain name filtering method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant