CN114201466A - Method, device and equipment for preventing cache breakdown and readable storage medium


Info

Publication number
CN114201466A
CN114201466A (application CN202111533275.8A); granted as CN114201466B
Authority
CN
China
Prior art keywords: query, query requests, queried, data, requests
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111533275.8A
Other languages
Chinese (zh)
Other versions
CN114201466B (en)
Inventor
庄志辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202111533275.8A priority Critical patent/CN114201466B/en
Publication of CN114201466A publication Critical patent/CN114201466A/en
Application granted granted Critical
Publication of CN114201466B publication Critical patent/CN114201466B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2455 Query execution
    • G06F 16/24552 Database cache management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/17 Details of further file system functions
    • G06F 16/176 Support for shared access to files; File sharing support
    • G06F 16/1767 Concurrency control, e.g. optimistic or pessimistic approaches
    • G06F 16/1774 Locking methods, e.g. locking methods for file systems allowing shared and concurrent access to files
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/22 Indexing; Data structures therefor; Storage structures
    • G06F 16/2228 Indexing structures
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to the field of data storage and discloses a method for preventing cache breakdown, which comprises the following steps: when query requests are received, screening homologous query requests out of the query requests, sending the first of the homologous query requests to a local cache, and locking the remaining homologous query requests; screening heterogeneous query requests out of the query requests, and judging whether query data corresponding to the heterogeneous query requests exist in the local cache; and when the local cache holds no query data corresponding to the heterogeneous query requests, sending the first query request among those corresponding to the same to-be-queried data tag to a background database, and locking the remaining query requests corresponding to that tag. The invention also provides a device, equipment and a storage medium for preventing cache breakdown. The method and the device can reduce the probability of cache breakdown of the background database.

Description

Method, device and equipment for preventing cache breakdown and readable storage medium
Technical Field
The invention relates to the field of data storage, in particular to a method and a device for preventing cache breakdown, electronic equipment and a readable storage medium.
Background
Under normal conditions, when a user fetches data, the IO unit first checks whether the data the user requires exists in the cache, and only when it does not does the IO unit go to the database to fetch it.
When many users simultaneously request data that is in the database but not in the cache, all of the concurrent requests miss the cache and hit the database at the same moment, so the load on the database spikes instantly; this is cache breakdown. The database pressure caused by cache breakdown slows data retrieval for users and can even bring the server down.
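The read-through flow described above can be sketched in Python (a minimal illustration, not part of the patent; the `get_data` helper, the TTL value, and the in-memory stand-ins for the cache and database are assumptions):

```python
import time

cache = {}                                  # local cache: key -> (value, expiry)
database = {"article:1": "article body"}    # stands in for the background database
TTL = 60                                    # seconds a cached entry stays valid

def get_data(key):
    """Read-through lookup: check the local cache first, then the database."""
    entry = cache.get(key)
    if entry is not None and entry[1] > time.time():
        return entry[0]                       # cache hit
    # Cache miss: every concurrent caller reaching this point queries the
    # database at the same moment -- this is the cache-breakdown scenario.
    value = database[key]
    cache[key] = (value, time.time() + TTL)   # repopulate the cache
    return value
```

When many concurrent callers arrive on a missing or expired key, each one falls through to the database, which is exactly the load spike the patent aims to prevent.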
Disclosure of Invention
The invention provides a method and a device for preventing cache breakdown, electronic equipment and a computer readable storage medium, and aims to reduce the probability of cache breakdown of a background database.
In order to achieve the above object, the present invention provides a method for preventing cache breakdown, including:
when more than a preset number of query requests are received within a preset time period, acquiring the sources of the query requests;
screening query requests with the same source from the query requests to obtain homologous query requests, selecting a first query request from the homologous query requests according to the receiving time of the query requests, sending the first query request to a local cache, and locking the rest query requests in the homologous query requests by using a preset anti-replay lock;
screening query requests with different sources from the query requests to obtain heterogeneous query requests, and extracting tags to be queried in the heterogeneous query requests;
judging whether query data corresponding to the data tag to be queried exists in a local cache or not;
if the query data corresponding to the data tag to be queried exists in the local cache, sending the query request corresponding to the data tag to be queried to the local cache;
if the local cache does not have query data corresponding to the data tags to be queried, screening the same data tags to be queried and different data tags to be queried from the data tags to be queried;
according to the receiving time of the query requests corresponding to the same data tags to be queried, sending a first query request in the query requests corresponding to the same data tags to be queried to a background database, and locking the query requests corresponding to the remaining same data tags to be queried by using a preset cache refresh lock;
and putting the query requests corresponding to the different data tags to be queried into a preset message queue, and sending them to the background database according to the message queue.
Optionally, the screening out query requests with the same source from the query requests to obtain homologous query requests includes:
analyzing the query request to obtain an address index tag corresponding to the query request;
extracting the IP address of the query request from the address index tag;
and selecting the query requests with the same IP addresses from the query requests to obtain homologous query requests.
Optionally, the analyzing the query request to obtain an address index tag corresponding to the query request includes:
analyzing the query request by using a preset service connector to obtain a request head of the query request;
and extracting the uniform resource locator in the request header, and translating the uniform resource locator to obtain an address index label corresponding to the query request.
Optionally, screening out query requests with different sources from the query requests to obtain a heterogeneous query request, including:
analyzing the query request to obtain an address index tag corresponding to the query request;
extracting the IP address of the query request from the address index tag;
and selecting the query requests with different IP addresses from the query requests to obtain different source query requests.
Optionally, the sending, according to the receiving time of the query requests corresponding to the same data tag to be queried, a first query request of the query requests corresponding to the same data tag to be queried to a background database, and locking the remaining query requests corresponding to the same data tag to be queried by using a preset cache refresh lock, includes:
acquiring a timestamp of the query request, selecting a first query request from the query requests corresponding to the same data tag to be queried according to the timestamp, locking the rest query requests except the first query request in the query requests corresponding to the same data tag to be queried by using a preset cache refresh lock, and sending the first query request to the background database for reading;
writing query data read from the background database into a local cache;
and sending the rest query requests except the first query request in the query requests corresponding to the same data tag to be queried to the local cache.
Optionally, the determining whether query data corresponding to the to-be-queried data tag exists in the local cache includes:
extracting the index code in the to-be-queried data tag;
querying, according to the index code, whether the local cache contains query data corresponding to the index code;
when the query data corresponding to the data tag to be queried exists in the local cache, sending the query request corresponding to the data tag to be queried to the local cache;
and when the local cache does not have the query data corresponding to the data tags to be queried, screening the same data tags to be queried and different data tags to be queried from the data tags to be queried.
Optionally, the sending, according to the message queue, the query requests corresponding to the different data tags to be queried to a background database for reading includes:
analyzing the query request to obtain timestamps corresponding to the query requests corresponding to different data tags to be queried;
according to the time stamp, sequentially sending the query requests corresponding to the different data tags to be queried to the background database, and constantly monitoring whether the query requests in the background database exceed a preset threshold value;
and when the number of query requests in the background database exceeds the preset threshold, stopping sending query requests from the message queue to the background database until the number of query requests reading query data in the background database falls below the preset threshold again.
In order to solve the above problem, the present invention further provides a device for preventing cache breakdown, including:
a query request receiving module, configured to acquire the sources of the query requests when more than a preset number of query requests are received within a preset time period;
a homologous query request processing module, configured to screen query requests with the same source out of the query requests to obtain homologous query requests, select the first query request from the homologous query requests according to their receiving times, send it to a local cache, and lock the remaining homologous query requests with a preset anti-replay lock;
a heterogeneous query request processing module, configured to screen query requests with different sources out of the query requests to obtain heterogeneous query requests; extract the to-be-queried data tags in the heterogeneous query requests; send the query requests corresponding to a to-be-queried data tag to the local cache when query data corresponding to that tag exists in the local cache; screen the same to-be-queried data tags and different to-be-queried data tags out of the to-be-queried data tags when the local cache holds no corresponding query data; send, according to the receiving times of the query requests corresponding to the same to-be-queried data tags, the first of those query requests to the background database and lock the remaining ones with a preset cache refresh lock; and put the query requests corresponding to the different to-be-queried data tags into a preset message queue and send them to the background database according to the message queue.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one computer program; and
a processor, which executes the computer program stored in the memory to implement the cache breakdown prevention method.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, in which at least one computer program is stored, and the at least one computer program is executed by a processor in an electronic device to implement the above-mentioned cache breakdown prevention method.
According to the method and device for preventing cache breakdown, the electronic equipment and the readable storage medium provided by the embodiments of the invention, query requests with the same source are locked first, which prevents a user from performing a large number of operations in a short time and reduces the resource occupation and back-end service pressure caused by repeated requests; secondly, locking query requests under high concurrency reduces the database pressure caused by repeated requests, thereby preventing cache breakdown caused by excessive database pressure; finally, the message queue processes highly concurrent, different query requests asynchronously, which reduces the pressure on the database and prevents the database from being blocked.
Drawings
Fig. 1 is a schematic flow chart of a method for preventing cache breakdown according to an embodiment of the present invention;
fig. 2 is a schematic block diagram of a device for preventing cache breakdown according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an internal structure of an electronic device implementing a method for preventing cache breakdown according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides a cache breakdown prevention method. The execution subject of the cache breakdown prevention method includes, but is not limited to, at least one of the electronic devices, such as a server or a terminal, that can be configured to execute the method provided by the embodiments of the present application. In other words, the cache breakdown prevention method may be performed by software or hardware installed in the terminal device or the server device, and the software may be a blockchain platform. The server may be an independent server, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), big data, and artificial intelligence platforms.
Referring to fig. 1, which is a schematic flow diagram of a method for preventing cache breakdown according to an embodiment of the present invention, in an embodiment of the present invention, the method for preventing cache breakdown includes:
s1, when receiving the inquiry requests with more than the preset number in the preset time period, obtaining the sources of the inquiry requests.
In this embodiment of the present invention, the preset time period may be, for example, 5 s; and the preset number can be set according to the maximum access amount which can be borne by the background database, such as 20.
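This trigger condition can be illustrated with a sliding-window counter (the class, its parameter names, and the default values are assumptions for illustration, mirroring the example values above):

```python
import time
from collections import deque

class WindowCounter:
    """Report when more than `limit` requests arrive within `window` seconds."""

    def __init__(self, window=5.0, limit=20):
        self.window = window      # preset time period, e.g. 5 seconds
        self.limit = limit        # preset number, e.g. 20 requests
        self.times = deque()      # receive times still inside the window

    def over_limit(self, now=None):
        """Record one request and report whether the threshold is exceeded."""
        now = time.time() if now is None else now
        self.times.append(now)
        while self.times and self.times[0] <= now - self.window:
            self.times.popleft()  # drop requests outside the window
        return len(self.times) > self.limit
```

Only when `over_limit` returns true would the source-screening steps below be applied to the accumulated requests.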
Further, the query request may be a request action sent by the user to obtain related data from the database, for example, in the QQ message interface, the user may obtain the number of messages to be read by dragging the page to slide down.
In the embodiment of the present invention, the source of the query request refers to an IP address for sending the query request.
S2, screening out query requests with the same source from the query requests to obtain homologous query requests, selecting a first query request from the homologous query requests according to the receiving time of the query requests, sending the first query request to a local cache, and locking the rest query requests in the homologous query requests by using a preset anti-replay lock.
In the embodiment of the present invention, a homologous query request refers to a query request from the same IP address, that is, a query request sent repeatedly by the same IP address within the preset time period, for example, dragging the page down multiple times in the QQ message interface to fetch the number of unread messages. According to the receiving times of the query requests, the first query request is selected from the homologous query requests and sent to the local cache, and the remaining homologous query requests are locked with the preset anti-replay lock.
It should be understood that within the preset time, for example within 5 seconds, query requests from the same IP address are usually sent multiple times. Therefore, to reduce the pressure on the local cache and the background database, the embodiment of the present invention selects only the first query request from the homologous query requests and sends it to the local cache, and locks the remaining homologous query requests with the preset anti-replay lock, thereby preventing the waste of resources caused by fetching the same query data multiple times.
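A minimal sketch of this screening step (the request tuples and their field order are hypothetical):

```python
from collections import defaultdict

def split_homologous(requests):
    """For each source IP keep only the earliest request; the rest are the
    requests the anti-replay lock would hold back."""
    by_ip = defaultdict(list)
    for req in requests:                      # req = (ip, receive_time, data_tag)
        by_ip[req[0]].append(req)
    forwarded, locked = [], []
    for reqs in by_ip.values():
        reqs.sort(key=lambda r: r[1])         # order by receiving time
        forwarded.append(reqs[0])             # first request goes to the cache
        locked.extend(reqs[1:])               # remainder is locked
    return forwarded, locked
```

The `forwarded` list, one request per distinct IP, corresponds to what the later heterogeneous-request steps operate on.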
In detail, the screening out the query requests with the same source from the query requests to obtain the homologous query requests includes:
analyzing the query request to obtain an address index tag corresponding to the query request;
extracting the IP address of the query request from the address index tag;
and selecting the query requests with the same IP addresses from the query requests to obtain homologous query requests.
Further, the analyzing the query request to obtain an address index tag corresponding to the query request includes:
analyzing the query request by using a preset service connector to obtain a request head of the query request;
and extracting the uniform resource locator in the request header, and translating the uniform resource locator to obtain an address index label corresponding to the query request.
In this embodiment of the present invention, the uniform resource locator may be a network address of the user. The address index tag may be an index of a network address corresponding to the query request.
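As a sketch of building such a tag (the request dict shape, the `client_ip` field, and the tag fields are assumptions for illustration):

```python
from urllib.parse import urlparse

def build_address_index_tag(request):
    """Translate the request's uniform resource locator into an address index
    tag from which the source IP can later be extracted."""
    parsed = urlparse(request["url"])
    return {
        "host": parsed.hostname,
        "path": parsed.path,
        "ip": request["client_ip"],  # source IP used for homologous screening
    }

tag = build_address_index_tag(
    {"url": "https://example.com/articles/42", "client_ip": "10.0.0.1"}
)
```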
S3, screening out query requests with different sources from the query requests to obtain heterogeneous query requests, and extracting the labels to be queried in the heterogeneous query requests.
In the embodiment of the present invention, heterogeneous query requests refer to query requests from different IP addresses, that is, query requests sent by multiple IP addresses within the preset time period, for example, query requests from multiple users wanting to read the same WeChat article during the same time period.
In the embodiment of the present invention, the to-be-queried data tag may be an index of query data corresponding to the query request. For example, if the query data corresponding to the query request of the user is an article, the tag of the data to be queried may be an ID of the article, or a character such as a title that can identify the query data.
In the embodiment of the present invention, the screening out query requests with different sources from the query requests to obtain different source query requests includes:
analyzing the query request to obtain an address index tag corresponding to the query request;
extracting the IP address of the query request from the address index tag;
and selecting the query requests with different IP addresses from the query requests to obtain different source query requests.
And S4, judging whether the local cache has the query data corresponding to the data label to be queried.
In the embodiment of the invention, the query data, that is, the data required by the user, is generally stored both in the local cache and in the background database. The data in the local cache is generally time-limited, that is, it is automatically deleted after being stored for a period of time.
In detail, the determining whether query data corresponding to the to-be-queried data tag exists in the local cache includes:
extracting the index code in the to-be-queried data tag;
querying, according to the index code, whether the local cache contains query data corresponding to the index code;
and when the query data corresponding to the data tag to be queried exists in the local cache, the step S5 is performed, and the query request corresponding to the data tag to be queried is sent to the local cache.
In the embodiment of the present invention, when the query data corresponding to the to-be-queried data tag exists in the local cache, the corresponding query data can be directly obtained from the local cache, so that all query requests corresponding to the to-be-queried data tag are sent to the local cache.
And when the query data corresponding to the data tag to be queried does not exist in the local cache, the step S6 is performed, and the same data tag to be queried and different data tags to be queried are screened out from the data tags to be queried.
In the embodiment of the invention, the to-be-queried data tags of query requests that fetch the same query data are treated as the same to-be-queried data tags, and the to-be-queried data tags of query requests that fetch different query data are treated as different to-be-queried data tags.
In the embodiment of the invention, when the query data corresponding to the to-be-queried data tag does not exist in the local cache, the query request is indicated to need to acquire the query data from the background database, and the query request needs to be classified in order to prevent cache breakdown, so that the pressure of the background database is reduced, and the possibility of cache breakdown is reduced.
And S7, sending a first query request in the query requests corresponding to the same data tag to be queried to a background database according to the receiving time of the query requests corresponding to the same data tag to be queried, and locking the remaining query requests corresponding to the same data tag to be queried by using a preset cache refresh lock.
In the embodiment of the invention, the cache refreshing lock has the function of locking the query requests corresponding to the remaining same data tags to be queried in the local cache so as to prevent the background database from simultaneously receiving a large number of query requests for acquiring the same query data, thereby causing cache breakdown.
In the embodiment of the invention, the background database can write the read data into the local cache, so that the query requests corresponding to the remaining same data tags to be queried can directly acquire corresponding query data from the local cache, the cache breakdown phenomenon is prevented, and the pressure of the background database is reduced.
In detail, the S7 includes:
acquiring a timestamp of the query request, selecting a first query request from the query requests corresponding to the same data tag to be queried according to the timestamp, locking the rest query requests except the first query request in the query requests corresponding to the same data tag to be queried by using a preset cache refresh lock, and sending the first query request to the background database for reading;
writing query data read from the background database into a local cache;
and sending the rest query requests except the first query request in the query requests corresponding to the same data tag to be queried to the local cache.
S8, putting the query requests corresponding to the different data tags to be queried into a preset message queue, and sending them to a background database according to the message queue.
In the embodiment of the invention, the message queue is used for asynchronously processing the query requests corresponding to different data tags to be queried. Due to the asynchronous processing, the pressure of reading data of the background database is effectively reduced, and the cache breakdown phenomenon is reduced.
In detail, the sending, according to the message queue, the query requests corresponding to the different data tags to be queried to a background database for reading includes:
analyzing the query request to obtain timestamps corresponding to the query requests corresponding to different data tags to be queried;
according to the time stamp, sequentially sending the query requests corresponding to the different data tags to be queried to the background database, and constantly monitoring whether the query requests in the background database exceed a preset threshold value;
and when the number of query requests in the background database exceeds the preset threshold, stopping sending query requests from the message queue to the background database until the number of query requests reading query data in the background database falls below the preset threshold again.
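A sketch of this throttled draining (the threshold value, the `send` callback, and the load probe are assumptions):

```python
from collections import deque

PRESET_THRESHOLD = 3   # assumed limit on concurrent background queries

def drain_queue(requests, send, backend_load):
    """Forward queued requests to the background database in timestamp order,
    pausing as soon as the database's current load reaches the threshold."""
    queue = deque(sorted(requests, key=lambda r: r["timestamp"]))
    while queue and backend_load() < PRESET_THRESHOLD:
        send(queue.popleft())
    return list(queue)             # requests still waiting in the message queue
```

In practice the remaining requests would stay in the message queue, and draining would resume once the monitored load falls below the threshold again.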
According to the cache breakdown prevention method provided by the embodiment of the invention, query requests with the same source are locked first, which prevents a user from performing a large number of operations in a short time and reduces the resource occupation and back-end service pressure caused by repeated requests; secondly, locking query requests under high concurrency reduces the database pressure caused by repeated requests, thereby preventing cache breakdown caused by excessive database pressure; finally, the message queue processes highly concurrent, different query requests asynchronously, which reduces the pressure on the database and prevents the database from being blocked.
Fig. 2 is a functional block diagram of the apparatus for preventing cache breakdown according to the present invention.
The device 100 for preventing cache breakdown according to the present invention may be installed in an electronic device. According to the implemented functions, the device may include a query request receiving module 101, a homologous query request processing module 102, and a heterogeneous query request processing module 103. A module, which may also be referred to as a unit, refers to a series of computer program segments that are stored in the memory of the electronic device, can be executed by its processor, and perform a fixed function.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the query request receiving module 101 is configured to, when receiving query requests exceeding a preset number within a preset time period, obtain sources of the query requests;
in this embodiment of the present invention, the preset time period may be, for example, 5 s; and the preset number can be set according to the maximum access amount which can be borne by the background database, such as 20.
Further, the query request may be a request action sent by the user to obtain related data from the database, for example, in the QQ message interface, the user may obtain the number of messages to be read by dragging the page to slide down.
In the embodiment of the present invention, the source of the query request refers to an IP address for sending the query request.
The homologous query request processing module 102 is configured to screen out query requests from the same source from the query requests to obtain homologous query requests, select a first query request from the homologous query requests to send to a local cache according to the receiving time of the query requests, and lock remaining query requests in the homologous query requests by using a preset anti-replay lock.
In the embodiment of the present invention, the homologous query request refers to a query request from the same IP address, that is, a query request continuously sent by the same IP address within a preset time period. For example, in the QQ message interface, drag the page down multiple times to get the number of messages to be read. According to the receiving time of the query requests, the first query request is selected from the homologous query requests and sent to the local cache, and the rest query requests in the homologous query requests are locked by using the preset anti-replay lock.
It should be understood that within the preset time, for example within 5 seconds, query requests from the same IP address are usually sent multiple times. Therefore, to reduce the pressure on the local cache and the background database, the embodiment of the present invention selects only the first query request from the homologous query requests and sends it to the local cache, and locks the remaining homologous query requests with the preset anti-replay lock, thereby preventing the waste of resources caused by fetching the same query data multiple times.
In detail, the screening out the query requests with the same source from the query requests to obtain the homologous query requests includes:
analyzing the query request to obtain an address index tag corresponding to the query request;
extracting the IP address of the query request from the address index tag;
and selecting the query requests with the same IP addresses from the query requests to obtain homologous query requests.
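The same-source screening and locking steps above can be sketched in Python. This is a minimal illustration under assumed data shapes, not the patented implementation: a query request is modeled as a dict whose `ip` and `received_at` fields are hypothetical names standing in for the IP address extracted from the address index tag and the receiving time.

```python
from collections import defaultdict

def screen_homologous(requests):
    """Group query requests by source IP. For each group, the request
    with the earliest receiving time is forwarded to the local cache;
    the remaining homologous requests are held back (anti-replay lock)."""
    by_ip = defaultdict(list)
    for req in requests:
        by_ip[req["ip"]].append(req)
    forwarded, locked = [], []
    for group in by_ip.values():
        group.sort(key=lambda r: r["received_at"])  # order by receiving time
        forwarded.append(group[0])   # first request goes to the local cache
        locked.extend(group[1:])     # the rest are locked until data is cached
    return forwarded, locked
```

In a real service the `locked` requests would later be answered from the local cache once the first request has populated it, rather than each re-fetching the same data.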
Further, the analyzing the query request to obtain an address index tag corresponding to the query request includes:
analyzing the query request by using a preset service connector to obtain a request head of the query request;
and extracting the uniform resource locator in the request header, and translating the uniform resource locator to obtain an address index label corresponding to the query request.
In this embodiment of the present invention, the uniform resource locator may be a network address of the user. The address index tag may be an index of a network address corresponding to the query request.
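The header parsing and translation step can be sketched as follows. This is a hedged sketch, assuming the service connector exposes the request header as a dict with hypothetical `url` and `client_ip` keys; the exact header layout is not specified in the source.

```python
from urllib.parse import urlsplit

def address_index_tag(request_header):
    """Translate the uniform resource locator in a request header into
    an address index tag: here, the network host named by the URL plus
    the client IP recorded by the (hypothetical) service connector."""
    url = request_header["url"]                    # uniform resource locator
    host = urlsplit(url).hostname or ""
    client_ip = request_header.get("client_ip", "")
    return {"host": host, "ip": client_ip}
```

The returned tag is what later steps consult to extract the IP address and screen homologous or heterogeneous requests.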
The heterogeneous query request processing module 103 is configured to: screen out query requests with different sources from the query requests to obtain heterogeneous query requests, and extract the data tags to be queried in the heterogeneous query requests; when query data corresponding to the data tags to be queried exists in the local cache, send the query requests corresponding to those data tags to the local cache; when query data corresponding to the data tags to be queried does not exist in the local cache, screen the same data tags to be queried and different data tags to be queried from the data tags to be queried; according to the receiving time of the query requests corresponding to the same data tags to be queried, send the first of those query requests to a background database and lock the remaining query requests corresponding to the same data tags to be queried with a preset cache refresh lock; and put the query requests corresponding to the different data tags to be queried into a preset message queue and send them to the background database according to the message queue.
In the embodiment of the present invention, heterogeneous query requests are query requests from different IP addresses, that is, query requests sent by multiple IP addresses within the preset time period, for example, multiple users requesting the same WeChat article in the same time period on the WeChat interface.
In the embodiment of the present invention, the data tag to be queried may be an index of the query data corresponding to the query request. For example, if the query data corresponding to a user's query request is an article, the data tag to be queried may be the article's ID, or a character string such as its title that identifies the query data.
In the embodiment of the present invention, the screening out query requests with different sources from the query requests to obtain heterogeneous query requests includes:
analyzing the query request to obtain an address index tag corresponding to the query request;
extracting the IP address of the query request from the address index tag;
and selecting the query requests with different IP addresses from the query requests to obtain heterogeneous query requests.
In the embodiment of the invention, the query data, namely the data required by the user, is generally stored in a local cache and a background database. The data in the local cache is generally time-limited data, that is, data that is automatically deleted after being stored for a certain period.
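The time-limited cache entries described above can be sketched with a simple expiry scheme. This is an illustrative sketch only; the patent does not specify the cache implementation, and the function names here are hypothetical.

```python
import time

def put_with_ttl(cache, key, value, ttl_seconds):
    """Store a value together with an expiry time, so the entry behaves
    like the time-limited data kept in the local cache."""
    cache[key] = (value, time.monotonic() + ttl_seconds)

def get_if_fresh(cache, key):
    """Return the cached value, or None if absent or expired.
    Expired entries are removed, matching automatic deletion."""
    entry = cache.get(key)
    if entry is None:
        return None
    value, expires = entry
    if time.monotonic() >= expires:
        del cache[key]          # deleted after its storage period elapses
        return None
    return value
```

An expired or missing entry is exactly the "query data does not exist in the local cache" case that triggers the classification logic below.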
In detail, the determining whether query data corresponding to the to-be-queried data tag exists in the local cache includes:
extracting an index code from the data tag to be queried;
querying, according to the index code, whether the local cache contains query data corresponding to the index code;
and when the query data corresponding to the data tag to be queried exists in the local cache, sending the query request corresponding to the data tag to be queried to the local cache.
In the embodiment of the present invention, when the query data corresponding to the to-be-queried data tag exists in the local cache, the corresponding query data can be directly obtained from the local cache, so that all query requests corresponding to the to-be-queried data tag are sent to the local cache.
And when the local cache does not have the query data corresponding to the data tags to be queried, screening the same data tags to be queried and different data tags to be queried from the data tags to be queried.
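The cache check and routing in the steps above can be sketched as follows. A minimal sketch under assumed data shapes: each request carries a `tag` dict with a hypothetical `index_code` field, and the local cache is modeled as a plain dict keyed by index code.

```python
def route_by_cache(local_cache, requests):
    """Split heterogeneous query requests into cache hits and misses by
    looking up each request's index code in the local cache. Hits are
    served from the cache; misses fall through to same/different-tag
    classification before touching the background database."""
    hit, miss = [], []
    for req in requests:
        code = req["tag"]["index_code"]   # index code in the data tag
        (hit if code in local_cache else miss).append(req)
    return hit, miss
```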
In the embodiment of the invention, data tags to be queried that correspond to query requests for the same query data are taken as the same data tags to be queried, and data tags to be queried that correspond to query requests for different query data are taken as different data tags to be queried.
In the embodiment of the invention, when the query data corresponding to the data tag to be queried does not exist in the local cache, the query request must obtain the query data from the background database. To prevent cache breakdown, the query requests therefore need to be classified, which reduces the pressure on the background database and the possibility of cache breakdown.
In the embodiment of the invention, the cache refreshing lock has the function of locking the query requests corresponding to the remaining same data tags to be queried in the local cache so as to prevent the background database from simultaneously receiving a large number of query requests for acquiring the same query data, thereby causing cache breakdown.
In the embodiment of the invention, the background database can write the read data into the local cache, so that the query requests corresponding to the remaining same data tags to be queried can directly acquire corresponding query data from the local cache, the cache breakdown phenomenon is prevented, and the pressure of the background database is reduced.
In detail, the sending a first query request of the query requests corresponding to the same data tag to be queried to a background database according to the receiving time of the query requests corresponding to the same data tag to be queried and locking the remaining query requests corresponding to the same data tag to be queried by using a preset cache refresh lock includes:
acquiring a timestamp of the query request, selecting a first query request from the query requests corresponding to the same data tag to be queried according to the timestamp, locking the rest query requests except the first query request in the query requests corresponding to the same data tag to be queried by using a preset cache refresh lock, and sending the first query request to the background database for reading;
writing query data read from the background database into a local cache;
and sending the rest query requests except the first query request in the query requests corresponding to the same data tag to be queried to the local cache.
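The cache refresh lock flow in the steps above (first request reads the database, writes back to the cache, and the locked requests then read the cache) can be sketched with one lock per data tag. This is a hedged, single-process sketch of the general technique, not the patented implementation; a distributed deployment would need a shared lock such as one held in Redis.

```python
import threading

_refresh_locks = {}            # one cache refresh lock per data tag
_guard = threading.Lock()      # protects the lock table itself
local_cache = {}

def fetch(tag, read_db):
    """First caller for a tag reads the background database and writes
    the result back to the local cache; concurrent callers for the same
    tag block on the cache refresh lock, then read from the cache."""
    with _guard:
        lock = _refresh_locks.setdefault(tag, threading.Lock())
    with lock:
        if tag not in local_cache:            # still missing: we are first
            local_cache[tag] = read_db(tag)   # read from background database
        return local_cache[tag]               # remaining requests hit cache
```

Because the database read happens inside the per-tag lock, only one read reaches the background database per tag regardless of how many requests arrive, which is exactly the breakdown-prevention property claimed.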
In the embodiment of the invention, the message queue is used to process the query requests corresponding to different data tags to be queried asynchronously. Asynchronous processing effectively reduces the read pressure on the background database and thus the likelihood of cache breakdown.
In detail, the sending, according to the message queue, the query requests corresponding to the different data tags to be queried to a background database for reading includes:
analyzing the query request to obtain timestamps corresponding to the query requests corresponding to different data tags to be queried;
according to the time stamp, sequentially sending the query requests corresponding to the different data tags to be queried to the background database, and constantly monitoring whether the query requests in the background database exceed a preset threshold value;
and when the query requests in the background database exceed the preset threshold, stopping sending query requests from the message queue to the background database until the number of query requests reading query data in the background database falls below the preset threshold again.
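The queue-draining and threshold-monitoring steps above can be sketched with a priority queue ordered by timestamp. A minimal sketch under stated assumptions: `inflight` is a hypothetical callable reporting how many reads the background database is currently serving, and requests are `(timestamp, payload)` tuples so the queue releases them in timestamp order.

```python
import queue

def drain(msg_queue, send_to_db, inflight, threshold):
    """Send queued different-tag query requests to the background
    database in timestamp order, pausing whenever the number of
    in-flight database reads reaches the preset threshold."""
    sent = []
    while not msg_queue.empty():
        if inflight() >= threshold:   # backlog at threshold: stop sending
            break
        item = msg_queue.get()        # earliest timestamp first
        send_to_db(item)
        sent.append(item)
    return sent
```

A scheduler would call `drain` again once the in-flight count drops below the threshold, resuming delivery from where it stopped.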
Fig. 3 is a schematic structural diagram of an electronic device implementing the method for preventing cache breakdown according to the present invention.
The electronic device may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program, such as a cache breakdown prevention program, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, including flash memory, hard disks, multimedia cards, card-type memory (e.g., SD or DX memory), magnetic memory, magnetic disks, optical disks, and the like. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a hard disk of the electronic device. In other embodiments, the memory 11 may also be an external storage device of the electronic device, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device. Further, the memory 11 may include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various types of data, such as the code of the cache breakdown prevention program, but also to temporarily store data that has been or will be output.
The processor 10 may in some embodiments be composed of an integrated circuit, for example a single packaged integrated circuit, or of a plurality of integrated circuits packaged with the same or different functions, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors and combinations of various control chips. The processor 10 is the control unit (Control Unit) of the electronic device: it connects the various components of the electronic device using various interfaces and lines, and executes the functions of the electronic device and processes its data by running or executing programs or modules stored in the memory 11 (e.g., the cache breakdown prevention program) and calling data stored in the memory 11.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, etc. The communication bus 12 is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
Fig. 3 shows only an electronic device having components, and those skilled in the art will appreciate that the structure shown in fig. 3 does not constitute a limitation of the electronic device, and may include fewer or more components than those shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Optionally, the communication interface 13 may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), which is generally used to establish a communication connection between the electronic device and other electronic devices.
Optionally, the communication interface 13 may further include a user interface, which may be a display (Display) or an input unit such as a keyboard (Keyboard), and may optionally be a standard wired interface or a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is used to display information processed in the electronic device and to display a visualized user interface.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The cache breakdown prevention program stored in the memory 11 of the electronic device is a combination of a plurality of computer programs, and when running in the processor 10, can realize:
when more than a preset number of query requests are received within a preset time period, acquiring the sources of the query requests;
screening query requests with the same source from the query requests to obtain homologous query requests, selecting a first query request from the homologous query requests according to the receiving time of the query requests, sending the first query request to a local cache, and locking the rest query requests in the homologous query requests by using a preset anti-replay lock;
screening query requests with different sources from the query requests to obtain heterogeneous query requests, and extracting the data tags to be queried in the heterogeneous query requests;
judging whether query data corresponding to the data tag to be queried exists in a local cache or not;
if the query data corresponding to the data tag to be queried exists in the local cache, sending the query request corresponding to the data tag to be queried to the local cache;
if the local cache does not have query data corresponding to the data tags to be queried, screening the same data tags to be queried and different data tags to be queried from the data tags to be queried;
according to the receiving time of the query requests corresponding to the same data tags to be queried, sending a first query request in the query requests corresponding to the same data tags to be queried to a background database, and locking the query requests corresponding to the remaining same data tags to be queried by using a preset cache refresh lock;
and putting the query requests corresponding to the different data tags to be queried into a preset message queue, and sending the query requests corresponding to the different data tags to be queried to a background database according to the message queue.
Specifically, the processor 10 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the computer program, which is not described herein again.
Further, the integrated module/unit of the electronic device, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer-readable storage medium. The computer-readable medium may be non-volatile or volatile. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, and a read-only memory (ROM).
Embodiments of the present invention may also provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor of an electronic device, the computer program may implement:
when more than a preset number of query requests are received within a preset time period, acquiring the sources of the query requests;
screening query requests with the same source from the query requests to obtain homologous query requests, selecting a first query request from the homologous query requests according to the receiving time of the query requests, sending the first query request to a local cache, and locking the rest query requests in the homologous query requests by using a preset anti-replay lock;
screening query requests with different sources from the query requests to obtain heterogeneous query requests, and extracting the data tags to be queried in the heterogeneous query requests;
judging whether query data corresponding to the data tag to be queried exists in a local cache or not;
if the query data corresponding to the data tag to be queried exists in the local cache, sending the query request corresponding to the data tag to be queried to the local cache;
if the local cache does not have query data corresponding to the data tags to be queried, screening the same data tags to be queried and different data tags to be queried from the data tags to be queried;
according to the receiving time of the query requests corresponding to the same data tags to be queried, sending a first query request in the query requests corresponding to the same data tags to be queried to a background database, and locking the query requests corresponding to the remaining same data tags to be queried by using a preset cache refresh lock;
and putting the query requests corresponding to the different data tags to be queried into a preset message queue, and sending the query requests corresponding to the different data tags to be queried to a background database according to the message queue.
Further, the computer usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain (Blockchain) is essentially a decentralized database: a series of data blocks associated by cryptographic methods, each of which contains information on a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A method for preventing cache breakdown, the method comprising:
when more than a preset number of query requests are received within a preset time period, acquiring the sources of the query requests;
screening query requests with the same source from the query requests to obtain homologous query requests, selecting a first query request from the homologous query requests according to the receiving time of the query requests, sending the first query request to a local cache, and locking the rest query requests in the homologous query requests by using a preset anti-replay lock;
screening query requests with different sources from the query requests to obtain heterogeneous query requests, and extracting the data tags to be queried in the heterogeneous query requests;
judging whether query data corresponding to the data tag to be queried exists in a local cache or not;
if the query data corresponding to the data tag to be queried exists in the local cache, sending the query request corresponding to the data tag to be queried to the local cache;
if the local cache does not have query data corresponding to the data tags to be queried, screening the same data tags to be queried and different data tags to be queried from the data tags to be queried;
according to the receiving time of the query requests corresponding to the same data tags to be queried, sending a first query request in the query requests corresponding to the same data tags to be queried to a background database, and locking the query requests corresponding to the remaining same data tags to be queried by using a preset cache refresh lock;
and putting the query requests corresponding to the different data tags to be queried into a preset message queue, and sending the query requests corresponding to the different data tags to be queried to a background database according to the message queue.
2. The method of claim 1, wherein the screening out the query requests from the same source to obtain the homologous query requests comprises:
analyzing the query request to obtain an address index tag corresponding to the query request;
extracting the IP address of the query request from the address index tag;
and selecting the query requests with the same IP addresses from the query requests to obtain homologous query requests.
3. The method of claim 2, wherein the parsing the query request to obtain an address index tag corresponding to the query request comprises:
analyzing the query request by using a preset service connector to obtain a request head of the query request;
and extracting the uniform resource locator in the request header, and translating the uniform resource locator to obtain an address index label corresponding to the query request.
4. The method of claim 1, wherein the screening out the query requests with different sources from the query requests to obtain heterogeneous query requests comprises:
analyzing the query request to obtain an address index tag corresponding to the query request;
extracting the IP address of the query request from the address index tag;
and selecting the query requests with different IP addresses from the query requests to obtain heterogeneous query requests.
5. The method for preventing cache breakdown according to claim 1, wherein the sending a first query request of the query requests corresponding to the same data tag to be queried to a background database according to the receiving time of the query requests corresponding to the same data tag to be queried and locking the query requests corresponding to the remaining same data tag to be queried by using a preset cache refresh lock comprises:
acquiring a timestamp of the query request, selecting a first query request from the query requests corresponding to the same data tag to be queried according to the timestamp, locking the rest query requests except the first query request in the query requests corresponding to the same data tag to be queried by using a preset cache refresh lock, and sending the first query request to the background database for reading;
writing query data read from the background database into a local cache;
and sending the rest query requests except the first query request in the query requests corresponding to the same data tag to be queried to the local cache.
6. The method for preventing cache breakdown as claimed in claim 1, wherein said determining whether the query data corresponding to the data tag to be queried exists in the local cache includes:
extracting an index code from the data tag to be queried;
querying, according to the index code, whether the local cache contains query data corresponding to the index code;
when the query data corresponding to the data tag to be queried exists in the local cache, sending the query request corresponding to the data tag to be queried to the local cache;
and when the local cache does not have the query data corresponding to the data tags to be queried, screening the same data tags to be queried and different data tags to be queried from the data tags to be queried.
7. The method for preventing cache breakdown according to claim 1, wherein the sending, according to the message queue, the query requests corresponding to the different data tags to be queried to a background database for a read operation includes:
analyzing the query request to obtain timestamps corresponding to the query requests corresponding to different data tags to be queried;
according to the time stamp, sequentially sending the query requests corresponding to the different data tags to be queried to the background database, and constantly monitoring whether the query requests in the background database exceed a preset threshold value;
and when the query request in the background database exceeds a preset threshold, stopping sending the query request from the message queue to the background database until the query request for reading the query data in the background database is smaller than the preset threshold again.
8. An apparatus for preventing buffer breakdown, comprising:
a query request receiving module, configured to acquire the sources of the query requests when more than a preset number of query requests are received within a preset time period;
the system comprises a homologous query request processing module, a local cache and a remote cache module, wherein the homologous query request processing module is used for screening out query requests with the same source from the query requests to obtain homologous query requests, selecting a first query request from the homologous query requests according to the receiving time of the query requests, sending the first query request to the local cache, and locking the rest query requests in the homologous query requests by using a preset anti-replay lock;
a heterogeneous query request processing module, configured to screen query requests with different sources from the query requests to obtain heterogeneous query requests, extract the data tags to be queried in the heterogeneous query requests, send the query requests corresponding to the data tags to be queried to a local cache when query data corresponding to the data tags to be queried exists in the local cache, screen the same data tags to be queried and different data tags to be queried from the data tags to be queried when query data corresponding to the data tags to be queried does not exist in the local cache, send a first query request among the query requests corresponding to the same data tags to be queried to a background database according to the receiving time of the query requests corresponding to the same data tags to be queried, lock the query requests corresponding to the remaining same data tags to be queried by using a preset cache refresh lock, put the query requests corresponding to the different data tags to be queried into a preset message queue, and send the query requests corresponding to the different data tags to be queried to the background database according to the message queue.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores computer program instructions executable by the at least one processor to enable the at least one processor to perform the method of preventing cache breakdown as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements the method of preventing cache breakdown as claimed in any one of claims 1 to 7.
CN202111533275.8A 2021-12-15 2021-12-15 Anti-cache breakdown method, device, equipment and readable storage medium Active CN114201466B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111533275.8A CN114201466B (en) 2021-12-15 2021-12-15 Anti-cache breakdown method, device, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111533275.8A CN114201466B (en) 2021-12-15 2021-12-15 Anti-cache breakdown method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN114201466A true CN114201466A (en) 2022-03-18
CN114201466B CN114201466B (en) 2024-02-23

Family

ID=80653869

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111533275.8A Active CN114201466B (en) 2021-12-15 2021-12-15 Anti-cache breakdown method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114201466B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108132958A (en) * 2016-12-01 2018-06-08 阿里巴巴集团控股有限公司 Multi-level cache data storage, query, scheduling and processing method and device
US20180293284A1 (en) * 2017-04-10 2018-10-11 Servicenow, Inc. Systems and methods for querying time series data
CN110928904A (en) * 2019-10-31 2020-03-27 北京浪潮数据技术有限公司 Data query method and device and related components
CN111049882A (en) * 2019-11-11 2020-04-21 支付宝(杭州)信息技术有限公司 Cache state processing system, method, device and computer readable storage medium
CN111339148A (en) * 2020-03-13 2020-06-26 深圳前海环融联易信息科技服务有限公司 Method and device for preventing cache breakdown service, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU Kaisheng; WANG Rongcun: "Research on the Hibernate Query Cache Mechanism Based on Semantic Caching Technology", 交通与计算机 (Computer and Communications), no. 04, 30 August 2006 (2006-08-30) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114661718A (en) * 2022-03-28 2022-06-24 北京海量数据技术股份有限公司 Method and system for creating local partition index on line under Opengauss platform
CN114661718B (en) * 2022-03-28 2023-04-25 北京海量数据技术股份有限公司 Method and system for online creation of local partition index under Opengauss platform

Also Published As

Publication number Publication date
CN114201466B (en) 2024-02-23

Similar Documents

Publication Publication Date Title
CN112653760B (en) Cross-server file transmission method and device, electronic equipment and storage medium
CN112329419A (en) Document editing method, device, server, terminal and storage medium
CN112671921A (en) Data transmission method and device, electronic equipment and readable storage medium
CN111880948A (en) Data refreshing method and device, electronic equipment and computer readable storage medium
CN112084486A (en) User information verification method and device, electronic equipment and storage medium
CN112631731A (en) Data query method and device, electronic equipment and storage medium
CN113051503A (en) Browser page rendering method and device, electronic equipment and storage medium
CN112256783A (en) Data export method and device, electronic equipment and storage medium
CN114640707A (en) Message asynchronous processing method and device, electronic equipment and storage medium
CN114201466B (en) Anti-cache breakdown method, device, equipment and readable storage medium
CN111858604B (en) Data storage method and device, electronic equipment and storage medium
CN112464619B (en) Big data processing method, device and equipment and computer readable storage medium
CN113868528A (en) Information recommendation method and device, electronic equipment and readable storage medium
CN113722533A (en) Information pushing method and device, electronic equipment and readable storage medium
CN111901224A (en) Method, device and equipment for loading delayed messages and computer readable storage medium
CN114611046A (en) Data loading method, device, equipment and medium
CN114911479A (en) Interface generation method, device, equipment and storage medium based on configuration
CN114448930A (en) Short address generation method and device, electronic equipment and computer readable storage medium
CN115145870A (en) Method and device for positioning reason of failed task, electronic equipment and storage medium
CN113342867A (en) Data distribution and management method and device, electronic equipment and readable storage medium
CN113419718A (en) Data transmission method, device, equipment and medium
CN113364848A (en) File caching method and device, electronic equipment and storage medium
CN112905718A (en) Data management method, system, electronic device and medium based on super-fusion architecture
CN114860349B (en) Data loading method, device, equipment and medium
CN113672565B (en) File marking method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant