CN116303590A - Cache data access method, device, equipment and storage medium - Google Patents
- Publication number: CN116303590A (application CN202310110508.6A)
- Authority: CN (China)
- Prior art keywords: data, cache, target data, database, access request
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2455—Query execution
- G06F16/24552—Database cache management
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/23—Updating
- G06F16/2308—Concurrency control
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The application discloses a cache data access method for solving the problem that, under existing data access methods, a large number of concurrent data access requests can trigger cache breakdown or cache avalanche and, in turn, service downtime. The method comprises the following steps: judging, according to a received data access request, whether target data corresponding to the request exists in a cache; when the judgment result is negative, sending the data access request to a database after determining that a first request number is smaller than a preset threshold, and searching the database for the target data corresponding to the request; when the judgment result is affirmative, determining whether the target data in the cache has expired; when the target data in the cache has not expired, reading the target data from the cache; and when the target data in the cache has expired, searching the database for the target data corresponding to the request after determining that the first request number is smaller than the preset threshold.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for accessing cache data.
Background
With the rapid development of computer technology, the user bases of various applications have begun to grow quickly, and this rapid growth places great access pressure on the data storage layer of an application system.
In the prior art, most applications use a caching technology to cache data that is frequently accessed, so that the cached data is queried before access traffic reaches the data storage layer (such as a database). This relieves pressure on the database and improves the overall query performance and throughput of the system.
However, the existing data caching technology has the following problem: when original data is modified, or cached data must be updated because the cache has expired, the system cannot obtain the requested data from the cache layer and must query the database directly. If a large number of concurrent data access requests arrive at exactly this moment, they pass straight through the cache to the back-end database, the pressure on the service or the database rises sharply, the throughput and response speed of the system fall, and in serious cases the service goes down. This is the phenomenon of cache breakdown or cache avalanche.
Therefore, there is a need for a cache data access method that can ensure system stability.
Disclosure of Invention
The embodiment of the application provides a cache data access method for solving the problem that, under the existing data access method, a large number of concurrent data access requests may cause cache breakdown or cache avalanche and, in turn, service downtime.
The embodiment of the application also provides a cache data access apparatus for solving the problem that, under the existing data access method, a large number of concurrent data access requests may cause cache breakdown or cache avalanche and, in turn, service downtime.
The embodiment of the application also provides a cache data access device for solving the problem that, under the existing data access method, a large number of concurrent data access requests may cause cache breakdown or cache avalanche and, in turn, service downtime.
The embodiment of the application also provides a computer-readable storage medium for solving the problem that, under the existing data access method, a large number of concurrent data access requests may cause cache breakdown or cache avalanche and, in turn, service downtime.
The embodiment of the application adopts the following technical scheme:
a method of cache data access, comprising: judging whether target data corresponding to the data access request exists in a cache according to the received data access request; when the judgment result is negative, determining the first request quantity aiming at a database at present, after determining that the first request quantity is smaller than a preset threshold value, sending the data access request to the database, searching target data corresponding to the data access request in the database, and feeding the target data back to a data requester corresponding to the data access request, wherein the first request quantity is the quantity of all parallel access requests aiming at the database at present; when the judgment result is yes, determining whether the target data in the cache is out of date; when the target data in the cache is determined not to be outdated, the target data is read in the cache, and the target data is fed back to the data requester; and when the target data in the cache is determined to be outdated, after the first request quantity is determined to be smaller than a preset threshold value, sending the data access request to a database, and searching the target data corresponding to the data access request in the database.
A cache data access apparatus, comprising: a cache hit judging unit, configured to judge, according to a received data access request, whether target data corresponding to the data access request exists in a cache; a database query unit, configured to, when the judgment result obtained by the cache hit judging unit is negative, determine a first request number currently directed at a database, send the data access request to the database after determining that the first request number is smaller than a preset threshold, search the database for the target data corresponding to the data access request, and feed the target data back to a data requester corresponding to the data access request, wherein the first request number is the number of all parallel access requests currently directed at the database; an expiration query unit, configured to determine whether the target data in the cache has expired when the cache hit judgment result is affirmative; a cache query unit, configured to read the target data from the cache and feed it back to the data requester when the expiration query unit determines that the target data in the cache has not expired; the database query unit being further configured to, when the expiration query unit determines that the target data in the cache has expired, send the data access request to the database after determining that the first request number is smaller than the preset threshold, and search the database for the target data corresponding to the data access request.
A cache data access device, comprising:
a processor; and a memory arranged to store computer-executable instructions that, when executed, cause the processor to: judge, according to a received data access request, whether target data corresponding to the data access request exists in a cache; when the judgment result is negative, determine a first request number currently directed at a database, send the data access request to the database after determining that the first request number is smaller than a preset threshold, search the database for the target data corresponding to the data access request, and feed the target data back to a data requester corresponding to the data access request, wherein the first request number is the number of all parallel access requests currently directed at the database; when the judgment result is affirmative, determine whether the target data in the cache has expired; when the target data in the cache has not expired, read the target data from the cache and feed it back to the data requester; and when the target data in the cache has expired, send the data access request to the database after determining that the first request number is smaller than the preset threshold, and search the database for the target data corresponding to the data access request.
A computer-readable storage medium storing one or more programs that, when executed by an electronic device comprising a plurality of application programs, cause the electronic device to: judge, according to a received data access request, whether target data corresponding to the data access request exists in a cache; when the judgment result is negative, determine a first request number currently directed at a database, send the data access request to the database after determining that the first request number is smaller than a preset threshold, search the database for the target data corresponding to the data access request, and feed the target data back to a data requester corresponding to the data access request, wherein the first request number is the number of all parallel access requests currently directed at the database; when the judgment result is affirmative, determine whether the target data in the cache has expired; when the target data in the cache has not expired, read the target data from the cache and feed it back to the data requester; and when the target data in the cache has expired, send the data access request to the database after determining that the first request number is smaller than the preset threshold, and search the database for the target data corresponding to the data access request.
The at least one technical scheme adopted in the embodiments of the application can achieve the following beneficial effects:
After receiving a data access request, the system first judges whether target data corresponding to the request exists in a cache. If the target data does not exist (a cache miss), the number of parallel access requests currently directed at the database is determined before the request is allowed to penetrate the cache and reach the back-end database; if that number is smaller than a preset access threshold, the database still has spare capacity and the new access will not affect its normal operation, so the data access request is sent to the database, the target data is looked up there, and the result is fed back to the data requester corresponding to the request. If the target data does exist in the cache (a cache hit), the system further judges whether it has expired. If it has not expired, the target data is read from the cache and fed back to the data requester. If it has expired, the system again checks whether the number of parallel access requests currently directed at the database is below the preset threshold, and only then sends the data access request to the database and looks up the target data there. Once the parallel access requests reach the threshold, every request that wants to access the database must first compete for a read lock; because the number of read locks is fixed, a request can access the database only while holding a lock, and a new request can obtain a lock only after a finished request releases one. As a result, once the parallel access requests directed at the database reach the threshold, no additional requests reach the database.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 is a flow diagram of a cache data access method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a cache data access apparatus according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a cache data access device according to an embodiment of the present application.
Detailed Description
To make the purposes, technical solutions, and advantages of the present application clearer, the technical solutions of the application will be described clearly and completely below with reference to specific embodiments and the corresponding drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the present disclosure without creative effort fall within the scope of the present disclosure.
The execution subject of the cache data access method provided in the embodiments of the present application may be, but is not limited to, at least one of a database management server, an e-commerce server, a social server, a financial server, a video server, and the like.
For convenience of description, the following embodiment takes a background server of a financial application as the execution subject of the method. It will be appreciated that this choice is merely an exemplary illustration and should not be construed as limiting the method.
Specifically, a flow chart of a specific implementation of the cache data access method provided in the present application is shown in fig. 1, and mainly includes the following steps:
step 11, judging whether target data corresponding to the data access request exists in a cache according to the received data access request, and executing step 12 if the judging result is negative; when the judgment result is yes, executing the step 15;
the background server can search whether target data corresponding to the data access request exists in the cache according to the identification information carried in the data access request after receiving the data access request.
In this embodiment of the present application, the data in the cache may have been obtained from the database in advance by the server according to a received cache instruction and written into the cache. In one embodiment, the background server may cache all of the original service-related data in the database according to service requirements. Alternatively, the server may cache only the response data of an API interface, a caching mode that decouples the service from the cache. The embodiment of the application does not limit which data the background server specifically caches.
In this embodiment of the present application, the background server may cache the full data of the database, cache only hot-spot data, or cache data within a fixed cache size; the embodiment does not limit how much data is cached. In addition, a Least Recently Used (LRU) algorithm may be used to evict and update content already written into the cache. Since LRU-based cache eviction is a common technical means in the art, its specific details are not repeated here.
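The LRU eviction mentioned above can be sketched with `collections.OrderedDict`. The `LRUCache` class below is an illustrative stand-in for the common technique, not code from the patent.

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-size cache that evicts the least recently used entry on overflow."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()   # insertion order doubles as recency order

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the least recently used entry
```

With capacity 2, inserting a third key evicts whichever of the first two was touched least recently.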
In addition, when the background server writes data into the cache, it also adds cache-related information to each piece of cached data in the cache data table. The cache-related information includes a cache expiration time and a cache refresh interval. The cache expiration time indicates the effective duration of the cached data, counted from the moment the data is written into the cache; when it is reached, the cached data in the cache must be updated. The refresh interval indicates how often the cached data needs to be refreshed from the data in the database. In the embodiment of the present application, the refresh interval is set smaller than the cache expiration time; for example, the refresh interval may be set to 3 minutes and the cache expiration time to 24 hours.
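The two pieces of cache-related information can be sketched as a small record attached to each cached value. The `CacheEntry` class and its field names are assumptions for illustration, using the example values above (3-minute refresh interval, 24-hour expiration time).

```python
import time
from dataclasses import dataclass

@dataclass
class CacheEntry:
    """A cached value plus the expiration/refresh metadata described above."""
    value: object
    written_at: float     # when the entry was written into the cache
    expire_after: float   # e.g. 24 hours: after this the entry is invalid
    refresh_every: float  # e.g. 3 minutes: how often to re-read from the DB

    def expired(self, now=None):
        now = time.time() if now is None else now
        return now - self.written_at >= self.expire_after

    def needs_refresh(self, last_access, now=None):
        # Refresh when the gap since the previous access exceeds the interval.
        now = time.time() if now is None else now
        return now - last_access > self.refresh_every
```

An entry written at time 0 with `expire_after=86400` is still valid at `now=100` but invalid at `now=90000`; whether it needs a refresh depends on the gap since the previous access, not on the write time.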
Step 12: when step 11 determines that the target data corresponding to the data access request does not exist in the cache, the background server may further determine the first request number currently directed at the database; it executes step 13 when the first request number is smaller than the preset threshold, and step 14 when the first request number is greater than the preset threshold.
The first request number is the number of all parallel access requests currently directed at the database, and the preset threshold is the maximum number of parallel access requests, set in advance according to the performance of the database, that the database can process without its performance being affected.
Step 13: when step 12 determines that the first request number is smaller than the preset threshold, the background server may send the data access request to the database, search the database for the target data corresponding to the request, and feed the target data back to the data requester corresponding to the request.
That is, if the background server determines that the number of parallel access requests directed at the database is smaller than the preset access threshold, the database's load has not yet reached the threshold and the new access will not affect its normal operation, so the data access request can safely be forwarded to the database.
Step 14: when step 12 determines that the first request number is greater than the preset threshold, the background server further judges whether the data access request has acquired a read lock.
If the data access request has acquired the read lock, the request can be sent to the database and the target data corresponding to the request looked up there. If the request has not acquired the read lock, a data-acquisition-failure notification may be fed back to the application, so that the application retries according to the notification. Because the number of read locks is fixed, a new access request can obtain a read lock only after a request that has finished accessing the database releases the lock it occupied. As a result, once the parallel access requests directed at the database reach the threshold, no additional requests reach the database.
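The fixed pool of read locks described here behaves like a bounded semaphore. The sketch below is one plausible realization under assumed names (`MAX_READERS`, `query_db_with_read_lock`); a request that cannot get a lock fails fast so the caller can retry, matching the failure notification above.

```python
import threading

# A fixed pool of "read locks": at most MAX_READERS requests may hold one,
# so no more than that many requests ever reach the database at once.
MAX_READERS = 8   # illustrative value, not from the patent
read_locks = threading.BoundedSemaphore(MAX_READERS)

def query_db_with_read_lock(do_query, key):
    """Try to grab a read lock; fail fast (caller retries) if none is free."""
    if not read_locks.acquire(blocking=False):
        return None           # data acquisition failure: caller retries later
    try:
        return do_query(key)  # only lock holders may access the database
    finally:
        read_locks.release()  # the freed lock becomes available to a new request
```

When all eight locks are held, further calls return `None` immediately instead of piling more load onto the database; once a holder releases its lock, the next request succeeds again.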
Step 15: when step 11 determines that the target data corresponding to the data access request exists in the cache, the background server may further determine whether the target data in the cache has expired; it executes step 16 when the judgment result is negative, and step 17 when the judgment result is affirmative.
In this embodiment of the present application, after the background server finds in the cache, according to the identification information, the target data that the data access request is to access, it may determine whether the cached data has expired according to the cache expiration time corresponding to the target data.
Step 16: when step 15 determines that the target data in the cache has not expired, the target data is read from the cache and fed back to the data requester.
It should be noted that, after determining that the target data has not expired, the background server may further judge, according to the refresh interval, whether the target data in the cache needs to be refreshed, and complete the reading of the target data through the following sub-steps:
Sub-step 1601: the background server determines the receipt time T1 of the current data access request.
Sub-step 1602: the background server determines the receipt time T2 of a second access request for the target data,
where the second access request is the access request for the target data received immediately before the current data access request.
Sub-step 1603: determine the time interval between T1 and T2.
Sub-step 1604: judge whether the time interval is greater than the preset refresh interval; when the judgment result is affirmative, determine that the target data in the cache needs to be refreshed; when the judgment result is negative, determine that the target data in the cache does not need to be refreshed.
Sub-step 1605: when the target data in the cache does not need to be refreshed, read the target data from the cache and feed it back to the data requester.
Sub-step 1606: when the target data needs to be refreshed, further determine the first request number currently directed at the database, and decide according to the first request number whether to update the target data.
Sub-step 1607: when the first request number is smaller than the preset threshold, read the update data corresponding to the data access request from the database, feed the update data back to the data requester, and update the target data in the cache according to the update data.
Sub-step 1608: when the first request number is greater than the preset threshold, read the target data from the cache and feed it back to the data requester.
After feeding the to-be-updated target data in the cache back to the data requester, the background server may also judge whether the data access request has acquired an update lock. If the data access request has acquired the update lock, the background server may read the update data corresponding to the request from the database and update the target data in the cache according to the update data.
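Sub-steps 1601 to 1604 amount to comparing the gap between the current request's receipt time T1 and the previous request's receipt time T2 against the preset refresh interval. A minimal sketch, with `last_access_times` and the `REFRESH_INTERVAL` value as assumed names for illustration:

```python
last_access_times = {}    # key -> receipt time of the previous request (T2)
REFRESH_INTERVAL = 180.0  # preset refresh interval, e.g. 3 minutes in seconds

def should_refresh(key, now):
    """Sub-steps 1601-1604: refresh when T1 - T2 exceeds the preset interval."""
    t1 = now                          # receipt time of the current request
    t2 = last_access_times.get(key)   # receipt time of the previous request
    last_access_times[key] = t1       # current request becomes the next T2
    if t2 is None:
        return False                  # no earlier request to compare against
    return (t1 - t2) > REFRESH_INTERVAL
```

A request 200 seconds after the previous one triggers a refresh (200 > 180); a request 50 seconds after that does not.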
Step 17: when the judgment result of step 15 is affirmative, the background server may determine the first request number currently directed at the database; after determining that the first request number is smaller than the preset threshold, the server sends the data access request to the database, searches the database for the target data corresponding to the request, and feeds the target data back to the data requester corresponding to the request.
When the background server determines instead that the number of parallel access requests directed at the database is greater than the preset threshold, it may feed the expired cached data back to the data requester and further judge whether the data access request has acquired the update lock. If the data access request has acquired the update lock, the background server may read the update data corresponding to the request from the database and update the target data in the cache according to the update data.
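The serve-stale-then-refresh behavior just described (feed back the expired data immediately, let only the update-lock holder refresh the cache) can be sketched as follows; `read_possibly_stale` and `update_lock` are illustrative names, and the cache and database are plain dicts standing in for the real layers.

```python
import threading

update_lock = threading.Lock()  # single-holder update lock, as described above

def read_possibly_stale(key, cache, db):
    """Serve the (possibly stale) cached value immediately; exactly one
    concurrent request wins the update lock and refreshes the cache."""
    value = cache[key]                       # feed the stale data back right away
    if update_lock.acquire(blocking=False):  # losers skip the refresh entirely
        try:
            cache[key] = db[key]             # winner refreshes from the database
        finally:
            update_lock.release()
    return value
```

The caller that triggers the refresh still receives the old value; only the next read sees the refreshed one, which is the trade-off that keeps the database from being hit by every concurrent request at once.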
After receiving a data access request, the background server first judges whether target data corresponding to the request exists in a cache. If the target data does not exist (a cache miss), the number of parallel access requests currently directed at the database is determined before the request is allowed to penetrate the cache and reach the back-end database; if that number is smaller than the preset access threshold, the new access will not affect the normal operation of the database, so the data access request is sent to the database, the target data is looked up there, and the result is fed back to the data requester corresponding to the request. If the target data does exist in the cache (a cache hit), the server further judges whether it has expired: if not, the target data is read from the cache and fed back to the data requester; if so, the server again checks whether the number of parallel access requests directed at the database is below the preset threshold, and only then sends the data access request to the database and looks up the target data there. Once the parallel access requests reach the threshold, every request that wants to access the database must compete for one of a fixed number of read locks, and a new request can obtain a lock only after a finished request releases one, so no additional requests reach the database.
In an implementation manner, the embodiment of the present application further provides a cache data access apparatus, configured to solve the problem that, under the existing data access method, a large number of concurrent data access requests may cause cache breakdown or cache avalanche and, in turn, service downtime. The specific structure of the cache data access apparatus is shown in fig. 2 and includes: a cache hit judging unit 21, a database query unit 22, an expiration query unit 23, and a cache query unit 24.
The cache hit judging unit 21 is configured to judge, according to a received data access request, whether target data corresponding to the data access request exists in a cache.
The database query unit 22 is configured to, when the judgment result obtained by the cache hit judging unit is negative, determine a first request number currently directed at a database, send the data access request to the database after determining that the first request number is smaller than a preset threshold, search the database for the target data corresponding to the data access request, and feed the target data back to a data requester corresponding to the data access request, where the first request number is the number of all parallel access requests currently directed at the database.
The expiration query unit 23 is configured to determine whether the target data in the cache has expired when the cache hit judgment result is affirmative.
The cache query unit 24 is configured to read the target data from the cache and feed it back to the data requester when the expiration query unit determines that the target data in the cache has not expired.
The database query unit 22 is further configured to, when the expiration query unit determines that the target data in the cache has expired, send the data access request to the database after determining that the first request number is smaller than the preset threshold, and search the database for the target data corresponding to the data access request.
In one embodiment, the cache query unit 24 is specifically configured to: judging whether the target data in the cache needs to be refreshed or not; when the target data is judged not to need refreshing, the target data is read in the cache; when the target data is judged to need to be refreshed, determining whether to update the target data according to the first request quantity; when the first request quantity is smaller than a preset threshold value, reading update data corresponding to the data access request in a database, feeding the update data back to the data requesting party, and updating target data in a cache according to the update data; and when the first request quantity is larger than a preset threshold value, reading target data in the cache, and feeding back the target data to the data requesting party.
In one embodiment, the device further comprises a refresh unit, specifically configured to: judging whether the data access request acquires an update lock or not; and when the judgment result is yes, reading the update data corresponding to the data access request in the database, and updating the target data in the cache according to the update data.
In one embodiment, the refresh unit is specifically configured to: determining a time interval between a second access request for reading the target data and the data access request, wherein the second access request is a received access request for the target data before the data access request; judging whether the time interval is larger than a preset refreshing interval time or not; when the judgment result is yes, determining that the target data in the cache needs to be refreshed; and when the judgment result is negative, determining that the target data in the cache does not need to be refreshed.
In one embodiment, the database query unit 22 is further configured to: when it is determined that the target data in the cache has expired and the first request quantity is larger than the preset threshold, judge whether the data access request has acquired a read lock; and when the judgment result is yes, send the data access request to the database and search the database for the target data corresponding to the data access request.
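The fixed pool of read locks described here behaves like a counting semaphore: once the parallel-request threshold is reached, only requests holding one of a fixed number of read locks reach the database. The sketch below is an assumption-laden illustration, not the embodiment's implementation:

```python
import threading

READ_LOCKS = 10  # assumed fixed number of read locks

class ThrottledReader:
    """Limit database access to requests that win one of a fixed
    number of read locks."""
    def __init__(self, slots=READ_LOCKS):
        self._sem = threading.BoundedSemaphore(slots)

    def query(self, key, db):
        # "judge whether the data access request acquires a read lock"
        if not self._sem.acquire(blocking=False):
            return None                 # no read lock: request does not reach the database
        try:
            return db[key]              # send the data access request to the database
        finally:
            self._sem.release()         # release so new requests can acquire the lock
```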
After receiving a data access request, the cache data access device provided by the embodiment of the application first judges whether target data corresponding to the data access request exists in the cache. If the target data does not exist, i.e., the cache is missed, the number of current parallel access requests for the database is determined before the request penetrates the cache to reach the back-end database. If the number of parallel access requests for the database is smaller than the preset access threshold, i.e., the access load of the database has not yet reached the threshold, the newly added access will not affect the normal operation of the database; in this case the data access request can be sent to the database, the target data corresponding to the data access request is searched in the database, and the target data is fed back to the data requester corresponding to the data access request. If the target data exists in the cache, i.e., the cache is hit, whether the target data is out of date is further judged. If the target data is not out of date, the target data is read from the cache and fed back to the data requester. If the target data is out of date, it is likewise necessary to judge whether the number of current parallel access requests for the database is smaller than the preset access threshold; only when the current parallel access requests for the database have not reached the threshold is the data access request sent to the database and the target data corresponding to the data access request searched in the database. When the current parallel access requests for the database reach the threshold, each request that wants to access the database must first contend for a lock: only a fixed number of read locks are available, a request can access the database only after acquiring a read lock, and other new access requests can acquire a read lock only after a request that has accessed the database releases its lock. In this way, after the current parallel access requests for the database reach the threshold, the number of new access requests reaching the database is strictly limited.
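The overall flow summarized above can be sketched end to end as follows. All identifiers, the threshold, and the expiry time are illustrative assumptions; the read-lock contention path is reduced to a comment:

```python
import time

THRESHOLD = 100  # assumed preset access threshold
TTL = 60.0       # assumed expiry time for cached entries, seconds

def access(key, cache, db, parallel_requests, now=None):
    """Cache miss or expired entry goes to the database only while the
    number of parallel database requests is below the preset threshold."""
    now = time.time() if now is None else now
    entry = cache.get(key)
    if entry is not None and now < entry["expires"]:
        return entry["value"]                 # cache hit, target data not expired
    if parallel_requests < THRESHOLD:         # miss or expired: check the threshold
        value = db[key]                       # search the database for the target data
        cache[key] = {"value": value, "expires": now + TTL}
        return value                          # feed the target data back
    # Threshold reached: the request would contend for one of the fixed
    # read locks here; this sketch simply falls back to any cached copy.
    return entry["value"] if entry is not None else None
```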
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 3, at the hardware level, the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include an internal memory, such as a random-access memory (RAM), and may further include a non-volatile memory, such as at least one disk storage. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be interconnected by an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, or an EISA (Extended Industry Standard Architecture) bus, among others. The buses may be classified into address buses, data buses, control buses, etc. For ease of illustration, only one bi-directional arrow is shown in fig. 3, but this does not mean that there is only one bus or one type of bus.
The memory is used for storing programs. Specifically, a program may include program code, and the program code includes computer operation instructions. The memory may include an internal memory and a non-volatile memory, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the internal memory and then runs it, forming a cache data access device at the logical level. The processor executes the programs stored in the memory and is specifically configured to perform the following operations:
judging whether target data corresponding to the data access request exists in a cache according to the received data access request; when the judgment result is negative, determining the first request quantity aiming at a database at present, after determining that the first request quantity is smaller than a preset threshold value, sending the data access request to the database, searching target data corresponding to the data access request in the database, and feeding the target data back to a data requester corresponding to the data access request, wherein the first request quantity is the quantity of all parallel access requests aiming at the database at present; when the judgment result is yes, determining whether the target data in the cache is out of date; when the target data in the cache is determined not to be outdated, the target data is read in the cache, and the target data is fed back to the data requester; and when the target data in the cache is determined to be outdated, after the first request quantity is determined to be smaller than a preset threshold value, sending the data access request to a database, and searching the target data corresponding to the data access request in the database.
The method performed by the cache data access electronic device disclosed in the embodiment shown in fig. 3 of the present application may be applied to a processor or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, etc. The steps of the method disclosed in connection with the embodiments of the present application may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random-access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
Of course, other implementations, such as a logic device or a combination of hardware and software, are not excluded from the electronic device of the present application, that is, the execution subject of the following processing flow is not limited to each logic unit, but may be hardware or a logic device.
The present embodiments also provide a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a portable electronic device comprising a plurality of application programs, enable the portable electronic device to perform the method of the embodiment of fig. 1, and in particular to:
judging whether target data corresponding to the data access request exists in a cache according to the received data access request; when the judgment result is negative, determining the first request quantity aiming at a database at present, after determining that the first request quantity is smaller than a preset threshold value, sending the data access request to the database, searching target data corresponding to the data access request in the database, and feeding the target data back to a data requester corresponding to the data access request, wherein the first request quantity is the quantity of all parallel access requests aiming at the database at present; when the judgment result is yes, determining whether the target data in the cache is out of date; when the target data in the cache is determined not to be outdated, the target data is read in the cache, and the target data is fed back to the data requester; and when the target data in the cache is determined to be outdated, after the first request quantity is determined to be smaller than a preset threshold value, sending the data access request to a database, and searching the target data corresponding to the data access request in the database.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include a volatile memory in a computer-readable medium, such as a random-access memory (RAM), and/or a non-volatile memory, such as a read-only memory (ROM) or a flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.
Claims (10)
1. A method for accessing cached data, comprising:
judging whether target data corresponding to the data access request exists in a cache according to the received data access request;
when the judgment result is negative, determining the first request quantity aiming at a database at present, after determining that the first request quantity is smaller than a preset threshold value, sending the data access request to the database, searching target data corresponding to the data access request in the database, and feeding the target data back to a data requester corresponding to the data access request, wherein the first request quantity is the quantity of all parallel access requests aiming at the database at present;
when the judgment result is yes, determining whether the target data in the cache is out of date;
when the target data in the cache is determined not to be outdated, the target data is read in the cache, and the target data is fed back to the data requester;
and when the target data in the cache is determined to be outdated, after the first request quantity is determined to be smaller than a preset threshold value, sending the data access request to a database, and searching the target data corresponding to the data access request in the database.
2. The method according to claim 1, wherein when it is determined that the target data in the cache is not expired, reading the target data in the cache and feeding back the target data to the data requester, specifically comprising:
judging whether the target data in the cache needs to be refreshed or not;
when the target data is judged not to need refreshing, the target data is read in the cache;
when the target data is judged to need to be refreshed, determining whether to update the target data according to the first request quantity;
when the first request quantity is smaller than a preset threshold value, reading update data corresponding to the data access request in a database, feeding the update data back to the data requesting party, and updating target data in a cache according to the update data;
and when the first request quantity is larger than a preset threshold value, reading target data in the cache, and feeding back the target data to the data requesting party.
3. The method according to claim 2, wherein when the first request number is greater than a preset threshold, after reading target data in the cache and feeding back the target data to the data requester, further comprising:
judging whether the data access request acquires an update lock or not;
and when the judgment result is yes, reading the update data corresponding to the data access request in the database, and updating the target data in the cache according to the update data.
4. The method according to claim 2, wherein the determining whether the target data in the cache needs to be refreshed specifically comprises:
determining a time interval between a second access request for reading the target data and the data access request, wherein the second access request is a received access request for the target data before the data access request;
judging whether the time interval is larger than a preset refreshing interval time or not;
when the judgment result is yes, determining that the target data in the cache needs to be refreshed;
and when the judgment result is negative, determining that the target data in the cache does not need to be refreshed.
5. The method as recited in claim 1, further comprising:
when the target data in the cache is determined to be outdated and the first request quantity is determined to be larger than a preset threshold value, judging whether the data access request acquires a read lock or not;
and when the judgment result is yes, the data access request is sent to a database, and target data corresponding to the data access request is searched in the database.
6. A cache data access apparatus, comprising:
the cache hit judging unit is used for judging whether target data corresponding to the data access request exists in a cache according to the received data access request;
the database query unit is used for determining the first request quantity aiming at the database currently when the judging result obtained by the cache hit judging unit is negative, sending the data access request to the database after determining that the first request quantity is smaller than a preset threshold value, searching target data corresponding to the data access request in the database, and feeding the target data back to a data requester corresponding to the data access request, wherein the first request quantity is the quantity of all parallel access requests aiming at the database currently;
the expiration inquiry unit is used for determining whether the target data in the cache is out of date or not when the cache hit judgment result is yes;
the cache query unit is used for reading target data in the cache and feeding the target data back to the data requester when the expiration query unit determines that the target data in the cache is not expired;
and the database query unit is used for sending the data access request to a database after determining that the first request quantity is smaller than a preset threshold value when the expiration query unit determines that the target data in the cache is expired, and searching the target data corresponding to the data access request in the database.
7. The apparatus of claim 6, wherein the cache query unit is specifically configured to:
judging whether the target data in the cache needs to be refreshed or not;
when the target data is judged not to need refreshing, the target data is read in the cache;
when the target data is judged to need to be refreshed, determining whether to update the target data according to the first request quantity;
when the first request quantity is smaller than a preset threshold value, reading update data corresponding to the data access request in a database, feeding the update data back to the data requesting party, and updating target data in a cache according to the update data;
and when the first request quantity is larger than a preset threshold value, reading target data in the cache, and feeding back the target data to the data requesting party.
8. The apparatus of claim 7, further comprising a refresh unit, in particular for:
judging whether the data access request acquires an update lock or not;
and when the judgment result is yes, reading the update data corresponding to the data access request in the database, and updating the target data in the cache according to the update data.
9. A cache data access device, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
judging whether target data corresponding to the data access request exists in a cache according to the received data access request;
when the judgment result is negative, determining the first request quantity aiming at a database at present, after determining that the first request quantity is smaller than a preset threshold value, sending the data access request to the database, searching target data corresponding to the data access request in the database, and feeding the target data back to a data requester corresponding to the data access request, wherein the first request quantity is the quantity of all parallel access requests aiming at the database at present;
when the judgment result is yes, determining whether the target data in the cache is out of date;
when the target data in the cache is determined not to be outdated, the target data is read in the cache, and the target data is fed back to the data requester;
and when the target data in the cache is determined to be outdated, after the first request quantity is determined to be smaller than a preset threshold value, sending the data access request to a database, and searching the target data corresponding to the data access request in the database.
10. A computer readable storage medium storing one or more programs, which when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform the method of any of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310110508.6A CN116303590A (en) | 2023-02-06 | 2023-02-06 | Cache data access method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116303590A (en) | 2023-06-23
Family
ID=86833385
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310110508.6A Pending CN116303590A (en) | 2023-02-06 | 2023-02-06 | Cache data access method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116303590A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116708579A (en) * | 2023-08-04 | 2023-09-05 | 浪潮电子信息产业股份有限公司 | Data access method, device, electronic equipment and computer readable storage medium |
CN116708579B (en) * | 2023-08-04 | 2024-01-12 | 浪潮电子信息产业股份有限公司 | Data access method, device, electronic equipment and computer readable storage medium |
CN117573572A (en) * | 2024-01-12 | 2024-02-20 | 北京开源芯片研究院 | Method, device, equipment and storage medium for processing refill data |
CN118332054A (en) * | 2024-04-29 | 2024-07-12 | 深圳市路特创新科技有限公司 | Real-time data warehouse management method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109240946B (en) | Multi-level caching method of data and terminal equipment | |
CN116303590A (en) | Cache data access method, device, equipment and storage medium | |
WO2016177283A1 (en) | Cache directory refreshing method and device | |
US8949535B1 (en) | Cache updating | |
US20090070526A1 (en) | Using explicit disk block cacheability attributes to enhance i/o caching efficiency | |
CN109150930B (en) | Configuration information loading method and device and service processing method and device | |
CN107430551B (en) | Data caching method, storage control device and storage equipment | |
CN111382206B (en) | Data storage method and device | |
CN107301215B (en) | Search result caching method and device and search method and device | |
US9465743B2 (en) | Method for accessing cache and pseudo cache agent | |
CN112214178B (en) | Storage system, data reading method and data writing method | |
CN109062717B (en) | Data caching method, data caching system, data caching disaster tolerance method, data caching disaster tolerance system and data caching system | |
CN109582233A (en) | A kind of caching method and device of data | |
CN111382179B (en) | Data processing method and device and electronic equipment | |
CN111522509B (en) | Caching method and equipment for distributed storage system | |
CN111694806B (en) | Method, device, equipment and storage medium for caching transaction log | |
CN115934583B (en) | Hierarchical caching method, device and system | |
CN110941595A (en) | File system access method and device | |
CN116633616A (en) | Data access method, system, equipment and storage medium | |
CN115309671A (en) | Data processing method, data processing device, storage medium and computer equipment | |
KR101884726B1 (en) | Method, apparatus, and computer program stored in computer readable medium for reading block in database system | |
US20210294749A1 (en) | Caching assets in a multiple cache system | |
CN111367697B (en) | Error processing method and device | |
CN115509437A (en) | Storage system, network card, processor, data access method, device and system | |
US20140215158A1 (en) | Executing Requests from Processing Elements with Stacked Memory Devices |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||