CN113806408A - Data caching method, system, equipment and storage medium - Google Patents

Data caching method, system, equipment and storage medium

Info

Publication number
CN113806408A
Authority
CN
China
Prior art keywords
data
retrieval
cache
database
index
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111136168.1A
Other languages
Chinese (zh)
Inventor
魏胜杰 (Wei Shengjie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Inspur Data Technology Co Ltd
Original Assignee
Jinan Inspur Data Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Inspur Data Technology Co Ltd filed Critical Jinan Inspur Data Technology Co Ltd
Priority to CN202111136168.1A
Publication of CN113806408A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2455 Query execution
    • G06F16/24552 Database cache management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2455 Query execution
    • G06F16/24553 Query execution of query operations


Abstract

The application discloses a data caching method comprising the following steps: generating a task carrying a task type and task data according to a received request; judging whether the task type is data retrieval; if so, taking the task data as a retrieval index and judging whether a retrieval result corresponding to the retrieval index exists in the cache; if it exists in the cache, decompressing the data in the cache to obtain the retrieval result corresponding to the retrieval index and feeding back the retrieval result; if it does not exist in the cache, judging whether a retrieval result corresponding to the retrieval index exists in the database; and if it exists in the database, feeding back the retrieval result corresponding to the retrieval index in the database. By applying this scheme, the overall response speed of the system can be effectively improved. The application also provides a data caching system, a device, and a storage medium, which have corresponding technical effects.

Description

Data caching method, system, equipment and storage medium
Technical Field
The present invention relates to the field of storage technologies, and in particular, to a data caching method, system, device, and storage medium.
Background
Computer systems have long faced two speed mismatches: between the high-speed processor and the comparatively slower memory, and between memory and the far slower hard disk. Because the processor runs much faster than memory can be read or written, and memory in turn is much faster than the hard disk, a great deal of time is wasted waiting for data. To ease this mismatch, a cache is integrated within the processor to smooth, to some extent, the speed gap between processor and memory; similarly, some higher-end storage devices build in a cache to ease the speed gap between memory and hard disk.
The same contradiction exists in Web systems. Under high concurrency, the database often becomes the performance bottleneck: the database is stored directly on the hard disk, so every database read or write is a disk read or write, while the CPU and memory are far faster than the disk. When a large number of requests reach the server, the CPU finishes processing them quickly, but fetching the data from the disk is slow, so database performance drops sharply; in severe cases the database may go down outright. A cache layer is therefore usually designed to filter and intercept requests. The cache layer resides in memory, whose read/write speed far exceeds that of the hard disk: if the requested data exists in the cache it is returned directly from the cache, and only otherwise is the database accessed, so the cache system absorbs most of the pressure on the database. However, the cache space is much smaller than the storage space of the hard disk, so this method improves response speed only to a limited extent.
In summary, how to further increase the response speed is a technical problem that needs to be solved urgently by those skilled in the art.
Disclosure of Invention
The invention aims to provide a data caching method, system, device, and storage medium, so as to further improve response speed.
In order to solve the technical problems, the invention provides the following technical scheme:
a data caching method, comprising:
generating a task carrying a task type and task data according to the received request;
judging whether the task type is data retrieval;
if so, taking the task data as a retrieval index, and judging whether a retrieval result corresponding to the retrieval index exists in a cache;
if the retrieval result exists in the cache, decompressing the data in the cache to obtain the retrieval result corresponding to the retrieval index, and feeding back the retrieval result;
if it does not exist in the cache, judging whether a retrieval result corresponding to the retrieval index exists in the database;
and if it exists in the database, feeding back the retrieval result corresponding to the retrieval index in the database.
Preferably, after determining that the retrieval result corresponding to the retrieval index exists in the database, the method further includes:
compressing and storing retrieval results corresponding to the retrieval indexes in the database into a cache;
the data caching method further comprises the following steps:
and eliminating the compressed data stored in the cache by a preset eviction strategy.
Preferably, the elimination of the compressed data stored in the cache by the preset eviction policy includes:
and eliminating the compressed data stored in the cache through a preset LRU strategy.
Preferably, the method further comprises the following steps:
when the task type is judged to be data updating, the task data is used as updating data, data updating is carried out on the database, and the updating data is compressed and stored into a cache;
when the task type is judged to be data insertion, taking the task data as insertion data, performing data insertion on the database, compressing the insertion data and storing the insertion data into a cache;
and when the task type is judged to be data deletion, determining a deletion object based on the task data and deleting data from the database.
Preferably, feeding back the retrieval result corresponding to the retrieval index in the database includes:
feeding back the retrieval result corresponding to the retrieval index in the database through a first thread;
and compressing and storing the retrieval result corresponding to the retrieval index in the database into the cache comprises:
compressing and storing the retrieval result corresponding to the retrieval index in the database into a cache through a second thread.
Preferably, after judging that the database does not have the retrieval result corresponding to the retrieval index, the method further comprises:
and feeding back prompt information indicating that the retrieval fails.
Preferably, after obtaining the retrieval result corresponding to the retrieval index by decompressing the data in the cache, the method further includes:
verifying whether the retrieval result obtained after decompression is correct or not according to a consistency check rule;
if yes, feeding back the retrieval result;
and if not, re-executing the operation of obtaining the retrieval result corresponding to the retrieval index by decompressing the data in the cache.
A data caching system comprising:
the task receiving unit is used for generating a task carrying a task type and task data according to the received request;
the first judging unit is used for judging whether the task type is data retrieval;
if yes, executing a second judging unit, wherein the second judging unit is used for taking the task data as a retrieval index and judging whether a retrieval result corresponding to the retrieval index exists in a cache or not;
if the retrieval result exists in the cache, executing a first feedback unit, which is used for decompressing the data in the cache to obtain the retrieval result corresponding to the retrieval index and feeding back the retrieval result;
if it does not exist in the cache, executing a third judging unit for judging whether a retrieval result corresponding to the retrieval index exists in the database;
and if it exists in the database, executing a second feedback unit for feeding back the retrieval result corresponding to the retrieval index in the database.
A data caching device comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the data caching method as defined in any one of the above.
A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the data caching method as defined in any one of the preceding claims.
By applying the technical scheme provided by the embodiments of the invention, when the task type is data retrieval and a retrieval result corresponding to the retrieval index is judged to exist in the cache, that retrieval result can be obtained directly from the cache and fed back, which is faster than feeding back the retrieval result from the database. Of course, if the retrieval result does not exist in the cache, the retrieval result corresponding to the retrieval index in the database is fed back instead. In addition, in the scheme of the application, the retrieval result is obtained by decompressing the data in the cache; that is, the data in the cache is stored compressed, which effectively saves cache space, allows more data to be held in the cache, and thereby improves the overall response speed of the system. Decompressing the data does take some time, but, as described above, the processor is far faster than memory and memory far faster than the hard disk, so the time added by decompression is very small while a large amount of cache space is saved; the overall response speed of the system is therefore effectively improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flow chart of an embodiment of a data caching method according to the present invention;
fig. 2 is a schematic structural diagram of a data caching system according to the present invention.
Detailed Description
The core of the invention is to provide a data caching method by which the overall response speed of the system can be effectively improved.
In order that those skilled in the art will better understand the disclosure, the invention will be described in further detail with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating an implementation of a data caching method according to the present invention, where the data caching method may include the following steps:
step S101: and generating a task carrying the task type and the task data according to the received request.
Specifically, the task receiving unit may receive a request initiated by a client and encapsulate the request into a task. The encapsulated task carries at least the task type and the task data; of course, in other specific embodiments it may also carry other information, for example the ID of the task or information about the task's initiator, which can be set and adjusted according to actual needs.
There are typically four task types: data retrieval, data update, data insertion, and data deletion. The task data indicates the concrete data content the task involves. For example, when the task type is data retrieval, the task data is the retrieval index, i.e. the key being retrieved; when the task type is data deletion, the task data is the key to be deleted. When the task type is data update or data insertion, the task data carries, in addition to the update/insert position, the data to be updated/inserted. A minimal sketch of such a task wrapper follows.
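For illustration only, the encapsulated task might look like the following Java sketch; the names Task, TaskType, taskData, and taskId are assumptions of this example, not terms fixed by the disclosure.

```java
// Illustrative task wrapper; all names here are assumptions of this example.
public final class Task {
    public enum TaskType { RETRIEVE, UPDATE, INSERT, DELETE } // the four task types

    private final TaskType type;    // which of the four operations is requested
    private final String taskData;  // e.g. the retrieval key, or the data to write
    private final String taskId;    // optional extra information, such as a task ID

    public Task(TaskType type, String taskData, String taskId) {
        this.type = type;
        this.taskData = taskData;
        this.taskId = taskId;
    }

    public TaskType getType()   { return type; }
    public String getTaskData() { return taskData; }
    public String getTaskId()   { return taskId; }
}
```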
Step S102: judging whether the task type is data retrieval; if so, step S103 is performed.
The task receiving unit can send the encapsulated task to a task scheduling module, and the task scheduling module determines, by parsing the task, whether its task type is data retrieval.
Step S103: taking the task data as a retrieval index, and judging whether a retrieval result corresponding to the retrieval index exists in the cache; if so, step S104 is performed; if not, step S105 is performed.
Since the task type is data retrieval, the task data can be used as the retrieval index, i.e. as the key for retrieval, to judge whether a retrieval result corresponding to the retrieval index exists in the cache; specifically, this can be judged by traversing the cache.
Step S104: decompressing the data in the cache to obtain the retrieval result corresponding to the retrieval index, and feeding back the retrieval result.
If the retrieval result corresponding to the retrieval index exists in the cache, feedback can be realized directly based on the cache without accessing the database.
In addition, since the data stored in the cache is compressed data, the retrieval result corresponding to the retrieval index can be obtained by decompressing the data in the cache, and then the retrieval result is fed back to the client.
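As a minimal sketch of this compressed-cache lookup: the embodiment later names LZO as its preferred compressor, but LZO is not part of the JDK, so the sketch below substitutes the JDK's built-in DEFLATE (java.util.zip) purely as a stand-in; the class name CompressedCache is likewise an assumption of this example.

```java
import java.io.ByteArrayOutputStream;
import java.util.concurrent.ConcurrentHashMap;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Sketch of step S104: values are stored compressed and decompressed on a hit.
// DEFLATE stands in for the LZO algorithm preferred later in this description.
public final class CompressedCache {
    private final ConcurrentHashMap<String, byte[]> cache = new ConcurrentHashMap<>();

    public void put(String key, byte[] value) {
        Deflater deflater = new Deflater();
        deflater.setInput(value);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf)); // compress chunk by chunk
        }
        deflater.end();
        cache.put(key, out.toByteArray());
    }

    // Returns the decompressed retrieval result, or null on a cache miss.
    public byte[] get(String key) throws Exception {
        byte[] compressed = cache.get(key);
        if (compressed == null) return null; // miss: caller falls through to the database
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!inflater.finished()) {
            out.write(buf, 0, inflater.inflate(buf)); // decompress before feedback
        }
        inflater.end();
        return out.toByteArray();
    }
}
```

On a miss, get returns null and the caller falls through to the database lookup of step S105.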
Further, in an embodiment of the present invention, in order to ensure the accuracy of the data obtained after decompression, after obtaining the retrieval result corresponding to the retrieval index by decompressing the data in the cache as described in step S104, the method may further include:
verifying whether the retrieval result obtained after decompression is correct or not according to a consistency check rule;
if yes, feeding back the retrieval result;
if not, the operation of obtaining the retrieval result corresponding to the retrieval index through decompressing the data in the cache is executed again.
The algorithm used by the consistency check rule is chosen according to actual needs; for example, the consistency check may be realized through a simple check bit. In this embodiment, when the consistency check fails, an exception may have occurred during compression, so the operation of obtaining the retrieval result corresponding to the retrieval index by decompressing the data in the cache may be performed again. In addition, in other situations, if the data in the cache is still abnormal after several retries, step S105 may be executed to obtain the retrieval result corresponding to the retrieval index from the database.
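The disclosure leaves the checking algorithm open; as one hedged illustration, a checksum recorded when the entry is compressed can be re-verified after decompression (CRC32 here is merely a common choice, not mandated by the text).

```java
import java.util.zip.CRC32;

// Sketch of the consistency check: a checksum stored alongside the compressed
// entry is re-verified after decompression. CRC32 is an illustrative choice.
public final class ChecksumUtil {
    public static long checksum(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data);
        return crc.getValue();
    }

    // true if the decompressed result matches the checksum recorded at store time;
    // on false, the caller re-runs the decompression and, after several failed
    // retries, falls back to the database as described above
    public static boolean verify(byte[] decompressed, long storedChecksum) {
        return checksum(decompressed) == storedChecksum;
    }
}
```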
Step S105: judging whether a retrieval result corresponding to the retrieval index exists in the database; if it exists in the database, step S106 is executed.
When the retrieval result corresponding to the retrieval index does not exist in the cache, it is necessary to determine whether the retrieval result corresponding to the retrieval index exists in the database.
Usually, the database contains a retrieval result corresponding to the retrieval index; in a few cases it does not, which may be caused by data loss in the database or by an erroneous request sent by the client. In an embodiment of the present invention, after determining that no retrieval result corresponding to the retrieval index exists in the database, the method may further include: feeding back prompt information indicating that the retrieval failed, so as to notify the client.
Step S106: feeding back the retrieval result corresponding to the retrieval index in the database.
When the retrieval result corresponding to the retrieval index exists in the database, the retrieval result corresponding to the retrieval index in the database can be fed back.
In an embodiment of the present invention, after determining that the retrieval result corresponding to the retrieval index exists in the database, the method may further include:
compressing and storing retrieval results corresponding to the retrieval indexes in the database into a cache;
the data caching method further comprises the following steps:
and eliminating the compressed data stored in the cache by a preset eviction strategy.
The data stored in the cache of the present application may be fixed or may change; that is, the cache may be updated. Of course, part of the data may be fixed while the rest is updated. For example, in one scenario, the operators know that a certain portion of the data is used frequently, so that portion is compressed, stored in the cache, and exempted from the eviction policy, while the data stored in the remaining cache space can be updated.
In this embodiment, once the retrieval result corresponding to the retrieval index is determined to exist in the database, that retrieval result is likely to be used again soon; therefore, the retrieval result corresponding to the retrieval index in the database is compressed and stored into the cache.
In addition, when the data is compressed and stored in the cache, consistency check may also be performed, that is, after the consistency check passes, the compressed data is stored in the cache.
In this implementation, the compressed data stored in the cache is eliminated through a preset eviction policy. For example, considering that the LRU (Least Recently Used) policy is mature and widely used, the elimination may specifically be: eliminating the compressed data stored in the cache through a preset LRU policy. Under the LRU policy, the least recently used data is evicted first. A minimal LRU sketch follows.
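For illustration only: in Java, a LinkedHashMap in access order is one classic way to realize such an LRU policy. The class name and the capacity are assumptions of this example, and for concurrent use the map would need external synchronization.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU sketch: LinkedHashMap in access order evicts the least recently
// used compressed entry once the capacity limit is exceeded. A capacity of
// 1000 entries mirrors the sizing example later in this description.
public final class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // drop the least recently used entry
    }
}
```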
The preset eviction policy can be maintained by a caching tool: with a cache tool selected, a daemon thread (Daemon) is maintained to monitor the data in the cache, and when a piece of data meets the eviction condition the daemon thread removes it from the cache, guaranteeing the timeliness and availability of the cache.
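For illustration, such a daemon thread could be sketched as below; the one-second scan interval and the simple time-to-live rule are assumptions standing in for whatever eviction policy is actually configured.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the eviction daemon: a background thread scans the cache and
// removes entries that satisfy the eviction condition. The TTL rule and the
// one-second scan interval are assumptions of this example.
public final class CacheCleaner {
    record Entry(byte[] compressed, long lastUsedMillis) {}

    private static final long TTL_MILLIS = 60_000; // assumed time-to-live
    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    public void startDaemon() {
        Thread cleaner = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                long now = System.currentTimeMillis();
                cache.values().removeIf(e -> now - e.lastUsedMillis() > TTL_MILLIS);
                try {
                    Thread.sleep(1_000); // assumed scan interval
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                }
            }
        }, "cache-cleaner");
        cleaner.setDaemon(true); // a daemon thread will not keep the JVM alive
        cleaner.start();
    }
}
```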
Of course, in other situations, other types of eviction policies may be adopted to eliminate the compressed data stored in the cache according to actual needs; it is understood that, in general, the less frequently and the less recently data has been used, the sooner it should be eliminated.
In an embodiment of the present invention, the method may further include:
when the task type is judged to be data updating, the task data is used as updating data, data updating is carried out on the database, and the updating data is compressed and stored into a cache;
when the task type is judged to be data insertion, taking the task data as insertion data, performing data insertion on a database, compressing the insertion data and storing the compressed insertion data into a cache;
and when the task type is judged to be data deletion, determining a deletion object based on the task data and deleting the data of the database.
In this embodiment, when the task type is data update, data insertion, or data deletion, the corresponding operation is performed according to the task type. For data update and data insertion, the corresponding data is likely to be used again soon, so this embodiment compresses and stores the update data, and likewise the insertion data, into the cache.
In addition, when the task type is determined to be data deletion, besides determining the deletion object based on the task data and deleting it from the database, in other cases, if compressed data of the deletion object also exists in the cache, that compressed data may be deleted from the cache as well. A sketch of dispatching these three write-type tasks follows.
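A hedged sketch of this dispatch, reusing the Task sketch from step S101; the Database interface, the key/value split, and the compress placeholder are assumptions of this example rather than parts of the disclosure.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of handling the three write-type task types described above.
public final class WriteDispatcher {
    interface Database { // assumed minimal database facade
        void update(String key, String value);
        void insert(String key, String value);
        void delete(String key);
    }

    private final Database db;
    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

    public WriteDispatcher(Database db) { this.db = db; }

    // key and value are assumed to be carried inside the task data
    public void handle(Task.TaskType type, String key, String value) {
        switch (type) {
            case UPDATE -> {
                db.update(key, value);           // write through to the database
                cache.put(key, compress(value)); // freshly updated data is likely hot
            }
            case INSERT -> {
                db.insert(key, value);
                cache.put(key, compress(value)); // same reasoning for inserted data
            }
            case DELETE -> {
                db.delete(key);                  // remove from the database...
                cache.remove(key);               // ...and drop any cached copy as well
            }
            default -> throw new IllegalArgumentException("not a write task: " + type);
        }
    }

    private static byte[] compress(String value) {
        return value.getBytes(); // placeholder; a real build would compress as above
    }
}
```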
In an embodiment of the present invention, step S106 may specifically include:
feeding back the retrieval result corresponding to the retrieval index in the database through the first thread;
Compressing and storing the retrieval result corresponding to the retrieval index in the database into the cache, which may specifically include:
and compressing and storing the retrieval result corresponding to the retrieval index in the database into a cache through the second thread.
In this embodiment, compressing and storing the retrieval result corresponding to the retrieval index in the database into the cache is performed by the second thread, and does not affect the first thread's operation of feeding back the retrieval result from the database. In other words, the cache update adopts an asynchronous update mechanism: the compression operation is not perceived by the user and does not lengthen the response time.
It will further be understood that, in the above embodiment, data update and data insertion may also involve updating the cache, and these may likewise be performed by the second thread; that is, all portions that update the cache are executed asynchronously, so as to improve the response speed of the system. A minimal sketch of this asynchronous split follows.
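As a hedged sketch of the two-thread split, reusing the CompressedCache sketch from step S104; the single-thread executor setup is an assumption, not something the disclosure prescribes.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the asynchronous cache update: the calling (first) thread returns
// the database result immediately, while a dedicated second thread compresses
// and stores it into the cache in the background.
public final class AsyncCacheWriter {
    private final ExecutorService cacheWriter = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "cache-writer"); // the "second thread"
        t.setDaemon(true);
        return t;
    });

    public byte[] onDatabaseHit(String key, byte[] result, CompressedCache cache) {
        cacheWriter.submit(() -> cache.put(key, result)); // compress + store, unseen by the user
        return result; // first thread: feed the result back without waiting
    }
}
```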
The benefit and overhead of this scheme for the cache layer as a whole can be analyzed through a simple example. Take the compression algorithm to be LZO (Lempel-Ziv-Oberhumer, lossless compression), which compresses data to roughly 80% of its original size, i.e. a 100KB data set is about 80KB after compression. Assume the system sets the cache capacity to 1000 entries and every entry stored in the cache is 100KB (in practice, a common Java object may occupy only a few hundred bytes). The uncompressed data set is 10^5 KB ≈ 97.66MB; compressed, it occupies 0.8 × 10^5 KB ≈ 78.13MB, saving 19.53MB. The LZO decompression rate is about 400MB/s, so the average retrieval duration grows by roughly 100KB ÷ 400MB/s ≈ 0.2ms. That is, at a cost of only about 0.2ms of retrieval duration, the saved space provides roughly an additional 20% of logical cache capacity.
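A quick back-of-envelope check of these figures; the 0.8 compression ratio and the 400MB/s decompression rate are the assumptions stated above, and the code merely re-derives the quoted numbers.

```java
// Re-derives the sizing example above from its stated assumptions.
public final class SizingExample {
    public static void main(String[] args) {
        double entries = 1000, entryKB = 100, ratio = 0.8, decompressMBps = 400;
        double rawMB = entries * entryKB / 1024; // 10^5 KB ~= 97.66 MB uncompressed
        double compressedMB = rawMB * ratio;     // 0.8 * 10^5 KB ~= 78.13 MB
        double savedMB = rawMB - compressedMB;   // ~= 19.53 MB of cache space saved
        double extraMs = entryKB / (decompressMBps * 1024) * 1000; // ~= 0.24 ms, rounded to 0.2 ms in the text
        System.out.printf("raw=%.2fMB compressed=%.2fMB saved=%.2fMB extra=%.2fms%n",
                rawMB, compressedMB, savedMB, extraMs);
    }
}
```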
By applying the technical scheme provided by the embodiments of the invention, when the task type is data retrieval and a retrieval result corresponding to the retrieval index is judged to exist in the cache, that retrieval result can be obtained directly from the cache and fed back, which is faster than feeding back the retrieval result from the database. Of course, if the retrieval result does not exist in the cache, the retrieval result corresponding to the retrieval index in the database is fed back instead. In addition, in the scheme of the application, the retrieval result is obtained by decompressing the data in the cache; that is, the data in the cache is stored compressed, which effectively saves cache space, allows more data to be held in the cache, and thereby improves the overall response speed of the system. Decompressing the data does take some time, but, as described above, the processor is far faster than memory and memory far faster than the hard disk, so the time added by decompression is very small while a large amount of cache space is saved; the overall response speed of the system is therefore effectively improved.
Corresponding to the above method embodiments, the embodiments of the present invention further provide a data caching system, which can be referred to in correspondence with the above.
Referring to fig. 2, a schematic structural diagram of a data caching system in the present invention is shown, including:
a task receiving unit 201, configured to generate a task carrying a task type and task data according to the received request;
a first judging unit 202, configured to judge whether the task type is data retrieval;
if yes, executing a second judging unit 203, configured to take the task data as a retrieval index, and judge whether a retrieval result corresponding to the retrieval index exists in the cache;
if the retrieval result exists in the cache, a first feedback unit 204 is executed, configured to obtain the retrieval result corresponding to the retrieval index by decompressing the data in the cache and to feed it back;
if the retrieval result does not exist in the cache, a third judging unit 205 is executed, configured to judge whether a retrieval result corresponding to the retrieval index exists in the database;
and if the retrieval result exists in the database, a second feedback unit 206 is executed, configured to feed back the retrieval result corresponding to the retrieval index in the database.
In one embodiment of the present invention, the system further comprises:
a cache updating unit configured to compress and store the retrieval result corresponding to the retrieval index in the database into the cache after the third judging unit 205 judges that the retrieval result corresponding to the retrieval index exists in the database; and eliminating the compressed data stored in the cache by a preset eviction strategy.
In a specific embodiment of the present invention, the cache updating unit eliminates the compressed data stored in the cache through a preset eviction policy, specifically:
and eliminating the compressed data stored in the cache through a preset LRU strategy.
In one embodiment of the present invention, the system further comprises:
a first execution unit configured to, when it is determined that the task type is data update, take the task data as update data and perform data update on the database, and compress and store the update data into the cache;
a second execution unit configured to, when it is determined that the task type is data insertion, take the task data as insertion data and perform data insertion to the database, and compress and store the insertion data into the cache;
and the third execution unit is used for determining a deletion object based on the task data and deleting data from the database when the task type is judged to be the data deletion.
In an embodiment of the present invention, the second feedback unit 206 is specifically configured to:
and feeding back the retrieval result corresponding to the retrieval index in the database through the first thread.
The cache updating unit compresses the retrieval result corresponding to the retrieval index in the database and stores the compressed retrieval result into the cache, specifically:
the cache updating unit compresses and stores the retrieval result corresponding to the retrieval index in the database into the cache through the second thread.
In a specific embodiment of the present invention, the system further comprises:
a prompt information output unit for feeding back prompt information indicating that the retrieval has failed after the third judgment unit 205 judges that the retrieval result corresponding to the retrieval index does not exist in the database.
In an embodiment of the present invention, the system further includes a consistency checking unit, configured to:
after the first feedback unit 204 decompresses the data in the cache to obtain the retrieval result corresponding to the retrieval index, verifying whether the retrieval result obtained after decompression is correct according to the consistency check rule;
if yes, feeding back the retrieval result;
if not, the operation of obtaining the retrieval result corresponding to the retrieval index through decompressing the data in the cache is executed again.
Corresponding to the above method and system embodiments, the embodiments of the present invention further provide a data caching device and a computer readable storage medium, which can be referred to in correspondence with the above.
The data caching device may include:
a memory for storing a computer program;
a processor for executing a computer program for implementing the steps of the data caching method as in any one of the above embodiments.
The computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the data caching method as in any one of the above embodiments. A computer-readable storage medium as referred to herein may include Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The principle and the implementation of the present invention are explained in the present application by using specific examples, and the above description of the embodiments is only used to help understanding the technical solution and the core idea of the present invention. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.

Claims (10)

1. A method for caching data, comprising:
generating a task carrying a task type and task data according to the received request;
judging whether the task type is data retrieval;
if so, taking the task data as a retrieval index, and judging whether a retrieval result corresponding to the retrieval index exists in a cache;
if the retrieval result exists in the cache, decompressing the data in the cache to obtain the retrieval result corresponding to the retrieval index, and feeding back the retrieval result;
if it does not exist in the cache, judging whether a retrieval result corresponding to the retrieval index exists in the database;
and if it exists in the database, feeding back the retrieval result corresponding to the retrieval index in the database.
2. The data caching method according to claim 1, wherein after determining that the retrieval result corresponding to the retrieval index exists in the database, the method further comprises:
compressing and storing retrieval results corresponding to the retrieval indexes in the database into a cache;
the data caching method further comprises the following steps:
and eliminating the compressed data stored in the cache by a preset eviction strategy.
3. The data caching method according to claim 2, wherein the elimination of the compressed data stored in the cache by a preset eviction policy comprises:
and eliminating the compressed data stored in the cache through a preset LRU strategy.
4. The data caching method of claim 2, further comprising:
when the task type is judged to be data updating, the task data is used as updating data, data updating is carried out on the database, and the updating data is compressed and stored into a cache;
when the task type is judged to be data insertion, taking the task data as insertion data, performing data insertion on the database, compressing the insertion data and storing the insertion data into a cache;
and when the task type is judged to be data deletion, determining a deletion object based on the task data and deleting data from the database.
5. The data caching method of claim 2, wherein feeding back the retrieval result corresponding to the retrieval index in the database comprises:
feeding back the retrieval result corresponding to the retrieval index in the database through a first thread;
and compressing and storing the retrieval result corresponding to the retrieval index in the database into the cache comprises:
compressing and storing the retrieval result corresponding to the retrieval index in the database into a cache through a second thread.
6. The data caching method according to claim 1, wherein after judging that no retrieval result corresponding to the retrieval index exists in the database, the method further comprises:
and feeding back prompt information indicating that the retrieval fails.
7. The data caching method according to any one of claims 1 to 5, wherein after the retrieval result corresponding to the retrieval index is obtained by decompressing the data in the cache, the method further comprises:
verifying whether the retrieval result obtained after decompression is correct or not according to a consistency check rule;
if yes, feeding back the retrieval result;
and if not, re-executing the operation of obtaining the retrieval result corresponding to the retrieval index by decompressing the data in the cache.
8. A data caching system, comprising:
the task receiving unit is used for generating a task carrying a task type and task data according to the received request;
the first judging unit is used for judging whether the task type is data retrieval;
if yes, executing a second judging unit, wherein the second judging unit is used for taking the task data as a retrieval index and judging whether a retrieval result corresponding to the retrieval index exists in a cache or not;
if the retrieval result exists in the cache, executing a first feedback unit for decompressing the data in the cache to obtain the retrieval result corresponding to the retrieval index and feeding back the retrieval result;
if it does not exist in the cache, executing a third judging unit for judging whether a retrieval result corresponding to the retrieval index exists in the database;
and if it exists in the database, executing a second feedback unit for feeding back the retrieval result corresponding to the retrieval index in the database.
9. A data caching apparatus, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the data caching method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the data caching method as claimed in any one of claims 1 to 7.
CN202111136168.1A 2021-09-27 2021-09-27 Data caching method, system, equipment and storage medium Pending CN113806408A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111136168.1A CN113806408A (en) 2021-09-27 2021-09-27 Data caching method, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111136168.1A CN113806408A (en) 2021-09-27 2021-09-27 Data caching method, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113806408A true CN113806408A (en) 2021-12-17

Family

ID=78896767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111136168.1A Pending CN113806408A (en) 2021-09-27 2021-09-27 Data caching method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113806408A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102693308A (en) * 2012-05-24 2012-09-26 北京迅奥科技有限公司 Cache method for real time search
CN105512129A (en) * 2014-09-24 2016-04-20 中国移动通信集团江苏有限公司 Method and device for mass data retrieval, and method and device for storing mass data
CN105512232A (en) * 2015-11-30 2016-04-20 北京金山安全软件有限公司 Data storage method and device
CN106649544A (en) * 2016-10-27 2017-05-10 国家电网公司信息通信分公司 Electricity information data retrieving method and device
CN108737556A (en) * 2018-05-29 2018-11-02 郑州云海信息技术有限公司 A kind of method, apparatus and equipment of processing REST requests
CN111367673A (en) * 2020-03-05 2020-07-03 山东中创软件商用中间件股份有限公司 Static resource acquisition method, device and related equipment
CN112286903A (en) * 2020-09-27 2021-01-29 苏州浪潮智能科技有限公司 Containerization-based relational database optimization method and device
CN112395440A (en) * 2020-11-24 2021-02-23 华中科技大学 Caching method, efficient image semantic retrieval method and system
CN112445834A (en) * 2019-08-30 2021-03-05 阿里巴巴集团控股有限公司 Distributed query system, query method, device, and storage medium
CN113271359A (en) * 2021-05-19 2021-08-17 北京百度网讯科技有限公司 Method and device for refreshing cache data, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
US10564850B1 (en) Managing known data patterns for deduplication
US9727479B1 (en) Compressing portions of a buffer cache using an LRU queue
CN108268219B (en) Method and device for processing IO (input/output) request
US9772949B2 (en) Apparatus, system and method for providing a persistent level-two cache
US8621143B2 (en) Elastic data techniques for managing cache storage using RAM and flash-based memory
CN109977129A (en) Multi-stage data caching method and equipment
EP2478442A1 (en) Caching data between a database server and a storage system
CN107888687B (en) Proxy client storage acceleration method and system based on distributed storage system
EP4321980A1 (en) Method and apparatus for eliminating cache memory block, and electronic device
CN107430551B (en) Data caching method, storage control device and storage equipment
WO2014188528A1 (en) Memory device, computer system, and memory device control method
EP3278229B1 (en) Compressed pages having data and compression metadata
CN109726264B (en) Method, apparatus, device and medium for index information update
CN111857574A (en) Write request data compression method, system, terminal and storage medium
CN108829345B (en) Data processing method of log file and terminal equipment
US8732404B2 (en) Method and apparatus for managing buffer cache to perform page replacement by using reference time information regarding time at which page is referred to
US10430115B2 (en) System and method for optimizing multiple packaging operations in a storage system
CN111913913B (en) Access request processing method and device
CN115470157A (en) Prefetching method, electronic device, storage medium, and program product
CN113806408A (en) Data caching method, system, equipment and storage medium
CN111694806B (en) Method, device, equipment and storage medium for caching transaction log
CN114461590A (en) Database file page prefetching method and device based on association rule
US20140115246A1 (en) Apparatus, system and method for managing empty blocks in a cache
CN112015791A (en) Data processing method and device, electronic equipment and computer storage medium
US11481143B2 (en) Metadata management for extent-based storage system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination