CN113806389A - Data processing method and device, computing equipment and storage medium - Google Patents

Data processing method and device, computing equipment and storage medium

Info

Publication number
CN113806389A
CN113806389A (application CN202111107116.1A)
Authority
CN
China
Prior art keywords
data
cache
database
modified
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111107116.1A
Other languages
Chinese (zh)
Inventor
王福源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weikun Shanghai Technology Service Co Ltd
Original Assignee
Weikun Shanghai Technology Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weikun Shanghai Technology Service Co Ltd filed Critical Weikun Shanghai Technology Service Co Ltd
Priority to CN202111107116.1A priority Critical patent/CN113806389A/en
Publication of CN113806389A publication Critical patent/CN113806389A/en
Pending legal-status Critical Current

Classifications

    • G — no, wait: PHYSICS

Abstract

The application provides a data processing method comprising the following steps: a server obtains a change request from a client, where the change request indicates that data to be modified in a database should be modified and the data to be modified is any data in the database; the server modifies the data to be modified in the database according to the change request; and, in the case that the data to be modified also exists in a cache, the server modifies the data to be modified in the cache according to the change request. After the data in the database is changed, the corresponding data in the cache is updated without any additional instruction, which ensures consistency between the data in the cache and the data in the database.

Description

Data processing method and device, computing equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method and apparatus, a computing device, and a storage medium.
Background
In the internet era, both the complexity of requests and the complexity of data have risen dramatically, and resource consumption keeps increasing accordingly. When a request reads or writes a database, the system must go to disk to retrieve the corresponding data, which is a relatively slow process. In order to speed up data loading, data is often stored in a cache and fetched directly from the cache and returned to the user, which greatly improves the processing speed of the request.
In the traditional cache mode, part of the data in the database is stored in a redis cache, but the use and the maintenance of the redis cache are performed independently of each other. When data in the database changes, separate instructions are required to update the cache entry by entry, which increases the workload and easily leads to inconsistency between the data in the database and the data in the cache.
Disclosure of Invention
The application provides a data processing method that updates the data in the cache directly after the data in the database is changed, without any additional instruction, thereby ensuring data consistency and reducing the operational workload. In addition, the data in the cache is cleaned up regularly according to the access times and the storage time of the data, which saves cache cost.
In a first aspect, the present application provides a data processing method, including: acquiring a change request from a client, wherein the change request indicates that data to be modified in a database is to be modified; modifying the data to be modified in the database according to the change request; and, in the case that the data to be modified exists in a cache, modifying the data to be modified in the cache according to the change request.
After the data to be modified in the database is modified according to the change request, the target primary key is compared with the primary key of each piece of data in the cache. If the data to be modified exists in the cache, the data to be modified in the cache is updated according to the change request, so that the modification is synchronized to the cache and consistency between the data in the cache and the data in the database is ensured.
In one possible implementation, the method further includes: acquiring a query request from a client and searching for the data to be queried in the cache according to the query request, wherein the data to be queried is the data requested by the query request; and, in the case that the data to be queried does not exist in the cache, searching for the data to be queried in the database according to the query request, sending the data to be queried found in the database to the client, and writing the data to be queried found in the database into the cache.
When the data to be queried cannot be found in the cache, it can be searched for in the database. If the data to be queried were not then written into the cache, every subsequent access would have to go to the database, which reduces the processing speed of the request. Therefore, the data to be queried is loaded from the database and returned to the client and, at the same time, written into the cache, which reduces the latency of subsequent accesses to that data.
In one possible implementation, the method further includes: acquiring the access times corresponding to target data in a cache, wherein the access times refer to the access times to the data within a first preset time length, and the target data is any one data in the cache; under the condition that the access times corresponding to the target data are larger than or equal to a first threshold value, the target data are reserved in the cache; and deleting the target data in the cache under the condition that the access times corresponding to the target data are less than a first threshold value.
The server can check, at a preset period, the access times of the data stored in the cache. If the access times of a piece of data are less than the first threshold, the data is regarded as recently low-frequency data and is deleted from the cache, leaving sufficient space in the cache for high-frequency data. If the access times are greater than or equal to the first threshold, the data is regarded as commonly used, high-frequency data and is retained in the cache so that queries can be answered more quickly.
In one possible implementation, the method further includes: acquiring, every second preset time length, the storage time of target data in the cache, wherein the storage time refers to how long the data has been stored in the cache and the target data is any piece of data in the cache; retaining the target data in the cache in the case that the storage time of the target data is less than or equal to a second threshold; and deleting the target data from the cache in the case that the storage time of the target data is greater than the second threshold.
For data stored in the cache, if its storage time is less than or equal to the second threshold, the data is regarded as relatively new and is retained in the cache so that queries can be answered more quickly. If its storage time is greater than the second threshold, the data is regarded as relatively old and is deleted from the cache, leaving sufficient space in the cache for high-frequency data.
In one possible implementation, writing data to be queried in the database to a cache includes: and acquiring the access times corresponding to the data to be queried, and writing the data to be queried into the cache under the condition that the access times corresponding to the data to be queried are greater than a third threshold value.
Storage overhead in the cache is significant, and to save costs not all of the data in the database can be written to the cache. The access times of each piece of data are recorded, and when the access times are greater than a third threshold, the data is regarded as high-frequency data and is written into the cache to speed up request processing.
In a possible implementation manner, before modifying the data to be modified in the database according to the change request, the method further includes: parsing the change request and determining a target primary key in the change request; and comparing the target primary key with the primary keys in the database, and if a first primary key in the database is consistent with the target primary key, determining that the data corresponding to the first primary key is the data to be modified by the change request.
In a second aspect, the present application provides a data processing apparatus, including an obtaining unit, a changing unit, and a synchronizing unit: the obtaining unit is configured to obtain a change request from a client, wherein the change request indicates that data to be modified in a database is to be modified and the data to be modified is one of the pieces of data in the database; the changing unit is configured to modify the data to be modified in the database according to the change request; and the synchronizing unit is configured to modify the data to be modified in the cache according to the change request in the case that the data to be modified exists in the cache.
In a possible implementation manner, the apparatus further includes a statistics unit: the statistics unit is configured to obtain the access times corresponding to target data, wherein the access times refer to the number of accesses to the data within a first preset time length and the target data is any piece of data in the cache. The changing unit is further configured to delete the target data from the cache in the case that the access times corresponding to the target data are less than the first threshold.
In a third aspect, the present application provides a computing device comprising a processor and a memory; the memory is for storing instructions and the processor is for executing the instructions, and when the processor executes the instructions, the computing device performs the method as in the first aspect or any possible implementation manner of the first aspect.
In a fourth aspect, the present application provides a computer storage medium storing a computer program which, when executed by a processor, implements a method as in the first aspect or any possible implementation manner of the first aspect.
According to the method and the system of the present application, the database and the cache are associated so that no extra operation instruction is needed when the database is changed: the change to the data is synchronized to the cache, which saves cache-maintenance workload and improves the response speed of the system to the client. At the same time, the data in the cache is regularly updated and cleaned up, which reduces the storage cost of the cache.
Drawings
FIG. 1 is a process diagram of a data processing system according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a data processing method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of a data processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating steps of a data processing method according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [described condition or event]" or "in response to detecting [described condition or event]".
First, an application scenario of the present application is described, and some terms related to the present application are explained to facilitate understanding of the technical solution of the present application. It is worthy to note that the terminology used in the description of the embodiments section of the present application is for the purpose of describing particular embodiments of the present application only and is not intended to be limiting of the present application.
The basic flow of a data query is that a page sends a request to a service interface, the database is accessed through the service interface, the loaded data is returned to the service interface, and the service interface pushes the data to the page for display. When a request reads or writes the database, the database has to go to disk to retrieve the corresponding data, which is a very time-consuming process. In order to increase query response speed, a common approach is to load part of the data in the database into a cache; when a user queries the data, it is read directly from the cache, which speeds up request processing and improves system responsiveness.
Putting data in the cache means putting it directly in memory, and queries read the data directly from memory. Although this speeds up responses and also relieves pressure on the database, in the traditional mode the use and the maintenance of the cache are independent of each other. Cache use includes querying and loading data from the cache, and cache maintenance includes replenishing and updating the data in the cache. When data is modified or added, the operation changes the data in the database, but the change cannot be synchronized to the cache automatically. At this point, reading from the cache returns expired data, while newly added data, which has not yet been written into the cache, has to be read from the database, which greatly reduces the response speed of the request. To update the data in the cache, separate instructions are needed to modify and add the data in the cache, which not only increases the workload but also easily causes inconsistency between the data being used and the data being maintained.
In addition, the memory used for data storage in the cache is expensive. If all the data were stored in the cache, the storage cost of processing requests would rise; for cost reasons, it is necessary to select commonly used data to store in the cache.
Referring to FIG. 1, a client sends a write request that changes data A and adds data B to the database; data A is modified in the database and data B is added to the database. Since the cache and the database are not updated synchronously, additional operations are required to update the cache. Before the data in the cache is updated, a read request for data A and data B sent by the client is received: data A is read from the cache but is already expired, and data B, which is not in the cache, has to be read from the database and returned. Reading data A therefore yields expired data and reading data B has to go to the database, which increases the response time of the system. Updating the cache requires separate instructions, which adds considerable workload and affects the use of the cache, and this disadvantage becomes more serious as caching demands grow.
In order to solve the problems, when data is changed, synchronous change of the database and the cache is completed through one request. A data processing method provided in the embodiment of the present application is described below, and referring to fig. 2, the method includes S201 to S203.
S201, the server obtains a change request of the client.
The change request indicates that the data to be modified in the database is to be modified, and the data to be modified is any data in the database.
S202, the server modifies the data to be modified in the database according to the change request.
After receiving the change request, the server parses it. Specifically, the server parses the keyword in the change request, performs a hash operation based on the keyword to determine the target primary key of the data being looked up, that is, the target primary key of the data to be modified, compares the target primary key with the primary keys of all data in the database, finds the primary key consistent with the target primary key, determines the data corresponding to that primary key as the data to be modified, and modifies the data to be modified based on the change request.
In the embodiment of the application, when data is stored in the cache and in the database, it comprises a primary key and a data part; the primary key identifies the data in storage and serves as its query index. The primary key is formed by combining the system identifier of the data with a hash value. The system identifier is globally unique within a deployment environment and ensures that calls from multiple systems to different data do not conflict. The hash value is obtained by performing a hash operation on the data, mapping each piece of data to a unique hash value. A hash algorithm is a one-way cryptographic scheme: the mapping from plaintext to ciphertext is irreversible, with only an encryption process and no decryption process. At the same time, a hash algorithm turns input of any length into output of a fixed length. This one-way property and the fixed-length output allow it to generate a digest of a message or of data. In the embodiments of this scheme, the hash operation may be performed on the data by hash algorithms such as MD4, MD5, and SHA-1 to obtain the corresponding hash value.
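As a minimal sketch of how such a primary key could be formed, the following Python snippet combines a system identifier with an MD5 digest of the data; the function name build_primary_key and the example identifiers are illustrative assumptions, not part of this application.

```python
import hashlib

def build_primary_key(system_id: str, data: bytes) -> str:
    # Illustrative assumption: the primary key joins the globally unique system
    # identifier with a fixed-length hash of the data, as described above.
    digest = hashlib.md5(data).hexdigest()  # MD5 is one of the example algorithms (MD4, MD5, SHA-1)
    return f"{system_id}:{digest}"

# Two systems storing the same payload still obtain distinct keys, because the
# system identifier prefix keeps calls from multiple systems from colliding.
key_a = build_primary_key("system-a", b'{"user": 42, "amount": 100}')
key_b = build_primary_key("system-b", b'{"user": 42, "amount": 100}')
```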
S203, the server modifies the data to be modified in the cache according to the change request in the case that the data to be modified exists in the cache.
After the data to be modified in the database has been modified according to the change request, the modification is synchronized to the cache. The server compares the target primary key with the primary key of each piece of data in the cache; if the data to be modified does not exist in the cache, no cache modification is performed. If the data to be modified exists in the cache, it is modified according to the change request.
Because the target primary key is compared with the primary key of each piece of data in the cache after the database has been modified, and the cached copy is updated according to the change request whenever the data to be modified exists in the cache, the modification made in the database is synchronized to the cache within the same request. This removes the extra workload of modifying the cached data separately and ensures consistency between the data in the cache and the data in the database.
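The flow of S201 to S203 can be sketched in Python as follows; the dictionary-based db and cache and the ChangeRequest shape are hypothetical placeholders for whatever storage layer and request format are actually used, with the target key assumed to have been derived by the keyword-parsing and hashing step described above.

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    # Hypothetical shape of a change request: the primary key of the record to
    # modify (derived from the parsed keyword) and the new value to store.
    target_key: str
    new_value: str

def handle_change_request(db: dict, cache: dict, req: ChangeRequest) -> None:
    # Sketch of S201-S203 under the stated assumptions: one request updates the
    # database and, if the record is cached, applies the same change to the cache.
    db[req.target_key] = req.new_value          # S202: modify the data in the database
    if req.target_key in cache:                 # compare the target key with the cached keys
        cache[req.target_key] = req.new_value   # S203: synchronize the change to the cache
    # If the key is not cached, no cache modification is performed.

# Example: data A is cached and is updated in both stores; data B exists only in the database.
db = {"A": "old", "B": "old"}
cache = {"A": "old"}
handle_change_request(db, cache, ChangeRequest("A", "new"))
```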
The embodiment of the present application further provides a data processing method, see fig. 3.
S301, acquiring a query request of the client, and searching data to be queried in the cache according to the query request.
The data to be queried is the data requested by the query request. After the query request from the client is obtained, the query request is parsed and the cache is checked for the data to be queried. Specifically, the query keyword in the query request is parsed, a hash operation is performed on it to determine the target primary key of the data being looked up, and the cache is checked for corresponding data. The target primary key is compared with the primary key of each piece of data in the cache; if the primary key of some data in the cache is consistent with the target primary key, it is determined that the data to be queried exists in the cache. The primary key consistent with the target primary key is found, the data corresponding to it is determined to be the data to be queried, and the data to be queried is sent to the client as the access result.
In the embodiment of the application, when data is stored in the cache and in the database, it comprises a primary key and a data part; the primary key identifies the data in storage and serves as its query index. The primary key is formed by combining the system identifier of the data with a hash value. The system identifier is globally unique within a deployment environment and ensures that calls from multiple systems to different data do not conflict. The hash value is obtained by performing a hash operation on the data, mapping each piece of data to a unique hash value. A hash algorithm is a one-way cryptographic scheme: the mapping from plaintext to ciphertext is irreversible, with only an encryption process and no decryption process. At the same time, a hash algorithm turns input of any length into output of a fixed length. This one-way property and the fixed-length output allow it to generate a digest of a message or of data. In the embodiments of this scheme, the hash operation may be performed on the data by hash algorithms such as MD4, MD5, and SHA-1 to obtain the corresponding hash value.
S302, under the condition that the data to be queried does not exist in the cache, the data to be queried is searched in the database according to the query request, the data to be queried searched in the database is sent to the client, and the data to be queried searched in the database is written into the cache.
In the case that the data to be queried does not exist in the cache, the target primary key is compared with the primary keys of all data in the database, and whether the data to be queried exists in the database is determined by whether the primary key of some data in the database is consistent with the target primary key. If a primary key consistent with the target primary key is found, the data corresponding to it is determined to be the data to be queried, and the found data is sent to the client as the access result. If no primary key consistent with the target primary key can be found, the data sought by the query request does not exist in the database, and a null value is returned to the client as the access result.
Because the data to be queried is not stored in the cache, it can only be found in the database when the cache lookup fails. If it were not then written into the cache, every subsequent access would have to go to the database, which reduces the processing speed of the request. Therefore, the found data is loaded from the database and returned to the client and, at the same time, written into the cache, so that the cache is updated and maintained. After a query request misses in the cache, the data is loaded from the database and written into the cache at the same time; the next lookup of this data can be served from the cache, which reduces the response time of the request, and performing the cache write while loading and returning the data avoids the workload of writing to the cache with a separate instruction.
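A minimal Python sketch of this read path (S301 to S302) is shown below; the dictionary-based db and cache are assumed placeholders, and the target key is again assumed to have been derived by the keyword-parsing and hashing step described earlier.

```python
from typing import Optional

def handle_query_request(db: dict, cache: dict, target_key: str) -> Optional[str]:
    # Sketch of S301-S302 under the stated assumptions: serve from the cache when
    # possible; on a miss, read the database, return the result, and write it
    # back into the cache for later queries.
    value = cache.get(target_key)
    if value is not None:              # cache hit: return directly from memory
        return value
    value = db.get(target_key)         # cache miss: load from the database
    if value is None:
        return None                    # not in the database either: return a null result
    cache[target_key] = value          # write on miss so the next lookup hits the cache
    return value

# Example: the first lookup of "B" goes to the database and populates the cache.
db, cache = {"A": "1", "B": "2"}, {"A": "1"}
handle_query_request(db, cache, "B")
```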
Putting data in the cache improves the response speed of the system to the client and also relieves pressure on the database, but the overhead of storing data in the cache is high, so commonly used data must be selected for the cache. High-frequency data that clients query often is stored in the cache, while low-frequency data that is queried less often is kept only in the database. Because clients' query demands change over time, the sets of high-frequency and low-frequency data do not stay fixed, and the data in the cache needs to be updated and cleaned up regularly.
In a possible implementation manner, after a query request from the client is obtained, the access times of the data to be queried are recorded, where the access times refer to the number of accesses to the data within a preset time length. The query keyword in the query request is parsed, a hash operation is performed on it to determine the target primary key of the data being looked up, and the target primary key of the query request is recorded and its number of occurrences counted. The number of occurrences of the target primary key is the number of times the data corresponding to that target primary key has been queried, so the access times of that data are recorded according to the number of occurrences of its target primary key.
The server obtains, at a preset period, the access times corresponding to each piece of data in the cache and cleans the cache according to them. In the case that the access times corresponding to target data are greater than or equal to a first threshold, the target data is retained in the cache, the target data being any piece of data in the cache; in the case that the access times corresponding to the target data are less than the first threshold, the target data is deleted from the cache. The access times corresponding to the target data are then cleared and counted afresh.
Specifically, suppose the preset first threshold is 6. After a period of time, the access times of the data stored in the cache are checked. If the access times of data X are 3, which is less than the first threshold, data X is regarded as recently low-frequency data and is deleted from the cache; if the access times of data X are 10, which is greater than the first threshold, data X is regarded as commonly used, recently high-frequency data and is retained in the cache so that queries can be answered more quickly.
According to how often data has been queried recently, hot data with a higher query frequency is retained in the cache and data that is not queried often is deleted, which prevents excessive cache redundancy, makes the cache respond faster, and saves cache storage cost.
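A small Python sketch of this periodic cleanup is given below; the dictionary-based cache and access_counts structures and the threshold value 6 are illustrative assumptions taken from the example above.

```python
def clean_cache_by_access_count(cache: dict, access_counts: dict, first_threshold: int = 6) -> None:
    # Keep entries accessed at least `first_threshold` times in the last window,
    # evict the rest, then reset the counters for the next window.
    for key in list(cache.keys()):
        if access_counts.get(key, 0) < first_threshold:
            del cache[key]          # low-frequency data: delete from the cache
    access_counts.clear()           # start counting afresh for the next period

# Example: data X with 3 accesses is evicted, data Y with 10 accesses is kept.
cache = {"X": "value-x", "Y": "value-y"}
clean_cache_by_access_count(cache, {"X": 3, "Y": 10})
```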
In a possible implementation manner, the server may also clean the cache according to the data storage time in the cache. Specifically, the server obtains the storage duration of each data in the cache in a preset period, and under the condition that the storage time of the target data is smaller than a second threshold, the target data is reserved in the cache; and deleting the target data in the cache under the condition that the storage time of the target data is greater than a second threshold value.
Specifically, suppose the preset second threshold is 2 days. For data stored in the cache, if the storage time of data Y is 1 day, which is less than the second threshold, data Y is regarded as relatively new data and is retained in the cache so that queries can be answered more quickly. If the storage time of data Y is 3 days, which is greater than the second threshold, data Y is regarded as relatively old data and is deleted from the cache.
According to the time for storing the data into the cache, the recently written data is reserved in the cache, and the early data in the cache is deleted, so that excessive redundancy of the cache is prevented, the response speed of the cache is higher, and the storage cost of the cache is saved.
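A corresponding Python sketch of the age-based cleanup follows; the stored_at timestamp map and the 2-day threshold mirror the example above and are assumptions for illustration only.

```python
import time

def clean_cache_by_age(cache: dict, stored_at: dict, second_threshold: float = 2 * 24 * 3600) -> None:
    # Delete entries that have been in the cache longer than `second_threshold`
    # seconds (2 days here); newer entries stay so recent data responds quickly.
    now = time.time()
    for key in list(cache.keys()):
        if now - stored_at.get(key, now) > second_threshold:
            del cache[key]             # early data: delete from the cache
            stored_at.pop(key, None)

# Example: data Y stored 3 days ago is evicted, data Z stored 1 day ago is kept.
cache = {"Y": "old", "Z": "new"}
stored_at = {"Y": time.time() - 3 * 24 * 3600, "Z": time.time() - 1 * 24 * 3600}
clean_cache_by_age(cache, stored_at)
```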
The present application further provides a data processing method; referring to FIG. 4, FIG. 4 shows the data processing steps provided in an embodiment of the present application.
The data to be queried is first searched for in the cache according to the query request: the query keyword in the query request is parsed and a hash operation is performed on it to determine the target primary key of the data being looked up. The target primary key is compared with the primary key of each piece of data in the cache; if the primary key of some data in the cache is consistent with the target primary key, it is determined that the data to be queried exists in the cache. The data corresponding to the matching primary key is determined to be the data to be queried and is sent to the client as the access result.
In the case that the data to be queried does not exist in the cache, it is searched for in the database according to the query request: the target primary key is compared with the primary key of each piece of data in the database, and whether the data to be queried exists in the database is determined by whether the primary key of some data in the database is consistent with the target primary key. If no primary key consistent with the target primary key can be found, the data sought by the query request does not exist in the database, and a null value is returned to the client as the access result. If a primary key consistent with the target primary key is found, the corresponding data is determined to be the data to be queried and is sent to the client as the access result.
Because the data to be queried was not stored in the cache, after it has been loaded from the database it needs to be written into the cache so that the cache is updated and maintained.
In a possible implementation manner, before the data to be queried is written into the cache, its access times are looked up, and whether to write it is decided by comparing the access times with a third threshold, which is a preset value. By setting the third threshold, only data with enough accesses is written into the cache: if the access times of the data to be queried are greater than the third threshold, the data is written into the cache; if the access times are less than the third threshold, the write is not performed.
Specifically, suppose the preset third threshold is 10. A query request for data M sent by the client is obtained, the query keyword in the query request is parsed, the primary key corresponding to data M is determined, and this query of data M is recorded in its access times. In the case that data M is not found in the cache, it is searched for in the database. Data M found in the database is returned to the client and passed to the cache in preparation for being written. Before the write, the access times of data M are checked: if the access times are 8, which is less than the third threshold, data M is regarded as low-frequency data, is ignored, and the cache write is not performed; if the access times are 11, which is greater than the third threshold, data M is regarded as high-frequency data and is written into the cache to improve the response speed of the system to the client, so that it can be loaded directly from the cache the next time it is queried.
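The following Python sketch shows this gated cache write; the dictionary-based structures and the threshold value 10 are assumptions taken from the example above.

```python
def maybe_write_to_cache(cache: dict, access_counts: dict, key: str, value, third_threshold: int = 10) -> bool:
    # Only data queried more than `third_threshold` times is treated as
    # high-frequency data and written into the cache; other data is skipped.
    if access_counts.get(key, 0) > third_threshold:
        cache[key] = value          # high-frequency data: cache it for faster responses
        return True
    return False                    # low-frequency data: do not occupy cache space

# Example from the text: 8 accesses would not be cached; 11 accesses are cached.
cache, counts = {}, {"M": 11}
maybe_write_to_cache(cache, counts, "M", "value-m")
```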
The cache is modified while the data in the database is changed, so that the workload of updating the cache is saved, and the consistency between the cache and the data in the database is kept. And dynamically managing the data in the cache according to the queried frequency and the storage time of the data, and cleaning the low-frequency data in the cache, so that the storage and the use of the cache are more reasonable.
An embodiment of the present application provides a data processing apparatus, and referring to fig. 5, a data processing apparatus 500 includes an obtaining unit 510, a changing unit 520, and a synchronizing unit 530;
the obtaining unit 510 is configured to obtain a change request of a client, where the change request indicates to modify data to be modified in a database, and the data to be modified is one of data in the database;
the changing unit 520 is configured to modify the data to be modified in the database according to the change request;
the synchronizing unit 530 is configured to modify the data to be modified in the cache according to the change request when the data to be modified exists in the cache.
Further, the data processing apparatus 500 includes a statistics unit 540.
The statistics unit 540 is configured to obtain the access times corresponding to target data in the cache, where the access times refer to the number of accesses to the data within a first preset time duration, and the target data is any data in the cache.
the changing unit 520 is further configured to delete the target data in the cache if the number of accesses corresponding to the target data is less than the first threshold.
Specifically, the process of implementing data processing by the data processing apparatus 500 may refer to the method described in the method embodiment shown in fig. 2, fig. 3, or fig. 4, and is not described herein again.
Fig. 6 is a schematic structural diagram of a computing device provided in an embodiment of the present application, including: one or more processors 610, a communication interface 620, and a memory 630. Optionally, the processor 610, the communication interface 620 and the memory 630 are connected to each other through a bus 640.
The processor 610 may be implemented in various ways. For example, the processor 610 may be a central processing unit (CPU) or a graphics processing unit (GPU); the processor 610 may be a single-core or multi-core processor; and the processor 610 may also be a combination of a CPU and a hardware chip.
The communication interface 620 may be a wired interface, such as an Ethernet interface or a Local Interconnect Network (LIN) interface, or a wireless interface, such as a cellular network interface or a wireless LAN interface, for communicating with other modules or devices.
In the embodiment of the present application, the communication interface 620 may be specifically configured to perform the operations in S201 to S203 in fig. 2. Specifically, the actions performed by the communication interface 620 may refer to the above method embodiments, and are not described herein again.
The memory 630 may be a non-volatile memory, such as a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. Memory 630 may also be volatile memory, which may be Random Access Memory (RAM), which acts as external cache memory.
Memory 630 may also be used for storing instructions and data. In addition, server 600 may contain more or fewer components than shown in FIG. 6, or have a different arrangement of components.
The bus 640 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 640 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 6, but this is not intended to represent only one bus or type of bus.
Optionally, the server 600 may further include an input/output interface 650, and the input/output interface 650 is connected with an input/output device for receiving input information and outputting an operation result.
The embodiments provided herein may be implemented in any one or combination of hardware, software, firmware, or solid state logic circuitry, and may be implemented in connection with signal processing, control, and/or application specific circuitry. Particular embodiments of the present application provide an apparatus or device that may include one or more processors (e.g., microprocessors, controllers, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), etc.) that process various computer-executable instructions to control the operation of the apparatus or device. Particular embodiments of the present application provide an apparatus or device that can include a system bus or data transfer system that couples the various components together. A system bus can include any of a variety of different bus structures or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures. The devices or apparatuses provided in the embodiments of the present application may be provided separately, or may be part of a system, or may be part of other devices or apparatuses.
Particular embodiments provided herein may include or be combined with computer-readable storage media, such as one or more storage devices capable of providing non-transitory data storage. The computer-readable storage medium/storage device may be configured to store data, programs and/or instructions that, when executed by a processor of an apparatus or device provided by embodiments of the present application, cause the apparatus or device to perform operations associated therewith. The computer-readable storage medium/storage device may include one or more of the following features: volatile, non-volatile, dynamic, static, read/write, read-only, random access, sequential access, location addressability, file addressability, and content addressability. In one or more exemplary embodiments, the computer-readable storage medium/storage device may be integrated into a device or apparatus provided in the embodiments of the present application or belong to a common system. The computer-readable storage medium/storage device may include optical, semiconductor, and/or magnetic memory devices, etc., and may also include Random Access Memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, a recordable and/or rewriteable Compact Disc (CD), a Digital Versatile Disc (DVD), a mass storage media device, or any other form of suitable storage media.
The above is an implementation manner of the embodiments of the present application, and it should be noted that the steps in the method described in the embodiments of the present application may be sequentially adjusted, combined, and deleted according to actual needs. In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. It is to be understood that the embodiments of the present application and the structures shown in the drawings are not to be construed as particularly limiting the devices or systems concerned. In other embodiments of the present application, an apparatus or system may include more or fewer components than the specific embodiments and figures, or may combine certain components, or may separate certain components, or may have a different arrangement of components. Those skilled in the art will understand that various modifications and changes may be made in the arrangement, operation, and details of the methods and apparatus described in the specific embodiments without departing from the spirit and scope of the embodiments herein; without departing from the principles of embodiments of the present application, several improvements and modifications may be made, and such improvements and modifications are also considered to be within the scope of the present application.

Claims (10)

1. A data processing method, comprising:
acquiring a change request of a client, wherein the change request indicates that data to be modified in a database is modified;
modifying the data to be modified in the database according to the change request;
and under the condition that the data to be modified exists in the cache, modifying the data to be modified in the cache according to the change request.
2. The method of claim 1, further comprising:
acquiring a query request of a client, and searching data to be queried in the cache according to the query request;
and under the condition that the data to be queried does not exist in the cache, searching the data to be queried in the database according to the query request, sending the data to be queried searched in the database to the client, and writing the data to be queried searched in the database into the cache.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
acquiring the access times corresponding to target data in the cache, wherein the access times refer to the access times to the data within a first preset time length, and the target data is any one data in the cache;
when the access times corresponding to the target data are larger than or equal to a first threshold value, the target data are reserved in the cache;
and deleting the target data in the cache under the condition that the access times corresponding to the target data are smaller than the first threshold.
4. The method according to claim 1 or 2, characterized in that the method further comprises:
acquiring, every second preset time length, the storage time of target data in the cache, wherein the storage time refers to the time length for which the data has been stored in the cache, and the target data is any one piece of data in the cache;
if the storage time of the target data is less than or equal to a second threshold value, retaining the target data in the cache;
and deleting the target data in the cache under the condition that the storage time of the target data is greater than the second threshold value.
5. The method of claim 2, wherein writing the data to be queried in the database to the cache comprises:
and acquiring the access times corresponding to the data to be queried, and writing the data to be queried into the cache under the condition that the access times corresponding to the data to be queried are greater than a third threshold value.
6. The method of claim 1, prior to modifying the data to be modified in the database according to the change request, the method further comprising:
analyzing the change request, and determining a target primary key in the change request;
and comparing the target primary key with the primary keys in the database, and if a first primary key in the database is consistent with the target primary key, determining that the data corresponding to the first primary key is the data to be modified by the change request.
7. A data processing apparatus, comprising an obtaining unit, a changing unit, and a synchronizing unit:
the obtaining unit is configured to obtain a change request from a client, where the change request indicates that data to be modified in a database is to be modified, and the data to be modified is one of the pieces of data in the database;
the changing unit is configured to modify the data to be modified in the database according to the change request;
and the synchronizing unit is configured to modify the data to be modified in the cache according to the change request under the condition that the data to be modified exists in the cache.
8. The apparatus of claim 7, further comprising a statistics unit:
the statistics unit is configured to obtain access times corresponding to target data in the cache, where the access times refer to access times to data within a first preset time duration, and the target data is any one of the data in the cache;
the changing unit is further configured to delete the target data in the cache if the number of accesses corresponding to the target data is less than the first threshold.
9. A computing device, comprising a processor and a memory, wherein the memory is configured to store instructions, the processor is configured to execute the instructions, and when the processor executes the instructions, the computing device performs the method of any one of claims 1 to 6.
10. A computer storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 6.
CN202111107116.1A 2021-09-22 2021-09-22 Data processing method and device, computing equipment and storage medium Pending CN113806389A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111107116.1A CN113806389A (en) 2021-09-22 2021-09-22 Data processing method and device, computing equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111107116.1A CN113806389A (en) 2021-09-22 2021-09-22 Data processing method and device, computing equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113806389A true CN113806389A (en) 2021-12-17

Family

ID=78939884

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111107116.1A Pending CN113806389A (en) 2021-09-22 2021-09-22 Data processing method and device, computing equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113806389A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102043653A (en) * 2010-12-23 2011-05-04 中国农业银行股份有限公司 Cache system and methods for modifying cache configuration and operating and querying cache data
CN107122140A (en) * 2017-05-02 2017-09-01 郑州云海信息技术有限公司 A kind of file intelligent storage method based on metadata information
CN109325056A (en) * 2018-08-21 2019-02-12 中国平安财产保险股份有限公司 A kind of big data processing method and processing device, communication equipment
CN110795457A (en) * 2019-09-24 2020-02-14 苏宁云计算有限公司 Data caching processing method and device, computer equipment and storage medium
CN110990439A (en) * 2019-12-13 2020-04-10 深圳前海环融联易信息科技服务有限公司 Cache-based quick query method and device, computer equipment and storage medium
CN111176560A (en) * 2019-12-17 2020-05-19 腾讯科技(深圳)有限公司 Cache management method and device, computer equipment and storage medium
CN111046106A (en) * 2019-12-19 2020-04-21 杭州中恒电气股份有限公司 Cache data synchronization method, device, equipment and medium
CN112035766A (en) * 2020-08-05 2020-12-04 北京三快在线科技有限公司 Webpage access method and device, storage medium and electronic equipment
CN112559573A (en) * 2020-12-24 2021-03-26 京东数字科技控股股份有限公司 Data caching method, device, equipment and computer readable medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117472293A (en) * 2023-12-27 2024-01-30 荣耀终端有限公司 Data storage method, electronic equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
JP6916751B2 (en) Hybrid memory module and its operation method
US10198363B2 (en) Reducing data I/O using in-memory data structures
US11307769B2 (en) Data storage method, apparatus and storage medium
CN108459826B (en) Method and device for processing IO (input/output) request
US8719206B2 (en) Pattern-recognition processor with matching-data reporting module
US20130097402A1 (en) Data prefetching method for distributed hash table dht storage system, node, and system
CN111309720A (en) Time sequence data storage method, time sequence data reading method, time sequence data storage device, time sequence data reading device, electronic equipment and storage medium
CN110555001B (en) Data processing method, device, terminal and medium
US20160246724A1 (en) Cache controller for non-volatile memory
US20190220443A1 (en) Method, apparatus, and computer program product for indexing a file
WO2022156650A1 (en) Data access method and apparatus
CN112148736B (en) Method, device and storage medium for caching data
CN110910249A (en) Data processing method and device, node equipment and storage medium
CN113806389A (en) Data processing method and device, computing equipment and storage medium
US11455117B2 (en) Data reading method, apparatus, and system, avoiding version rollback issues in distributed system
WO2019120226A1 (en) Data access prediction method and apparatus
US20080256296A1 (en) Information processing apparatus and method for caching data
CN101459599B (en) Method and system for implementing concurrent execution of cache data access and loading
US20170147508A1 (en) Device, system and method of accessing data stored in a memory
US20170199819A1 (en) Cache Directory Processing Method for Multi-Core Processor System, and Directory Controller
JP6189266B2 (en) Data processing apparatus, data processing method, and data processing program
CN116701246A (en) Method, device, equipment and storage medium for improving cache bandwidth
CN115934583A (en) Hierarchical caching method, device and system
CN111290700A (en) Distributed data reading and writing method and system
JP2011165093A (en) Memory access examination device, memory access examination method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination