CN111177161B - Data processing method, device, computing equipment and storage medium - Google Patents


Info

Publication number
CN111177161B
Authority
CN
China
Prior art keywords
data
database
cache
update
binary log
Prior art date
Legal status
Active
Application number
CN201911082799.2A
Other languages
Chinese (zh)
Other versions
CN111177161A (en)
Inventor
吴双桥
王珏
杨繁
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201911082799.2A
Publication of CN111177161A
Application granted
Publication of CN111177161B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/23 Updating
    • G06F16/2365 Ensuring data consistency and integrity
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F16/28 Databases characterised by their database models, e.g. relational or object models
    • G06F16/284 Relational databases

Abstract

The application discloses a data processing method, apparatus, computing device and storage medium, which serve to avoid complex cache-update logic in the business flow and to prevent dirty data from being read as far as possible. The data processing method comprises the following steps: updating a persistent database in response to a data update request; and updating a cache database based on a binary log message sent after the persistent database is successfully updated, wherein the binary log message carries the operation corresponding to the data update request and description information of the data to be updated, and the cache database is configured to provide, in response to a read request, the data requested by the read request.

Description

Data processing method, device, computing equipment and storage medium
Technical Field
The present application relates to the field of database technologies, and in particular, to a data processing method, apparatus, computing device, and storage medium.
Background
This section is intended to provide a background or context to the embodiments of the application that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
A relational database is a database that organizes data using the relational model, storing data in rows and columns so that it is easy for users to understand. The relational model can be understood simply as a two-dimensional table model, and a relational database is a collection of two-dimensional tables and the relationships between them.
Typically, a business developer stores metadata in a relational database. To access data more quickly, an in-memory caching layer is placed in front of the database and frequently used data is kept in memory. In this way, program logic does not need to access the database on every request, but reads the desired data directly from memory. As a result, the program responds faster, and the performance and other requirements placed on the database are greatly reduced, making the overall architecture more reasonable.
The key point is the cache update strategy and its implementation. However, existing cache update strategies, such as the Cache Aside Pattern, the Read/Write Through Pattern and the Write Back Pattern, all have drawbacks. In the Cache Aside and Read/Write Through schemes, the cache update and the database update happen almost simultaneously, and concurrent reads during an update can cause dirty data to be read, i.e. serious data inconsistency. The Write Back scheme updates the database asynchronously and in batches, so serious data inconsistency is unavoidable; its implementation logic is also complex, and it is mainly suitable for write-many-read-few scenarios. If strong data consistency is to be guaranteed, the cost is inevitably higher overall system complexity and lower performance; if performance is improved or the logic is simplified, dirty reads may occur under concurrent read-write workloads.
Therefore, how to improve the cache update policy in read-many-write-few scenarios, so as to avoid complex cache-update logic in the business flow and to avoid reading dirty data as far as possible, is one of the technical problems to be solved in the prior art.
Disclosure of Invention
The application aims to provide a data processing method, a data processing device, a computing device and a storage medium, so as to solve the technical problems.
In a first aspect, an embodiment of the present application provides a data processing method, where the method includes:
updating the persistent database in response to the data update request;
updating a cache database based on a binary log message sent after the persistent database is successfully updated, wherein the binary log message carries the operation corresponding to the data update request and description information of the data to be updated, and the cache database is configured to provide, in response to a read request, the data requested by the read request.
In a second aspect, an embodiment of the present application provides a data processing apparatus, including:
a first updating unit, configured to update the persistent database in response to the data update request; and
a second updating unit, configured to update the cache database based on a binary log message sent after the persistent database is successfully updated, wherein the binary log message carries the operation corresponding to the data update request and description information of the data to be updated, and the cache database is configured to provide, to a data reading unit and in response to a read request, the data requested by the read request.
In one embodiment, the data reading unit is configured to: if the data requested by the read request is not obtained from the cache database, obtain the requested data from the persistent database; and, after the requested data has been acquired from the persistent database, add an update field to that data in the persistent database, the update field causing the persistent database to generate a binary log message for the requested data, so that the second updating unit again updates the cache database based on the binary log message sent after the persistent database is successfully updated.
In one embodiment, the apparatus further comprises:
a recording unit, configured to record the number of requests for data requested by the read request within a specified duration;
and if the number of requests is greater than a preset number, the data read-write unit performs the step of adding an update field to the requested data in the persistent database after the data is acquired from the persistent database.

In one embodiment, the cache update rule is derived from user configuration information.
In one embodiment, the binary log is based on a Binlog log format.
In a third aspect, embodiments of the present application further provide a data processing system, the system comprising a persistent database, a data processor, and a cache database, wherein:
the persistent database is configured to update data in response to the data update request and to send a binary log message to the data processor after the update succeeds; the binary log message carries the operation corresponding to the data update request and description information of the data to be updated; the data processor is configured to parse the binary log message and then send it to the cache database;
the cache database is configured to update data according to the operation and the description information of the data to be updated carried in the binary log message sent by the data processor; the cache database is further configured to provide, in response to a read request, the data requested by the read request.
In a fourth aspect, another embodiment of the application also provides a computing device comprising at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute any data processing method provided by the embodiment of the application.
In a fifth aspect, another embodiment of the present application further provides a computer storage medium, where the computer storage medium stores computer executable instructions for causing a computer to perform any one of the data processing methods in the embodiments of the present application.
The data processing method, apparatus, computing device and storage medium provided by the embodiments of the application avoid complex cache-update logic in the business flow and prevent dirty data from being read as far as possible.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a data processing system according to one embodiment of the application;
FIGS. 2A-2E are schematic diagrams of application environments according to embodiments of the present application;
FIG. 3 is a flow chart of a data processing method according to an embodiment of the application;
FIG. 4 is an example of a data processing flow according to one embodiment of the application;
FIG. 5 is a schematic diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a computing device according to one embodiment of the application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application.
In order to clearly understand the technical solution provided by the embodiments of the present application, the following explanation is provided for terms appearing in the embodiments of the present application, and it should be noted that the explanation of terms in the embodiments of the present application is only for facilitating understanding of the present solution, and is not intended to limit the present solution, and the terms involved include:
CDB: cloud DataBase (Cloud DataBase), a relational DataBase Cloud service.
DTS: the data transmission service (Data Transmission Service) is a data service supporting data interaction between a plurality of heterogeneous data sources such as RDBMS (relational database). The system provides various data transmission capabilities such as data migration, real-time data subscription, real-time data synchronization and the like.
Redis: a cache database.
Cache: in the general term of Cache, the storage system of the computer is layered, and the storage of the upper layer can be regarded as the Cache of the lower layer. Cache may also refer to a Cache database in a specific context.
Binlog log: the binary log of the database system, more commonly called a transaction log, records the change information of the data.
2PC, two-stage commit protocol, an algorithm to ensure that transactional operations commit remains consistent.
Paxos, distributed coherence protocol.
It should be noted that, as used herein, "a plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
Furthermore, the terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein.
The preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it being understood that the preferred embodiments described herein are for illustration and explanation only, and not for limitation of the present application, and embodiments of the present application and features of the embodiments may be combined with each other without conflict.
As previously mentioned, a business developer typically stores metadata in a relational database. To access data more quickly, an in-memory caching layer is placed in front of the database and frequently used data is kept in memory. In this way, program logic does not need to access the database on every request, but reads the desired data directly from memory. As a result, the program responds faster, and the performance and other requirements placed on the database are greatly reduced, making the overall architecture more reasonable.
The key point is the cache update strategy and its implementation. However, the existing cache update strategies, such as the Cache Aside Pattern, the Read/Write Through Pattern and the Write Back Pattern, have many drawbacks, for example dirty data being read due to concurrent updates, serious data inconsistency, or complex implementation logic. If strong data consistency is to be guaranteed, the cost is inevitably higher overall system complexity and lower performance; if performance is improved or the logic is simplified, dirty reads may occur under concurrent read-write workloads.
Specifically, the commonly used cache update strategies and their drawbacks mainly include the following:
1) The bypass caching scheme (Cache Aside Pattern) mainly involves the following important operations:
failure: the application program firstly reads data from a Cache (Cache), when the Cache has no corresponding data or the data is in an invalid state, the data cannot be read to obtain the required data, at the moment, the data is read from a database, and after success, the read data is stored in the Cache;
hit: if the application program reads the data from the Cache, returning after reading the data;
updating: when updating data, firstly storing the data into a database, and after the database is updated successfully, invalidating old data in a cache; then the next time the data is read, the new data is read from the database and written into the cache because the data will not be read due to the data failure, thereby realizing the update of the cache.
Although the bypass caching scheme is a standard design pattern, dirty data can still be read under concurrency. For example, consider two concurrent requests, one of which is a read that misses the cache and therefore reads data from the database; meanwhile, a write request updates the database and then invalidates the cache, after which the earlier read writes the old data it fetched into the cache. Subsequent requests will then read the stale, pre-update data rather than the latest data. In real industrial practice, the bypass caching scheme is nevertheless the standard design pattern. Consistency can of course be guaranteed with 2PC or the Paxos protocol, or the probability of producing dirty data under concurrency can be reduced; the usual optimization direction is the latter, because 2PC is too slow to meet performance requirements and Paxos is too complex and incurs a large performance loss.
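For illustration only, the read and update paths of the bypass caching scheme described above can be sketched roughly as follows (the redis and pymysql client libraries, the connection parameters and the example table are assumptions made for this sketch and are not part of the present application):

import redis      # assumed client library for the cache
import pymysql    # assumed client library for the persistent database

cache = redis.Redis(host="localhost", port=6379)
db = pymysql.connect(host="localhost", user="root", password="", database="test")

def read(row_id):
    # Hit: return directly from the cache
    value = cache.get(row_id)
    if value is not None:
        return value
    # Miss (invalidation): read from the database, then store the result into the cache
    with db.cursor() as cur:
        cur.execute("SELECT a FROM test WHERE id = %s", (row_id,))
        row = cur.fetchone()
    if row is not None:
        # A concurrent update may already have invalidated the cache by now,
        # so stale data can be written back here (the dirty-data race)
        cache.set(row_id, row[0])
        return row[0]
    return None

def update(row_id, value):
    # Update: write the database first, then invalidate the cached copy
    with db.cursor() as cur:
        cur.execute("UPDATE test SET a = %s WHERE id = %s", (value, row_id))
    db.commit()
    cache.delete(row_id)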
2) The Read-through scheme (Read/Write Through Pattern) mainly involves the following important operations:
Update: the cache is updated in the course of the operation; if the cache is not hit during the update, the database is updated directly and the result is returned; if the cache is hit, the cache is updated first, and the cache then synchronously updates the database;
the cache service loads data by itself, which is transparent to the application;
This mode puts the least pressure on the underlying persistent database, since the cache service loads data only when the cache is invalid. Loading the cache can be regarded as a read operation, so it can suffer from the same dirty-data problem caused by concurrent updates as the Cache Aside Pattern. Moreover, because the cache synchronizes the database by itself in this mode, the cache write may succeed while the database write fails, causing serious data inconsistency.
3) The Write Back scheme (Write Back Pattern) mainly involves the following important operations:
Update: only the cache is updated, not the database; the database is updated asynchronously and in batches;
Because only the cache is updated when data changes, the database is updated asynchronously. The advantages of this mode are high performance and the ability to merge multiple consecutive operations on the same data; however, asynchronous updates may fail, and if the cached data is lost due to a failure before the update is persisted, the data cannot be recovered simply from the cache. In addition, this mode is relatively complex to implement and is usually only considered when the write capacity of a complex system needs to be improved.
In view of this, an embodiment of the present application proposes a data processing scheme with an automatic cache update mechanism: using the binary log of the database system, the cache database is automatically updated according to configuration information set by the user. This avoids complex cache-update logic in the business flow, allows a service to obtain a simple and stable database-backed cache service more conveniently, and greatly simplifies development, operation and maintenance for the user.
In one embodiment, the data processing scheme of the present application is applicable to a read-many-write-few scenario, i.e., a scenario of a database system using a combination of a memory cache database and a persistent database. The data processing scheme of the present application will be described in detail below with reference to the accompanying drawings and embodiments.
FIG. 1 is a schematic diagram of a data processing system according to one embodiment of the application.
As shown in FIG. 1, the data processing system of the present application may include a rule mapping module (Mapper), a cache database (e.g., a Redis database), and a persistence database.
In one embodiment, the cache database and the persistent database may be a high performance read-write separated database system as a whole.
The database system may be configured in a read-write separation mode. When a user wants to read data from the database system and triggers a read request, the data requested by the read request is obtained from the cache database in response to the read request; that is, read operations only operate on the cache database. When a user wants to update the database system and triggers a data update request, the persistent database is updated based on the data update request; that is, write operations only operate on the persistent database. After the persistent database is successfully updated, a binary log message may be sent, so that the update to the persistent database is automatically synchronized into the cache database, i.e. the cache database is updated based on the binary log message. If, when responding to a read request, the requested data is not found in the cache database, the requested data is obtained from the persistent database and is not written back into the cache database. Alternatively, for data that is requested frequently, an update field may be added in the persistent database to the data requested by the read request; this update field causes the persistent database to generate a binary log message for that data, so that the step of updating the cache database based on the binary log message sent after the persistent database is successfully updated is performed again. In other words, in the data processing scheme of the embodiment of the application, reads and writes are separated: data is read from the cache database, data updates are applied to the persistent database, and the updates to the persistent database are synchronized into the cache database based on the binary log messages sent after the persistent database is successfully updated.
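A highly simplified sketch of this read-write separation is given below (the in-memory dictionaries merely stand in for the cache database and the persistent database; all names are illustrative assumptions):

# Stand-ins for the cache database (e.g. Redis) and the persistent database
cache_db = {}
persistent_db = {}

def handle_read(key):
    # Read operations only operate on the cache database
    if key in cache_db:
        return cache_db[key]
    # Cache miss: fall back to the persistent database; the result is not
    # written back into the cache here, so reads never race with writes
    return persistent_db.get(key)

def handle_update(key, value):
    # Write operations only operate on the persistent database; the cache is
    # updated separately, driven by the binary log message emitted after the
    # persistent database update succeeds
    persistent_db[key] = value
    return {"operation": "update", "data": {key: value}}  # the binlog message

def apply_binlog_message(message):
    # Cache updates are driven solely by the strictly ordered binlog messages
    for key, value in message["data"].items():
        cache_db[key] = value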
In this product form, the binary log messages are strictly ordered, and cache updates are executed strictly in the order in which the persistent database was updated. Because of the read-write separation, and unlike the concurrent read-write processing that occurs in the related art, data inconsistency and dirty reads caused by concurrent updates can be avoided.
In addition, due to the read-write separation, updates follow a single path: the persistent database is updated first, and the cache database is then updated based on the binary log message sent after the persistent database is successfully updated. This update logic is simple and easy to implement, and developers do not need to separately develop persistent-database update logic plus complex cache-update logic; they can focus solely on business logic rather than on the complexity of cache updates.
In one embodiment, a rule mapping module (Mapper) may obtain configuration information provided by a user and/or a developer and load it to implement the mapping from the persistent database to the cache database. A developer may, for example, provide configuration information for program development against the cache database and the persistent database, while a user may provide configuration information for reading and writing data in the cache database and the persistent database. After the user has configured the service, the rule mapping module (Mapper) may determine, based on the user's configuration information, how changed data is mapped into the cache database when a data update occurs in the persistent database (i.e., the cache update rule), so as to implement automatic cache updates. The user configuration information defines which data is mapped from the persistent database into the cache database; accordingly, if the data to be updated in a binary log message conforms to the cache update rule, the cache is updated. In this way, the user can obtain a database service product suited to the application scenario through simple configuration, without providing any complex cache update strategy.
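For example, the cache update rule derived from the user configuration might take a form such as the following (the structure and field names are purely illustrative assumptions, not part of this application):

# Hypothetical cache update rules derived from user configuration information:
# only changes to the tables listed here are mapped into the cache database.
cache_update_rules = {
    "test": {                       # table name in the persistent database
        "cache_key": "test:{id}",   # how the cache key is built from a row
        "fields": ["a", "b", "c"],  # which columns are written into the cache
    },
    # tables that are not listed here are never synchronized into the cache
}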
Fig. 2A-2E are schematic diagrams of application environments according to embodiments of the present application. The data processing system shown in fig. 1 may be applied in the application environments shown in fig. 2A-2E. The connecting lines in the drawings indicate that information interaction exists between the two connected parties; a connection may be wired, wireless, or any other form of connection capable of transmitting information.
As shown in fig. 2A, for example, a plurality of terminal devices 10 and at least one data processing system 20 may be included in the application environment.
Terminal device 10 is any suitable electronic device that may be used for network access including, but not limited to, a computer, a smart phone, a tablet computer, or other type of terminal. Data processing system 20 may include, for example, a data processor and a database system (including a persistent database and a cache database) as shown in FIG. 1. Terminal device 10 may communicate information with data processing system 20 via network 30. The network 30 may be a broad network for information transfer and may include one or more communication networks such as a wireless communication network, the internet, a private network, a local area network, a metropolitan area network, a wide area network, or a cellular data network. The terminal devices (e.g., between 10_1 and 10_2 or 10_n) may also communicate with each other via the network 30.
It will be appreciated by those skilled in the art that the above-described 1 … N terminal devices are intended to represent a large number of terminals present in a real network, and that the illustrated single data processing system 20 is intended to represent the operation of the data processing system in accordance with aspects of the present application. The details of the particular numbered terminal devices with respect to the data processing system are provided for ease of illustration at least and are not meant to be limiting as to the type or function of the terminal device and the data processing system. And it should be noted that the underlying concepts of the exemplary embodiments of this application are not altered if additional modules are added to or individual modules are removed from the illustrated environment.
In the application environment shown in fig. 2A, a user and/or developer may provide configuration information or read-write requests or other information for a database system through a terminal device. The data processing system 20 may respond to information or requests from the terminal device. For example, system configuration of the database system is implemented based on the configuration information of the developer. Alternatively, based on the user's configuration information, it is determined how the changed data maps into the cache database (i.e., cache update rules) when a data update occurs in the persistent database. Alternatively, data updates to the database system (including the persistent database and the cached database) are implemented in response to requests from the terminal device.
As shown in FIG. 2B, in another embodiment, data processing system 20 may include a persistent database 21, a data processor 22, and a cache database 23. The persistent database 21 may, for example, perform data update in response to a data update request, and send a binary log message to the data processor 22 after successful update. The sent binary log information carries the operation corresponding to the data update request and description information of the data to be updated. The data processor 22 may parse the binary log message and send it to the cache database 23, for example. The cache database 23 may update data according to, for example, operations carried in the binary log message sent by the data processor and description information of data to be updated; the cache database 23 is further configured to provide data requested by a read request for reading data in response to the read request.
As shown in FIG. 2C, in another embodiment, data processor 22 of data processing system 20 may also be comprised of a plurality of functional components/modules via which data processing or functional management of a database system (including a persistent database and a cache database) of data processing system 20 may be implemented. For example, the data processor may include a rule mapping module as shown in FIG. 1 that may obtain configuration information provided by a user and/or developer and load to implement a mapping of a persistent database to a cached database. The data processor may further include a message parsing module, for example, that parses a binary log message sent after a persistent database is successfully updated, and obtains description information in the binary log message, so as to implement data update of the cache database based on the description information. The data processor may further comprise a cache update module, for example, which may implement data update to the cache database based on the description information obtained from the binary log message and based on the cache update rules. The various functional components of the data processor 22 may be separately configured on corresponding physical machines. For example, the cache update module may be configured on a physical machine where the cache database resides; the message analysis module can be configured on a physical machine where the persistent database is located; the rule mapping module may be configured independently, or may also be configured on a physical machine where the cache database or the persistent database is located, which is not limited in this regard by the present application.
In another embodiment, as shown in fig. 2D, the functional components of the data processor 22 (e.g., the message parsing module, the rule mapping module, and the cache update module) may also be packaged as a whole and configured on a corresponding physical machine (which may be a physical machine on which the cache database resides or a physical machine on which the persistent database resides or other physical machines), which the present application is not limited to.
In another embodiment, as shown in fig. 2E, in a specific service scenario, information may be transmitted and received between the terminal device 10 and the data processing system 20 via the network 30 and the server 40. The terminal device 10 can transmit and receive information to and from the server 40 via the network 30. The server 40 may access the data processing system 20 to obtain the content required by the terminal device 10 or implement data processing or function management of the data processing system 20, and the data processing system 20 may perform a corresponding response according to the received information or request, for example, provide the data requested by the read request or implement data update of the database system, which will not be described herein.
It should be understood that the application environments shown in fig. 2A-2E are merely application examples of the data processing system according to the embodiments of the present application, and are not limited in any way, and the data processing system of the present application may have other types of specific implementations, which are not described herein.
Returning to FIG. 1, in one embodiment, information interaction between the cached database and the persistent database may be achieved through sent binary log messages, thereby providing support for automatic updating of the cached database.
Fig. 3 is a flow chart of a data processing method according to an embodiment of the application. The method may be performed by a data processing module, which may be the data processor described above. The data processing module, or a sub-module thereof, may be configured separately and communicate with the cache database and/or the persistent database, respectively, to carry out data processing and/or data updating for either of them. The data processing module may also be configured within, and communicate with, the cache database or the persistent database. The data processing module may also comprise sub-units or sub-modules via which the corresponding processing steps are carried out.
As shown in fig. 3, in step S310, the persistent database is updated in response to the data update request.
In step S320, the cache database is updated based on the binary log message sent after the persistent database is successfully updated. The binary log message carries the operation corresponding to the data update request and description information of the data to be updated, and the cache database is configured to provide, in response to a read request, the data requested by the read request.
Both the read request and the data update request may be user-triggered. Wherein the read request may be used to read the requested data from the database system and the data update request may be used to implement a data update to the database system, including, for example, an insert, delete, rewrite, etc.
When a user needs to read data from the database system shown in fig. 1 or needs to update data in the database system, the user may trigger the read request or the data update request through an API interface or a database management interface, for example.
A read request or a data update request from a user is transferred to the corresponding database via two separate access points, read/write, respectively. For example, read requests are passed to the cache database and data update requests are passed to the persistence database, so that corresponding responses can be performed, e.g., to retrieve the requested data from the cache database or to update the data to the persistence database, respectively.
After a successful data update to the persistent database, the persistent database may send a binary log message, e.g., in the form of a broadcast message, to, for example, the data processing module. The binary log message sent corresponds to the data update request. It should be understood that in the embodiment of the present application the related functional components/modules have already established communication connections, so that they can exchange information with each other; this is not described further herein.
The data processing module may monitor the binary log message and parse the binary log message, so that in step S320, the cache database can be updated based on the binary log message sent after the persistent database is successfully updated.
In implementation, the binary log message may carry the operation corresponding to the data update request and description information of the data to be updated, for example the statement that modifies the data or may cause the data to change, together with information such as the time the statement occurred, its execution duration and the operation data. When the cache is updated, the data update applied to the persistent database can be synchronized into the cache database based on the description information in the binary log message. If multiple broadcast binary log messages are observed, the cache database is updated message by message, in the order of their generation time or reception time, according to the description information of each message.
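As an illustration (field names and timestamps are assumptions made for this sketch), the parsed description information and its strictly ordered replay into the cache might look as follows:

cache_db = {}

# Hypothetical parsed binlog messages, kept in the order they were generated
binlog_messages = [
    {"time": "2019-11-07 10:00:01", "duration_ms": 2, "operation": "INSERT",
     "table": "test", "values": {"id": 123, "a": "12", "b": "11111"}},
    {"time": "2019-11-07 10:00:05", "duration_ms": 1, "operation": "UPDATE",
     "table": "test", "values": {"id": 123, "a": "13", "b": "11111"}},
]

for message in binlog_messages:
    # Replay strictly in generation order, so the cache never applies updates
    # out of order with respect to the persistent database
    row = message["values"]
    cache_db["test:%d" % row["id"]] = row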
Therefore, through this strictly ordered broadcasting mechanism, the cache database updates data strictly in accordance with the persistent database, so that data inconsistency is avoided and dirty reads are prevented as far as possible.
In one embodiment, if the data requested by a read request is not found in the cache database, the requested data may be obtained from the persistent database without being written back into the cache database. In that case the data retrieved from the persistent database is not copied into the cache. Write operations therefore only operate on the persistent database, read operations only operate on the cache database, and the data flow has only one sequential path, so the problem of concurrent reads and writes is avoided.
In another embodiment, there is a more specific application scenario, for example one in which the user hardly updates the database system and mostly only reads from it. In this case, because the storage capacity of the cache database is far smaller than that of the persistent database, the data requested by the user's read requests cannot always be obtained from the cache database.
To this end, the database system of the present application may be further configured to: if the data requested by a read request is not obtained from the cache database, obtain the requested data from the persistent database; and, after the requested data has been acquired from the persistent database, add an update field to that data in the persistent database. The update field causes the persistent database to generate a binary log message for the requested data, so that step S320 is performed again, i.e., the cache database is updated based on the binary log message sent after the persistent database is successfully updated.
The update field may be, for example, a field that has no influence on the semantics of the data itself, such as a timestamp field or a sequence-number field. When the update field is added, or its content changes, the corresponding data can be regarded as changed and a corresponding binary log message is generated so as to update the cache. The update field may be cleared after the persistent database has generated the binary log message for the requested data, or may simply be updated again the next time a binary log message for that data is needed; the application is not limited in this respect.
In implementation, the update field may be added (or updated) in the persistent database for every piece of data that a read request fails to find in the cache database, or only for data that the user reads frequently. For the latter, the number of requests for the data requested by read requests within a specified duration can be recorded; if the number of requests is greater than a preset number, the data can be regarded as frequently read by the user, and the step of adding an update field to that data in the persistent database, after it has been acquired from the persistent database, is performed.
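A rough sketch of this frequency-based variant is given below (the window, the threshold and the name of the update field are assumptions used only for illustration):

import time

request_counts = {}     # row id -> timestamps of recent cache misses
WINDOW_SECONDS = 60     # the "specified duration" (assumed value)
PRESET_NUMBER = 10      # the "preset number" of requests (assumed value)

def is_frequently_read(row_id):
    # Record a read request whose data was not found in the cache database
    now = time.time()
    recent = [t for t in request_counts.get(row_id, []) if now - t < WINDOW_SECONDS]
    recent.append(now)
    request_counts[row_id] = recent
    return len(recent) > PRESET_NUMBER

def add_update_field(db_cursor, row_id):
    # Touching an update field (here an assumed timestamp column) makes the
    # persistent database emit a binlog message for this row, which in turn
    # drives the automatic cache update
    db_cursor.execute("UPDATE test SET touched_at = NOW() WHERE id = %s", (row_id,))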
In the embodiment of the application, when a message is parsed, it can be determined how the binary log message was triggered. If it is determined that the message results from a successful update of the persistent database, the cache database may be updated based on the cache update rules. If it is determined that the message is a binary log message generated because of the update field, the data corresponding to the message can be updated into the cache database directly. Alternatively, a binary log message generated because of the update field may also be parsed and, based on the cache update rules, it is determined how to update the corresponding data into the cache database. The application is not limited in this regard.
In this way, the overall functionality of the database product is unaffected, the data the business logic cares about most can be loaded into the cache, and the overall performance of the database product is improved.
As previously described, the cache update rules may be determined based on a user's configuration. That is, the user may configure the service. After the user has configured the relevant service, the rule mapping module (Mapper) shown in fig. 1 may obtain a cache update rule according to the configuration information provided by the user, for example, may include a mapping rule from the persistent database to the cache database or a specification of the cache database.
That is, the user can configure how changed data is mapped into the cache database when a data update occurs in the persistent database: for example, which data does or does not need to be written into the cache database, and under which conditions it should or should not be written. In the embodiment of the application, the data update of the cache database is derived by the data processing module from the user configuration and the data update applied to the persistent database.
After the user triggers the data updating request and the persistent database is successfully updated, the data processing module realizes automatic updating of the cache database according to the monitored description information in the binary log message and the cache updating rule.
Specifically, the data processing module may parse the binary log message, obtain the description information in it, and match the description information against the cache update rule to determine whether the data to be updated (i.e., the data that was updated in the persistent database) needs to be written into the cache database. If the description information shows that the data to be updated conforms to the cache update rule, the data is updated into the cache database. If it shows that the data does not conform to the cache update rule, the data can be discarded, i.e., it is not written into the cache database.
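Reusing the illustrative cache_update_rules structure sketched earlier, this matching step could look roughly as follows (all names remain assumptions made for illustration):

def on_binlog_message(message, cache_update_rules, cache_db):
    # Match the parsed description information against the cache update rule
    rule = cache_update_rules.get(message["table"])
    if rule is None:
        return                      # rule not matched: discard, leave the cache alone
    row = message["values"]
    cache_key = rule["cache_key"].format(**row)          # e.g. "test:123"
    cache_db[cache_key] = {f: row[f] for f in rule["fields"] if f in row}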
Therefore, through the user-configured cache update rule, the data the user is interested in can be kept in the cache database without worrying about cache eviction strategies in situations such as cache invalidation or insufficient capacity. When the user needs to read data, the data of interest can be read directly from the cache database in response to the read request, and dirty data will not be read. Moreover, the user can obtain a database service that meets their needs through simple configuration, without writing or maintaining a complex cache update strategy, which greatly reduces the difficulty and complexity of using the database product. The user only has to concentrate on business logic and does not need to care about the update logic of the relational data store.
In one embodiment, the binary log message may be based on a binary log (e.g., binlog) format, and the data processing module may also convert the binary log message to a format supported by the cache database when parsing the binary log message. Thus, the transmitted binary log message and the description information carried in the message can be identified by the cache database and the data update of the cache database can be realized based on the information.
Therefore, the embodiment of the application provides collaboration between different databases (such as the cache database and the relational database). Compared with an open-source or self-developed solution, this scheme can provide a more complete overall solution for the service, simplifies developers' development, operation and maintenance work, reduces the writing and maintenance burden on users, and improves the competitiveness of database products.
Fig. 4 is an example of a data processing flow according to one embodiment of the application. The flow may be implemented on the basis of the system architecture shown in fig. 1, and the data processing module may include a log processing module and a cache update module: the log processing module parses the description information in the binary log message and passes the parsed data to the cache update module, and the cache update module automatically updates the cache database according to the user's configuration information and the parsing result of the description information. Before the following flow, the developer and/or the user are assumed to have completed the relevant service configuration.
As shown in fig. 4, after the user has a read or write requirement and has completed the relevant service configuration on the terminal device side, the corresponding requests, such as a read request or a data update request, are triggered via two separate read/write access points (such as API interfaces). The read request is passed to the cache database, where a read operation obtains the data requested by the user from the cache database; the data update request is passed to the persistent database, where a data update operation updates the persistent database.
In step S401, in response to a user-triggered data update request, data update is performed on the persistent database. Taking a data update request as write data as an example, the service data x is written into the persistent database.
In step S402, the description information of the operation corresponding to the data update request and the data to be updated for the persistent database is broadcast outwards in the form of a message of a broadcast binary log (e.g., binlog log), and is monitored by the log processing module.
In step S403, the log processing module parses the binary log message to obtain the operation of the data update request on the persistent database and the description information of the data to be updated. During parsing, the binary log message, or the description information in it, can also be converted into a data format supported by the cache database.
As an example, if the original table structure in the persistent database is:
CREATE TABLE IF NOT EXISTS `test` (
    `id` INT UNSIGNED AUTO_INCREMENT,
    `a` VARCHAR(100) NOT NULL,
    `b` VARCHAR(40) NOT NULL,
    `c` DATE,
    PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
the format of the message of the binary log is:
id    a    b        c
123   12   11111    2000-00-00 00:00:00
the log processing module may first convert the binary log message into an intermediate format, such as a json document describing the changed row, and finally convert it into a format supported by the cache database, such as the Redis map (hash) data type.
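For the example row above, the two conversions might look roughly as follows (the concrete json layout, the Redis key naming and the client calls are illustrative assumptions, since the exact formats are implementation details):

import json
import redis    # assumed cache client library

row = {"id": 123, "a": "12", "b": "11111", "c": "2000-00-00 00:00:00"}

# Intermediate format: a json document describing the changed row
intermediate = json.dumps({"table": "test", "operation": "INSERT", "values": row})

# Cache format: a Redis hash (map data type) keyed by table name and primary key
r = redis.Redis(host="localhost", port=6379)
r.hset("test:123", mapping={k: str(v) for k, v in row.items()})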
In step S404, the log processing module transfers the parsed data to the cache update module.
In step S405, the cache update module may determine whether to update the cache database according to the user's configuration.
If the description information in the binary log message shows that the data to be updated conforms to the cache update rule, i.e. the configuration requires the cache database to be updated, then in step S406 the cache database is updated according to the user's configuration.
If the description information in the binary log message shows that the data to be updated does not conform to the cache update rule, i.e. the configuration does not require the cache database to be updated, the data is discarded and the flow continues to wait for new data to be passed to the cache update module.
For example, if the user configures to update only the test table, only the information of the test table is updated in the cache database, and other information is not updated. Then, if the description information in the binary log message indicates that the data update for the persistent database is an update of the information of the test table, the changed data needs to be updated into the cache database. If the description information in the binary log message indicates that the data update for the persistent database is not an update of the information of the test table, the cache database is not updated.
In step S407, after the user triggers the read request, the data requested by the user may be obtained from the cache database. And in response to the read request, the requested data is obtained from the persistent database if the requested data for the read request is not obtained from the cache database.
In the embodiment of the application, data that the user reads and writes frequently is regarded as the data the user is currently most interested in, and even data that is not read or written frequently can be stored in the cache database, so that the data the user is interested in can always be read from the cache database in response to read requests. In this way, the overall performance of the database product can be improved. Moreover, the mapping rules between the cache database and the persistent database are obtained through the user's configuration, and all the data the user is interested in can be kept in the cache database without considering cache eviction strategies in cases such as cache invalidation or insufficient capacity.
As one example, in response to a read request, if the data requested by the user is not found in the cache database, the data may be read directly from the persistent database without being written back into the cache database. In that case the data read from the persistent database is not copied into the cache. That is, a cache update is only ever triggered by a data update of the persistent database, and a cache miss does not by itself cause the cache to be refreshed from the persistent database. Thus, the data flow has only one sequential path, and the problem of concurrent reads and writes does not exist.
As another example, if the data requested by a read request is not found in the cache database, the requested data is obtained from the persistent database, and after it has been acquired an update field is added to that data in the persistent database. The update field causes the persistent database to generate a binary log message for the requested data, so that the step of updating the cache database based on the binary log message sent after the persistent database is successfully updated is performed again. In this way, on top of cache updates triggered by updates to the persistent database, a cache update mechanism similar to the bypass cache scheme, in which data is stored into the cache after being successfully read from the database, is superimposed, helping to ensure that the user can always read the data they are interested in.
Thus far, the data processing scheme of the present application has been described in detail in connection with figs. 1-4. The scheme can provide users with an integrated database system that combines a cache database and a persistent database. Users only need to provide configuration, without writing or maintaining a complex cache update strategy; there is no concurrent read-write problem; and users only need to concentrate on business logic rather than on cache update logic, which greatly reduces the difficulty and complexity of using database products.
Based on the same technical conception, the embodiment of the application also provides a data processing device.
Fig. 5 is a schematic diagram of a data processing apparatus according to an embodiment of the present application.
As shown in fig. 5, the data processing apparatus 500 may include:
a first updating unit 510, configured to update the persistent database in response to a data update request.
The second updating unit 520 is configured to update a cache database based on a binary log message sent after the persistent database is successfully updated, where the sent binary log message carries an operation corresponding to the data update request and description information of data to be updated, and the cache database is configured to provide data requested by the read request to the data reading unit in response to the read request for reading the data.
In one embodiment, the data reading unit may be configured to:
if the data requested by the read request is not acquired in the cache database, acquiring the requested data from the persistent database and not updating the requested data to the cache database.
In one embodiment, the data reading unit is configured to: if the data requested by the read request is not obtained from the cache database, obtain the requested data from the persistent database; and, after the requested data has been acquired from the persistent database, add an update field to that data in the persistent database, the update field causing the persistent database to generate a binary log message for the requested data, so that the second updating unit again updates the cache database based on the binary log message sent after the persistent database is successfully updated.
In one embodiment, the apparatus further comprises:
a recording unit, configured to record the number of requests for data requested by the read request within a specified duration;
and if the number of requests is greater than a preset number, the data reading unit executes the step of adding an update field, in the persistent database, to the data requested by the read request after that data is acquired from the persistent database.
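One way such a recording unit could be realized is sketched below as a simple sliding-window counter; the window length, threshold, and class name are assumptions made for illustration.

```python
import time
from collections import defaultdict, deque

class RequestCounter:
    """Records requests per key within a specified duration (sliding window)."""

    def __init__(self, window_seconds=60, preset_number=3):
        self.window_seconds = window_seconds
        self.preset_number = preset_number
        self._hits = defaultdict(deque)

    def record_and_check(self, key):
        now = time.time()
        hits = self._hits[key]
        hits.append(now)
        # Discard requests that fall outside the specified duration.
        while hits and now - hits[0] > self.window_seconds:
            hits.popleft()
        # Only when the number of requests exceeds the preset number is the
        # update field added, so that the data enters the cache database.
        return len(hits) > self.preset_number
```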
In one embodiment, the second updating unit comprises:
the analysis unit is used for parsing the binary log message to obtain the description information in the binary log message;
and if it is determined from the description information in the binary log message that the data to be updated conforms to the cache update rule, the second updating unit updates the data to be updated into the cache database.
In one embodiment, the parsing unit further comprises:
the format conversion unit is used for converting the message of the binary log into a format supported by the cache database.
In one embodiment, the cache update rule is derived based on user configuration information.
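Putting the parsing unit, the cache update rule, and the format conversion together, a minimal sketch might look as follows; the rule table contents, field names, and Redis-style `setex`/`delete` calls are assumptions, and the binlog message is assumed to have already been decoded into a plain dict.

```python
import json

# Hypothetical cache update rules derived from user configuration information:
# only rows from the listed tables are mirrored into the cache database.
CACHE_UPDATE_RULES = {
    "items": {"key_field": "id", "ttl_seconds": 3600},
}

def to_cache_format(row):
    """Convert a decoded binlog row into a format supported by the cache database."""
    return json.dumps(row, ensure_ascii=False)

def update_cache_from_binlog(message, cache):
    """Parse a decoded binlog message and apply it to the cache if a rule matches."""
    rule = CACHE_UPDATE_RULES.get(message["table"])
    if rule is None:
        return                                # data does not conform to any rule
    row = message["row"]                      # description information of the data
    key = f"{message['table']}:{row[rule['key_field']]}"
    if message["op"] == "delete":
        cache.delete(key)
    else:                                     # insert or update operations
        cache.setex(key, rule["ttl_seconds"], to_cache_format(row))
```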
In one embodiment, the binary log is based on a Binlog log format.
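For the Binlog-based variant, row-level binlog events can in practice be consumed with a replication client; the sketch below uses the third-party python-mysql-replication package purely as one example of obtaining such events. The connection settings and the `update_cache_from_binlog` helper from the previous sketch are assumptions, and the application itself does not prescribe any particular client library.

```python
from pymysqlreplication import BinLogStreamReader
from pymysqlreplication.row_event import (
    DeleteRowsEvent, UpdateRowsEvent, WriteRowsEvent,
)

MYSQL_SETTINGS = {"host": "127.0.0.1", "port": 3306, "user": "repl", "passwd": "secret"}

def stream_binlog_to_cache(cache):
    # Requires the database to use row-based binlog (e.g. binlog_format=ROW).
    stream = BinLogStreamReader(
        connection_settings=MYSQL_SETTINGS,
        server_id=100,                   # must be unique among replication clients
        only_events=[WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent],
        blocking=True,
        resume_stream=True,
    )
    for event in stream:
        for row in event.rows:
            op = ("delete" if isinstance(event, DeleteRowsEvent)
                  else "update" if isinstance(event, UpdateRowsEvent)
                  else "insert")
            values = row.get("after_values", row.get("values"))
            update_cache_from_binlog(
                {"table": event.table, "op": op, "row": values}, cache,
            )
```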
For specific implementations of the functions of the above data processing apparatus, reference may be made to the description above in connection with fig. 1 to 4, which is not repeated here.
Having described a data processing method and apparatus of an exemplary embodiment of the present application, next, a computing device according to another exemplary embodiment of the present application is described.
Those skilled in the art will appreciate that the various aspects of the application may be implemented as a system, method, or program product. Accordingly, aspects of the application may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
In some possible implementations, a computing device according to the application may include at least one processor, and at least one memory. Wherein the memory stores program code which, when executed by the processor, causes the processor to perform the steps in the data processing method according to various exemplary embodiments of the application described in the specification above. For example, the processor may perform steps S310-S320 as shown in FIG. 3 or steps S401-S407 as shown in FIG. 4.
A computing device 130 according to such an embodiment of the application is described below with reference to fig. 6. The computing device 130 shown in fig. 6 is merely an example and should not be taken as limiting the functionality and scope of use of embodiments of the present application.
As shown in fig. 6, computing device 130 is in the form of a general purpose computing device. Components of computing device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 connecting the various system components, including the memory 132 and the processor 131.
Bus 133 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, and a local bus using any of a variety of bus architectures.
Memory 132 may include readable media in the form of volatile memory such as Random Access Memory (RAM) 1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323.
Memory 132 may also include a program/utility 1325 having a set (at least one) of program modules 1324. Such program modules 1324 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment.
Computing device 130 may also communicate with one or more external devices 134 (e.g., keyboard, pointing device, etc.), one or more devices that enable a user to interact with computing device 130, and/or any devices (e.g., routers, modems, etc.) that enable computing device 130 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 135. Moreover, computing device 130 may also communicate with one or more networks, such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet, through network adapter 136. As shown, network adapter 136 communicates with the other modules of computing device 130 over bus 133. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in connection with computing device 130, including, but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
In some possible embodiments, aspects of a data processing method provided by the present application may also be implemented in the form of a program product, which comprises program code for causing a computer device to perform the steps of the data processing method according to the various exemplary embodiments of the present application described above when the program product is run on the computer device, for example, the computer device may perform steps S310 to S320 as shown in fig. 3 or steps S401 to S407 as shown in fig. 4.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, Random Access Memory (RAM), Read Only Memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for data processing of embodiments of the present application may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a computing device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the elements described above may be embodied in one element in accordance with embodiments of the present application. Conversely, the features and functions of one unit described above may be further divided into a plurality of units to be embodied.
Furthermore, although the operations of the methods of the present application are depicted in the drawings in a particular order, this should not be understood as requiring that the operations be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (12)

1. A method of data processing, the method comprising:
Updating the persistent database in response to the data update request; the data update request is used for realizing data update of a database system, and the database system comprises the persistent database and a cache database;
updating a cache database based on a binary log message sent after the persistent database is successfully updated, wherein the sent binary log message carries operation corresponding to the data update request and description information of data to be updated, and the cache database is configured to respond to a read request to provide the data requested by the read request, and the read request is used for reading the requested data from the database system;
if the data requested by the read request is not acquired in the cache database, acquiring the requested data from the persistence database, and not updating the requested data to the cache database; and adding an update field to the data requested by the read request in the persistent database after the data requested by the read request is acquired from the persistent database, wherein the update field is used for causing the persistent database to generate a binary log message for the data requested by the read request, so as to return to executing the step of updating the cache database based on the binary log message sent after the successful update of the persistent database.
2. The method according to claim 1, wherein the method further comprises:
recording the request times of the data requested by the read request in the appointed duration;
and if the number of requests is greater than a preset number, returning to execute the step of adding an update field for the data in the persistent database after the data requested by the read request is acquired from the persistent database.
3. The method of claim 1 or 2, wherein updating the cache database based on the binary log message sent after successful update of the persistent database comprises:
parsing the binary log message to obtain the description information in the binary log message;
and if it is determined from the description information in the binary log message that the data to be updated conforms to the cache update rule, updating the data to be updated into the cache database.
4. The method of claim 3, wherein parsing the message of the binary log further comprises:
converting the message of the binary log into a format supported by the cache database.
5. A method according to claim 3, wherein the cache update rules are derived based on user configuration information.
6. The method of claim 3, wherein the binary log is based on Binlog log format.
7. A data processing apparatus, comprising:
a first updating unit for updating the persistent database in response to the data update request; the data update request is used for realizing data update of a database system, and the database system comprises the persistent database and a cache database;
the second updating unit is used for updating a cache database based on a binary log message sent after the persistent database is successfully updated, wherein the sent binary log message carries the operation corresponding to the data update request and description information of the data to be updated, and the cache database is configured to respond to a read request to provide the data requested by the read request to a data reading unit, and the read request is used for reading the requested data from the database system;
the data reading unit is configured to, if the data requested by the read request is not acquired in the cache database, acquire the requested data from the persistent database, and not update the requested data to the cache database; and to add an update field in the persistent database for the data requested by the read request after the data requested by the read request is acquired from the persistent database, wherein the update field is used for causing the persistent database to generate a binary log message for the data requested by the read request, so that the second updating unit executes the step of updating the cache database based on the binary log message sent after the persistent database is successfully updated.
8. The apparatus of claim 7, wherein the second updating unit comprises:
the analysis unit is used for parsing the binary log message and obtaining the description information in the binary log message;
and if it is determined from the description information in the binary log message that the data to be updated conforms to the cache update rule, the second updating unit updates the data to be updated into the cache database.
9. The apparatus of claim 8, wherein the parsing unit further comprises:
and the format conversion unit is used for converting the message of the binary log into a format supported by the cache database.
10. A data processing system comprising a persistent database, a data processor, and a cache database, wherein:
the persistent database is used for responding to the data update request to update the data and sending a binary log message to the data processor after a successful update; the sent binary log message carries the operation corresponding to the data update request and description information of the data to be updated, wherein the data update request is used for realizing data update of a database system, and the database system comprises the persistence database and a cache database;
the data processor is used for parsing the binary log message and then sending it to the cache database;
the cache database is used for carrying out data update according to the operation carried in the binary log message sent by the data processor and the description information of the data to be updated; the cache database is further used for responding to a read request to provide data requested by the read request, and the read request is used for reading the requested data from the database system;
the persistent database is further configured to provide the requested data if the data requested by the read request is not obtained from the cache database, where the requested data is not updated to the cache database; and to add an update field in the persistent database for the data requested by the read request, wherein the update field is used for causing the persistent database to generate a binary log message for the data requested by the read request, so that the data processor executes the step of updating the cache database based on the binary log message sent after the persistent database is successfully updated.
11. A computing device comprising at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the data processing method according to any one of claims 1-6.
12. A computer storage medium storing computer executable instructions for causing a computer to perform the data processing method according to any one of claims 1-6.
CN201911082799.2A 2019-11-07 2019-11-07 Data processing method, device, computing equipment and storage medium Active CN111177161B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911082799.2A CN111177161B (en) 2019-11-07 2019-11-07 Data processing method, device, computing equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111177161A CN111177161A (en) 2020-05-19
CN111177161B true CN111177161B (en) 2023-08-15

Family

ID=70650035

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911082799.2A Active CN111177161B (en) 2019-11-07 2019-11-07 Data processing method, device, computing equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111177161B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111835712A (en) * 2020-06-01 2020-10-27 北京百卓网络技术有限公司 Data transmission method, device, system, equipment and storage medium
CN112286892B (en) * 2020-07-01 2024-04-05 上海柯林布瑞信息技术有限公司 Data real-time synchronization method and device of post-relation database, storage medium and terminal
CN114205368B (en) * 2020-08-27 2023-06-30 腾讯科技(深圳)有限公司 Data storage system, control method, control device, electronic equipment and storage medium
CN112256485B (en) * 2020-10-30 2023-08-04 网易(杭州)网络有限公司 Data backup method, device, medium and computing equipment
CN112579479B (en) * 2020-12-07 2022-07-08 成都海光微电子技术有限公司 Processor and method for maintaining transaction order while maintaining cache coherency
CN113094378B (en) * 2021-03-19 2024-02-06 北京达佳互联信息技术有限公司 Data processing method, device, electronic equipment and storage medium
CN113360319B (en) * 2021-05-14 2022-08-19 山东英信计算机技术有限公司 Data backup method and device
CN113641689A (en) * 2021-07-22 2021-11-12 上海云轴信息科技有限公司 Data processing method and device based on lightweight database

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104504145A (en) * 2015-01-05 2015-04-08 浪潮(北京)电子信息产业有限公司 Method and device capable of achieving database reading and writing separation
CN106506704A (en) * 2016-12-29 2017-03-15 北京奇艺世纪科技有限公司 A kind of buffering updating method and device
CN107341212A (en) * 2017-06-26 2017-11-10 努比亚技术有限公司 A kind of buffering updating method and equipment
CN109344157A (en) * 2018-09-20 2019-02-15 深圳市牛鼎丰科技有限公司 Read and write abruption method, apparatus, computer equipment and storage medium
CN109597818A (en) * 2018-11-28 2019-04-09 优刻得科技股份有限公司 Data-updating method, device, storage medium and equipment
CN109871388A (en) * 2019-02-19 2019-06-11 北京字节跳动网络技术有限公司 Data cache method, device, whole electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060271542A1 (en) * 2005-05-25 2006-11-30 Harris Steven T Clustered object state using logical actions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant