CN115878639A - Consistency processing method of secondary cache and distributed service system - Google Patents
- Publication number: CN115878639A (application CN202211091274.7A)
- Authority: CN (China)
- Prior art keywords: service node, cache, data, update, current service
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present disclosure provides a consistency processing method for a second-level cache, including: the current service node updates the target database based on a data update request; the current service node obtains the update data from the target database to update its local cache and the remote cache, and writes a message including the update cache into a message queue; and at least one service node other than the current service node updates its local cache based on the messages including the update cache in the message queue, so that the local caches and the remote cache remain consistent. The present disclosure also provides a distributed service system.
Description
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a consistency processing method for a secondary cache and a distributed service system.
Background
In an enterprise setting, when a service faces high concurrency, a second-level cache design is often adopted: a 'remote cache + local cache' scheme improves concurrent capacity. However, storing the same data in multiple places introduces a data consistency problem.
In the related art, after a service node updates the remote cache and its own local cache, the other service nodes do not perceive the change to the remote cache and continue to read their unchanged local caches, causing data inconsistency. Similarly, during the window in which the remote cache has been updated but a local cache has not yet been, readers of that local cache observe stale data.
Disclosure of Invention
The disclosure provides a consistency processing method of a second-level cache and a distributed service system.
According to an aspect of the present disclosure, a method for processing consistency of a second-level cache is provided, including:
the current service node updates the target database based on the data updating request;
the current service node obtains the update data from the target database to update the local cache and the remote cache, and writes a message including the update cache into a message queue;
at least one service node other than the current service node updates its local cache based on the message (Message) in the message queue that includes the update cache, so that the local cache is consistent with the remote cache.
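The three steps above can be sketched as a minimal in-memory simulation; the dicts, list, and function names below are illustrative stand-ins (not from the disclosure) for the target database, the two cache levels, and the message queue:

```python
# Minimal in-memory simulation of the three steps: update the target database,
# refresh both cache levels, publish a message, and let the other nodes replay
# it. All names here are illustrative stand-ins, not from the disclosure.

database = {"key": "v1"}                           # target database
remote_cache = {}                                  # shared remote cache
local_caches = {"node01": {}, "node02": {}, "node03": {}}
message_queue = []                                 # shared message queue

def handle_update_request(node_id, key, value):
    """Current service node: update the DB, both caches, and the queue."""
    database[key] = value                          # step 1: update target DB
    update = {key: database[key]}                  # fetch the update data back
    local_caches[node_id].update(update)           # step 2: local cache
    remote_cache.update(update)                    #         remote cache
    message_queue.append({"sender": node_id, "update": update})  # publish

def consume_messages(node_id):
    """Other service nodes: replay queued updates into their local caches."""
    for msg in message_queue:
        if msg["sender"] != node_id:               # skip self-written messages
            local_caches[node_id].update(msg["update"])

handle_update_request("node01", "key", "v2")       # node 01 is the current node
for node in ("node02", "node03"):
    consume_messages(node)
```

Note that each consumer skips messages carrying its own sender identifier, matching the self-filtering embodiment described elsewhere in the disclosure.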
According to the consistency processing method of the second-level cache of at least one embodiment of the present disclosure, the updating of the target database by the current service node based on the data updating request includes:
the current service node receives a data updating request;
the current service node acquires a distributed lock;
and the current service node executes data updating operation on the target database based on the distributed lock and the data updating request.
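The three sub-steps above (receive the request, acquire the lock, update under the lock) can be sketched as follows, with a process-local threading.Lock standing in for the distributed lock; a real deployment would use a cross-process lock such as a Redis-based one:

```python
# Sketch of "receive request -> acquire distributed lock -> update DB under
# the lock", with threading.Lock as a process-local stand-in for the
# distributed lock. Ten concurrent requests stay serialized and lose no update.
import threading

distributed_lock = threading.Lock()   # stand-in for e.g. a Redis-based lock
database = {"n": 0}                   # illustrative target database

def process_update_request(increment):
    # Request received; now acquire the (simulated) distributed lock
    with distributed_lock:
        # Perform the data update operation while holding the lock
        database["n"] += increment

threads = [threading.Thread(target=process_update_request, args=(1,))
           for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```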
According to the consistency processing method of the second-level cache of at least one embodiment of the present disclosure, the current service node obtains the update data from the target database to update the remote cache, and the method includes:
the current service node updates the remote cache based on the distributed lock and the update data.
According to the consistency processing method of the secondary cache of at least one embodiment of the present disclosure, the current service node synchronously updates the local cache and the remote cache of the current service node based on the acquired update data.
According to the consistency processing method of the second-level cache of at least one embodiment of the present disclosure, the current service node writes the message including the updated cache into the message queue in real time.
According to the consistency processing method of the second-level cache of at least one embodiment of the present disclosure, the message including the update cache further comprises a feature identifier of the current service node, so that the current service node does not read messages it has itself written when reading data in the message queue.
According to the consistency processing method of the second-level cache of at least one embodiment of the present disclosure, the feature identifier is the IP address of the current service node.
According to the consistency processing method of the secondary cache of at least one embodiment of the disclosure, when the current service node and each other service node except the current service node read the data in the message queue, the data are read in series.
According to the consistency processing method of the secondary cache of at least one embodiment of the present disclosure, the current service node holds the acquired distributed lock while writing the message including the update cache into the message queue.
According to another aspect of the present disclosure, a method for processing consistency of a second level cache is provided, including:
more than two service nodes in the distributed service system receive a data updating request to update a target database;
each service node receiving the data updating request acquires respective distributed locks;
the service nodes sequentially update the target database based on the respective distributed locks and the respective received data update requests to obtain update data;
each service node updates the local cache based on the obtained update data, sequentially updates the remote cache based on the distributed locks of each service node, and sequentially writes the messages including the updated cache into the same message queue based on the distributed locks of each service node;
and each service node updates its local cache based on the messages (Message) including the update cache in the message queue, so that the local caches remain consistent with the remote cache.
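The flow above can be condensed into a runnable sketch: several simulated service nodes race to update the same record, and the shared lock serializes the database write, the remote-cache write, and the queue write as one critical section. A simple newer-value check (a simplification added here for the batch replay, not claimed in the disclosure) guards against applying an older message over newer local state:

```python
# Condensed sketch of S202-S210: three simulated service nodes race to update
# the same record. The shared lock serializes the database write, remote-cache
# write, and queue write as one critical section; consumers then replay the
# queue, keeping only values newer than what they already hold.
import threading

lock = threading.Lock()               # stand-in for the distributed lock
database = {"counter": 0}
remote_cache = {}
local_caches = {f"node{i:02d}": {} for i in range(1, 4)}
queue = []

def node_update(node_id):
    with lock:                                    # update under the lock
        database["counter"] += 1
        data = {"counter": database["counter"]}
        local_caches[node_id].update(data)        # local cache
        remote_cache.update(data)                 # remote cache (still locked)
        queue.append({"sender": node_id, "data": data})  # same message queue

def consume(node_id):
    for msg in queue:                             # replay others' messages
        if msg["sender"] == node_id:
            continue                              # skip self-written messages
        if msg["data"]["counter"] > local_caches[node_id].get("counter", 0):
            local_caches[node_id].update(msg["data"])

threads = [threading.Thread(target=node_update, args=(n,)) for n in local_caches]
for t in threads:
    t.start()
for t in threads:
    t.join()
for n in local_caches:
    consume(n)
```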
According to yet another aspect of the present disclosure, there is provided a distributed service system including:
a plurality of databases;
message queue means;
a remote caching system;
and each service node performs consistency processing on the local cache, the remote cache in the remote cache system and the target database in the databases based on the consistency processing method of the second-level cache according to any embodiment of the disclosure.
The distributed service system according to at least one embodiment of the present disclosure further includes:
the distributed lock providing device provides distributed locks for all service nodes, so that all the service nodes perform the consistency processing based on the distributed locks.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and together with the description serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a method for processing consistency of a second level cache according to an embodiment of the present disclosure.
Fig. 2 shows a block schematic diagram of the structure of the distributed service system of one embodiment of the present disclosure.
FIG. 3 is a flow diagram illustrating a data update operation on a target database based on a distributed lock, according to an embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating a method for processing consistency of a level two cache according to another embodiment of the present disclosure.
Detailed Description
The present disclosure will be described in further detail with reference to the drawings and embodiments. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not to be construed as limitations of the present disclosure. It should be further noted that, for the convenience of description, only the portions relevant to the present disclosure are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. Technical solutions of the present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Unless otherwise indicated, the illustrated exemplary embodiments/examples are to be understood as providing exemplary features of various details of some ways in which the technical concepts of the present disclosure may be practiced. Accordingly, unless otherwise indicated, features of the various embodiments may be additionally combined, separated, interchanged, and/or rearranged without departing from the technical concept of the present disclosure.
The use of cross-hatching and/or shading in the drawings is generally used to clarify the boundaries between adjacent components. As such, unless otherwise noted, the presence or absence of cross-hatching or shading does not convey or indicate any preference or requirement for a particular material, material property, size, proportion, commonality between the illustrated components and/or any other characteristic, attribute, property, etc., of a component. Further, in the drawings, the size and relative sizes of components may be exaggerated for clarity and/or descriptive purposes. While example embodiments may be practiced differently, the specific process sequence may be performed in a different order than that described. For example, two consecutively described processes may be performed substantially simultaneously or in an order reverse to the order described. In addition, like reference numerals denote like parts.
When an element is referred to as being "on," "connected to," or "coupled to" another element, it can be directly on, connected, or coupled to the other element, or intervening elements may be present. However, when an element is referred to as being "directly on," "directly connected to," or "directly coupled to" another element, there are no intervening elements present. For purposes of this disclosure, the term "connected" may refer to a physical connection, an electrical connection, etc., with or without intermediate components.
The terminology used herein is for the purpose of describing particular embodiments and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, when the terms "comprises" and/or "comprising" and variations thereof are used in this specification, the stated features, integers, steps, operations, elements, components and/or groups thereof are stated to be present but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It is also noted that, as used herein, the terms "substantially," "about," and other similar terms are used as approximate terms and not as degree terms, and as such, are used to interpret inherent deviations in measured values, calculated values, and/or provided values that would be recognized by one of ordinary skill in the art.
The following describes the consistency processing method of the second level cache and the distributed service system in detail with reference to fig. 1 to 4.
Fig. 1 is a flowchart illustrating a method for processing consistency of a second level cache according to an embodiment of the present disclosure.
Referring to fig. 1, a method S100 for processing consistency of a second level cache according to this embodiment includes:
s102, the current service node updates the target database based on the data updating request;
s104, the current service node acquires the updated data from the target database to update the local cache and the remote cache, and writes the message including the updated cache into a message queue;
s106, at least one other service node except the current service node updates the local cache based on the Message (Message) including the update cache in the Message queue, so that the local cache and the remote cache keep consistency.
The second-level cache described in this disclosure consists of a local cache at each service node and a remote cache at a remote server, with the data also stored in a database.
By providing the message queue, when the current service node updates the target database based on a data update request, it updates the remote cache and its own local cache and, at the same time, writes a message including the update cache into the message queue. The other service nodes can then consume the data in the message queue and update their respective local caches based on the messages including the update cache (i.e., the cache change messages), thereby guaranteeing consistency between each service node's local cache and the remote cache.
The service node described in the present disclosure may be a service node configured by one server, or may be a service node configured by two or more servers.
In the present disclosure, the server configurations of the respective service nodes may be the same or different, and the present disclosure is not particularly limited thereto.
The consistency processing method of the second-level cache can be applied to a distributed service system, and in some embodiments of the present disclosure, the distributed service system may include a plurality of databases, a plurality of service nodes, and a remote cache system.
Fig. 2 schematically shows a structural schematic block diagram of a distributed service system according to an embodiment of the present disclosure.
The distributed service system shown in fig. 2 includes three databases (database A, database B, database C), four service nodes (service node 01, service node 02, service node 03, service node 04), a remote cache system, and a message queue device. The message queue device stores the messages (i.e., the messages including the update cache) written by the respective service nodes and can assign a timestamp to each written message, thereby forming a message queue.
In some embodiments of the present disclosure, the message queue device may be implemented by a server device. Preferably, the message queue device may automatically delete older entries in the message queue based on the timestamps of the written messages, thereby keeping the message queue lightweight and facilitating message writes.
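Such a timestamped, self-pruning queue can be sketched as follows; the class name, its API, and the retention policy are illustrative assumptions, not the patented device:

```python
# A minimal sketch of the message queue device described above: each written
# message gets a timestamp, and entries older than a retention window are
# pruned so the queue stays lightweight.
import time
from collections import deque

class LightweightQueue:
    def __init__(self, retention_seconds=60.0):
        self.retention = retention_seconds
        self._entries = deque()               # (timestamp, message) pairs

    def write(self, message, now=None):
        now = time.time() if now is None else now
        self._prune(now)
        self._entries.append((now, message))

    def read_all(self, now=None):
        now = time.time() if now is None else now
        self._prune(now)
        return [msg for _, msg in self._entries]

    def _prune(self, now):
        # Drop entries older than the retention window.
        while self._entries and now - self._entries[0][0] > self.retention:
            self._entries.popleft()

q = LightweightQueue(retention_seconds=60.0)
q.write({"key": "a"}, now=0.0)
q.write({"key": "b"}, now=100.0)   # the first entry is now older than 60 s
```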
With continued reference to fig. 2, suppose the current service node, i.e., service node 01 in fig. 2, receives a data update request directed at database A (i.e., the target database). The current service node (service node 01) updates database A based on the data update request, obtains the update data from database A to update its local cache and the remote cache, obtains the resulting update cache, and writes a message including the update cache into the message queue of the message queue device.
And other service nodes (namely the service node 02, the service node 03, the service node 04 and the like) except the current service node update the respective local caches based on the messages including the update caches in the message queue, so that the local caches of the service nodes are kept consistent with the remote caches.
FIG. 3 is a flowchart illustrating a data update operation on a target database based on a distributed lock according to an embodiment of the disclosure.
In the consistency processing method S100 of the second level cache according to some embodiments of the present disclosure, referring to fig. 3, the above-described S102, the current service node updates the target database based on the data update request, including:
s1022, the current service node receives a data updating request;
s1024, the current service node acquires a distributed lock;
and S1026, the current service node executes data updating operation on the target database based on the distributed lock and the data updating request.
With continued reference to fig. 2, an example of taking service node 01 as the current service node is a flow of update operation performed on the target database by the current service node based on the distributed lock.
The service node 01 receives a data updating request; the service node 01 acquires a distributed lock; the service node 01 performs a data update operation on the target database, that is, the database a, based on the acquired distributed lock and the data update request.
In some embodiments of the present disclosure, more than two service nodes (e.g., service node 01, service node 02) may receive data update requests for the same target database (e.g., database a, database B, or database C), and each service node performs a data update operation on the target database based on the received data update requests and the acquired distributed locks, so that the data update on the target database can be smoothly performed.
In the present disclosure, the distributed lock may be a highly available distributed lock based on Redis, i.e., RedLock.
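The quorum rule behind RedLock can be illustrated with in-memory stand-ins for the independent Redis instances; the method names (set_nx, release) are illustrative, not redis-py calls, and a production system would instead use redis-py or a dedicated redlock client library:

```python
# Quorum sketch of RedLock: the same key must be acquired on a majority of
# independent instances. In-memory dicts replace the Redis servers.
import uuid

class FakeInstance:
    """Stand-in for one independent Redis server."""
    def __init__(self):
        self.store = {}

    def set_nx(self, key, token):
        # Mimics SET key token NX: succeeds only if the key is absent.
        if key in self.store:
            return False
        self.store[key] = token
        return True

    def release(self, key, token):
        # Only the holder of the matching token may unlock.
        if self.store.get(key) == token:
            del self.store[key]

def acquire_redlock(instances, key):
    token = uuid.uuid4().hex
    granted = [inst for inst in instances if inst.set_nx(key, token)]
    if len(granted) >= len(instances) // 2 + 1:    # majority quorum reached
        return token
    for inst in granted:                           # failed: undo partial locks
        inst.release(key, token)
    return None

def release_redlock(instances, key, token):
    for inst in instances:
        inst.release(key, token)

instances = [FakeInstance() for _ in range(5)]
token1 = acquire_redlock(instances, "db-a:update")   # granted on all 5
token2 = acquire_redlock(instances, "db-a:update")   # blocked while held
release_redlock(instances, "db-a:update", token1)
token3 = acquire_redlock(instances, "db-a:update")   # free again
```

A real RedLock implementation additionally attaches a lock validity time (TTL) and compensates for clock drift and acquisition delay; both are omitted from this sketch.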
Referring to fig. 2, the distributed service system of the present disclosure further includes a distributed lock device, which may be implemented by one server or by a server cluster. The distributed lock device of the present disclosure may adopt any of various existing distributed lock configuration schemes; the present disclosure is not particularly limited in this respect, and all such schemes fall within its protection scope.
According to the consistency processing method of the second-level cache of the preferred embodiment of the present disclosure, the current service node obtains the update data from the target database to update the remote cache, including:
the current service node updates the remote cache based on the distributed lock and the update data.
In some embodiments of the present disclosure, if two or more service nodes modify a target database based on the received data update requests and the acquired distributed locks, each service node also holds its lock (the distributed lock) while updating the remote cache, thereby ensuring eventual consistency between the remote cache and the database.
The remote caching system of the present disclosure may be a Redis database or other type of Key-Value database, or the like.
For the consistency processing method S100 of the second-level cache in each of the above embodiments, preferably, the current service node synchronously updates the local cache and the remote cache of the current service node based on the obtained update data.
For the method S100 for processing consistency of the second level cache in each embodiment described above, preferably, the current service node writes the message including the updated cache into the message queue immediately.
For the consistency processing method of the second-level cache in each of the above embodiments, preferably, the message that includes the updated cache further includes a feature identifier of the current service node, so that when the current service node reads data in the message queue, the message that has been written by the current service node is not read.
In some embodiments of the present disclosure, when the current service node (for example, service node 01) writes a message including the update cache into the message queue, the written message carries the feature identifier of the current service node. Preferably, when the current service node (for example, service node 01) consumes data in the message queue, the messages it wrote itself are filtered out based on this feature identifier, avoiding redundant updates of its own local cache.
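The self-message filtering can be sketched in a few lines; the field names are illustrative:

```python
# Self-message filtering: each queued message carries the writing node's
# feature identifier, and a consumer keeps only entries from other nodes.

def messages_to_apply(queue, own_identifier):
    """Return only the messages written by other service nodes."""
    return [m for m in queue if m["node_id"] != own_identifier]

queue = [
    {"node_id": "10.0.0.1", "update": {"k": 1}},
    {"node_id": "10.0.0.2", "update": {"k": 2}},
    {"node_id": "10.0.0.1", "update": {"k": 3}},
]
others = messages_to_apply(queue, "10.0.0.1")   # node 10.0.0.1 consuming
```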
The feature identifier of the current service node may be the IP address of the current service node.
The feature identifier of a service node described in this disclosure may also be the MAC (physical) address of the service node.
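Either identifier can be derived with the standard library; uuid.getnode() reads the MAC as a 48-bit integer, while the hostname-based IP lookup is a common but environment-dependent shortcut. Both are illustrative choices, not mandated by the disclosure:

```python
# Two candidate feature identifiers, matching the text: the node's IP address
# or its MAC address.
import socket
import uuid

def mac_identifier():
    node = uuid.getnode()                  # 48-bit MAC as an integer
    return ":".join(f"{(node >> shift) & 0xff:02x}"
                    for shift in range(40, -8, -8))

def ip_identifier():
    # Resolves this host's name to an IP; may return 127.0.0.1 on some setups.
    return socket.gethostbyname(socket.gethostname())

mac = mac_identifier()                     # six colon-separated hex bytes
```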
According to the consistency processing method of the second-level cache in the preferred embodiment of the present disclosure, when the current service node and each other service node except the current service node read the data in the message queue, the data are read in series.
For the consistency processing method of the second-level cache in each embodiment described above, preferably, the current service node holds the acquired distributed lock while writing the message including the update cache into the message queue, and releases the lock after the write completes.
Fig. 4 is a flowchart illustrating a method for processing the consistency of the second level cache according to another embodiment of the disclosure.
Referring to fig. 4, the method S200 for processing consistency of the second level cache of the present embodiment includes:
s202, more than two service nodes in the distributed service system receive a data updating request to update a target database;
s204, each service node receiving the data updating request acquires respective distributed lock;
s206, each service node sequentially updates the target database based on the respective distributed lock and the respective received data updating request to obtain updating data;
s208, each service node updates the local cache based on the obtained update data, sequentially updates the remote cache based on the distributed locks, and sequentially writes the messages including the updated cache into the same message queue based on the distributed locks;
s210, each service node updates its local cache based on the Message (Message) including the update cache in the Message queue, so that the local cache and the remote cache maintain consistency.
Preferably, when each service node updates its local cache based on the message in the message queue including the update cache, it does not read the message that has been written in each service node.
In some embodiments of the present disclosure, two or more service nodes receive data update requests that target the same piece of data in a target database. Each service node modifies the target data under its distributed lock, so the multiple modification operations are executed on the target data one after another.
Illustratively, referring to fig. 2, suppose service node 01 and service node 02 both receive data update requests for modifying the n-th data in database A, and both acquire a distributed lock. Service node 01 first modifies the n-th data in database A under its distributed lock, updates its local cache based on the modified data, holds the lock to modify the remote cache, and, still holding the lock, writes the modified data into the message queue described in the present disclosure. Service node 02 then updates its local cache based on the modified data written by service node 01 into the message queue.
Next, service node 02 modifies the n-th data in database A under its distributed lock, updates its local cache based on the modified data, holds the lock to modify the remote cache, and, still holding the lock, writes the modified data into the message queue. Service node 01 then updates its local cache based on the modified data written by service node 02 into the message queue. In this way, the local cache of service node 01, the local cache of service node 02, the remote cache, and database A are kept consistent.
In other embodiments of the present disclosure, more than two service nodes receive the data update request to expect to modify different target data of a certain target database, and each service node may modify corresponding target data based on the obtained distributed lock and the received data update request.
Illustratively, with continued reference to fig. 2, suppose service node 01 and service node 02 respectively receive data update requests for modifying the m-th data and the n-th data in database A, and both acquire a distributed lock from the distributed lock providing device. Service node 01 modifies the m-th data in database A under its distributed lock, updates its local cache based on the modified data, holds the lock to modify the remote cache, and, still holding the lock, writes the modified data into the message queue described above, thereby ensuring that service node 01 modifies the local cache, the remote cache, and the message queue synchronously. Service node 02 modifies the n-th data in database A under its distributed lock, updates its local cache based on the modified data, holds the lock to modify the remote cache, and writes the modified data into the message queue, thereby ensuring that service node 02 synchronously modifies the local cache, the remote cache, and the message queue. Service node 02 will also update its own local cache based on the modified data written by service node 01 into the message queue, and service node 01 will also update its own local cache based on the modified data written by service node 02 into the message queue.
Therefore, the local cache of the service node 01, the local cache of the service node 02, the remote cache and the database A are kept consistent.
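The two-request scenario above condenses to the following sketch, with node 01 updating data m and node 02 updating data n, each inside the (simulated) lock, and each node replaying the other's queue message; all names are illustrative stand-ins:

```python
# Node 01 updates data m, node 02 updates data n; each holds the lock for
# its whole DB + remote-cache + queue write, then replays the other's message.
import threading

lock = threading.Lock()                    # stand-in for the distributed lock
database_a = {"m": "old-m", "n": "old-n"}
remote_cache = {}
local_caches = {"node01": {}, "node02": {}}
queue = []

def update(node_id, key, value):
    with lock:                             # hold the lock for the whole update
        database_a[key] = value            # modify the target data
        data = {key: database_a[key]}
        local_caches[node_id].update(data) # local cache
        remote_cache.update(data)          # remote cache, still under the lock
        queue.append({"sender": node_id, "data": data})  # message queue write

update("node01", "m", "new-m")
update("node02", "n", "new-n")
for node_id in local_caches:               # replay the other node's message
    for msg in queue:
        if msg["sender"] != node_id:
            local_caches[node_id].update(msg["data"])
```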
Referring to fig. 2, a distributed service system according to an embodiment of the present disclosure includes:
a plurality of databases;
message queue means;
a remote caching system;
and each service node performs consistency processing on the local cache, the remote cache in the remote cache system and the target database in the databases based on the consistency processing method of the second-level cache of any one of the above-described embodiments of the disclosure.
Preferably, the distributed service system of the present disclosure further includes: and the distributed lock providing device provides distributed locks for the service nodes so that the service nodes perform consistency processing based on the distributed locks.
Any process or method descriptions in flow charts of the present disclosure or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present disclosure in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of implementing the embodiments of the present disclosure. The processor performs the various methods and processes described above. For example, method embodiments in the present disclosure may be implemented as a software program tangibly embodied in a machine-readable medium, such as a memory. In some embodiments, some or all of the software program may be loaded and/or installed via memory and/or a communication interface. When the software program is loaded into memory and executed by a processor, one or more steps of the method described above may be performed. Alternatively, in other embodiments, the processor may be configured to perform one of the methods described above by any other suitable means (e.g., by means of firmware).
The logic and/or steps represented in the flowcharts or otherwise described herein may be embodied in any readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
For the purposes of this description, a "readable storage medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the readable storage medium may even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in the memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the method implementing the above embodiments may be implemented by hardware that is instructed to be associated with a program, which may be stored in a readable storage medium, and which, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present disclosure may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented either in hardware or as a software functional module. If implemented as a software functional module and sold or used as a separate product, the integrated module may also be stored in a readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
In the description of the present specification, reference to "one embodiment/implementation", "some embodiments/implementations", "an example", "a specific example", or "some examples", etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment/implementation or example is included in at least one embodiment/implementation or example of the present application. In this specification, such schematic references do not necessarily refer to the same embodiment/implementation or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments/implementations or examples, and those skilled in the art may combine the various embodiments/implementations or examples described in this specification, and their features, provided they do not conflict.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
It will be understood by those skilled in the art that the foregoing embodiments are merely for clarity of illustration of the disclosure and are not intended to limit the scope of the disclosure. Other variations or modifications may occur to those skilled in the art, based on the foregoing disclosure, and are still within the scope of the present disclosure.
Claims (10)
1. A consistency processing method for a second-level cache, characterized by comprising the following steps:
the current service node updates a target database based on a data update request;
the current service node obtains the update data from the target database to update a local cache and a remote cache, and writes a message including the updated cache data into a message queue; and
at least one service node other than the current service node updates its local cache based on the message including the updated cache data in the message queue, so that the local cache is kept consistent with the remote cache.
2. The method of claim 1, wherein the updating the target database by the current service node based on the data update request comprises:
the current service node receives the data update request;
the current service node acquires a distributed lock; and
the current service node executes a data update operation on the target database based on the distributed lock and the data update request.
3. The method of claim 2, wherein the current service node obtains the update data from the target database to update the remote cache, comprising:
the current service node updates the remote cache based on the distributed lock and the update data.
4. The method as claimed in claim 3, wherein the current service node synchronously updates its local cache and the remote cache based on the obtained update data.
5. The method of claim 4, wherein the current service node writes the message including the updated cache data into the message queue in real time.
6. The method of claim 1, wherein the message comprises a feature identifier of the current service node, so that the current service node does not read messages written by itself when reading the data in the message queue.
7. The method of claim 6, wherein the feature identifier is the IP address of the current service node;
optionally, when the current service node and each other service node read the data in the message queue, the data are read serially;
optionally, the current service node writes the message including the updated cache data into the message queue while holding the distributed lock it has acquired.
8. A consistency processing method for a second-level cache, characterized by comprising the following steps:
two or more service nodes in a distributed service system receive data update requests to update a target database;
each service node receiving the data updating request acquires respective distributed lock;
the service nodes sequentially update the target database based on their respective distributed locks and the respectively received data update requests to obtain update data;
each service node updates its local cache based on the obtained update data, sequentially updates the remote cache based on its distributed lock, and sequentially writes a message including the updated cache data into the same message queue based on its distributed lock; and
each service node updates its local cache based on the messages including the updated cache data in the message queue, so that the local caches are kept consistent with the remote cache.
9. A distributed service system, comprising:
a plurality of databases;
message queue means;
a remote caching system; and
a plurality of service nodes, each performing consistency processing for its local cache, a remote cache in the remote cache system, and a target database among the plurality of databases based on the processing method of any one of claims 1 to 8.
10. The distributed service system of claim 9, further comprising:
a distributed lock providing device, which provides distributed locks to the service nodes, so that each service node performs the consistency processing based on the distributed locks.
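The single-node flow of claims 1, 4, 6 and 7 can be sketched as follows. This is a minimal, illustrative Python sketch, not the patent's implementation: plain dicts and a `deque` stand in for the target database, the remote cache system, and the message queue, and all names (`ServiceNode`, `handle_update`, `consume`) are invented for illustration. Each published message is tagged with the writing node's IP (the claim-7 feature identifier), so a node skips its own messages when replaying the queue.

```python
from collections import deque

database = {}            # stand-in for the target database
remote_cache = {}        # stand-in for the shared remote cache
message_queue = deque()  # stand-in for the shared message queue

class ServiceNode:
    def __init__(self, ip):
        self.ip = ip               # feature identifier of this node (claim 7)
        self.local_cache = {}

    def handle_update(self, key, value):
        database[key] = value                  # 1. update the target database
        fresh = database[key]                  # 2. read the update data back
        self.local_cache[key] = fresh          #    refresh local cache synchronously (claim 4)
        remote_cache[key] = fresh              #    refresh the remote cache
        # 3. publish the updated cache entry, tagged with this node's IP (claim 6)
        message_queue.append({"key": key, "value": fresh, "writer_ip": self.ip})

    def consume(self):
        # Replay queued updates into the local cache, skipping messages
        # this node wrote itself (its own cache is already up to date).
        for msg in message_queue:
            if msg["writer_ip"] != self.ip:
                self.local_cache[msg["key"]] = msg["value"]

node_a = ServiceNode("10.0.0.1")
node_b = ServiceNode("10.0.0.2")
node_a.handle_update("user:1", "alice")
node_b.consume()
assert node_a.local_cache == node_b.local_cache == remote_cache
```

Because the writer updates its own local cache synchronously and filters out its own messages on replay, every node converges to the remote cache's state without redundant self-updates.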
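The multi-node flow of claim 8 can be sketched the same way. In this hypothetical sketch a `threading.Lock` stands in for the distributed lock (the patent does not name a specific lock implementation; in practice it might be backed by e.g. Redis or ZooKeeper), serializing the database, remote-cache, and message-queue writes of concurrently updating nodes into a single total order.

```python
import threading
from collections import deque

distributed_lock = threading.Lock()  # stand-in for a true distributed lock
database, remote_cache = {}, {}
message_queue = deque()

def serviced_update(node_name, key, value, local_cache):
    # All three writes happen while holding the lock, so concurrent
    # nodes update the database, remote cache, and queue in one total order.
    with distributed_lock:
        database[key] = value
        fresh = database[key]
        local_cache[key] = fresh           # local cache updated synchronously
        remote_cache[key] = fresh
        message_queue.append({"key": key, "value": fresh, "writer": node_name})

caches = {"A": {}, "B": {}}
threads = [
    threading.Thread(target=serviced_update, args=("A", "counter", 1, caches["A"])),
    threading.Thread(target=serviced_update, args=("B", "counter", 2, caches["B"])),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Whichever node wrote last, the remote cache agrees with the database,
# and the queue preserves the write order for the other nodes to replay.
assert remote_cache["counter"] == database["counter"]
assert len(message_queue) == 2
```

The node that lost the race keeps a stale local entry only until it replays the queue, which is exactly the role the message queue plays in the claimed method.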
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211091274.7A CN115878639B (en) | 2022-09-07 | 2022-09-07 | Consistency processing method of secondary cache and distributed service system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115878639A true CN115878639A (en) | 2023-03-31 |
CN115878639B CN115878639B (en) | 2023-10-24 |
Family
ID=85769787
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211091274.7A Active CN115878639B (en) | 2022-09-07 | 2022-09-07 | Consistency processing method of secondary cache and distributed service system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115878639B (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030105986A1 (en) * | 2001-10-01 | 2003-06-05 | International Business Machines Corporation | Managing errors detected in processing of commands |
US20030179742A1 (en) * | 2000-03-16 | 2003-09-25 | Ogier Richard G. | Method and apparatus for disseminating topology information and for discovering new neighboring nodes |
US20130144967A1 (en) * | 2011-12-05 | 2013-06-06 | International Business Machines Corporation | Scalable Queuing System |
CN107862040A (en) * | 2017-11-06 | 2018-03-30 | 中国银行股份有限公司 | The update method of data, device and a kind of cluster in a kind of caching of application example |
CN110633320A (en) * | 2018-05-30 | 2019-12-31 | 北京京东尚科信息技术有限公司 | Processing method, system, equipment and storage medium of distributed data service |
CN110866011A (en) * | 2019-11-04 | 2020-03-06 | 金蝶软件(中国)有限公司 | Data table synchronization method and device, computer equipment and storage medium |
US10678697B1 (en) * | 2019-01-31 | 2020-06-09 | Salesforce.Com, Inc. | Asynchronous cache building and/or rebuilding |
CN111611090A (en) * | 2020-05-13 | 2020-09-01 | 浙江创邻科技有限公司 | Distributed message processing method and system |
CN112559560A (en) * | 2019-09-10 | 2021-03-26 | 北京京东振世信息技术有限公司 | Metadata reading method and device, metadata updating method and device, and storage device |
CN112615907A (en) * | 2020-12-04 | 2021-04-06 | 北京齐尔布莱特科技有限公司 | Data synchronization system and method |
CN113448971A (en) * | 2020-03-24 | 2021-09-28 | 北京字节跳动网络技术有限公司 | Data updating method based on distributed system, computing node and storage medium |
CN113836057A (en) * | 2020-06-24 | 2021-12-24 | 三星电子株式会社 | Message queue storage device and interface for flash memory storage controller |
CN114817320A (en) * | 2022-02-24 | 2022-07-29 | 网易(杭州)网络有限公司 | Cache processing method and device |
CN114979249A (en) * | 2022-03-30 | 2022-08-30 | 阿里巴巴(中国)有限公司 | Message handle creating method, message pushing method, related device and system |
- 2022-09-07 CN CN202211091274.7A patent/CN115878639B/en active Active
Non-Patent Citations (2)
Title |
---|
NATHANAËL SENSFELDER et al.: "Modeling Cache Coherence to Expose", HTTPS://HAL.SCIENCE/HAL-02165139/, pages 1-23 *
SHEN Zhiqiang: "Research on Key Technologies of Network Caching in Cloud Storage Systems", China Master's Theses Full-text Database, Information Science and Technology, no. 4, pages 137-45 *
Also Published As
Publication number | Publication date |
---|---|
CN115878639B (en) | 2023-10-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106874459B (en) | Streaming data storage method and device | |
CN103607428B (en) | A kind of method and apparatus for accessing shared drive | |
US11314689B2 (en) | Method, apparatus, and computer program product for indexing a file | |
CN115599747B (en) | Metadata synchronization method, system and equipment of distributed storage system | |
CN110531933B (en) | Data processing method and server | |
CN107211003A (en) | Distributed memory system and the method for managing metadata | |
CN113448971A (en) | Data updating method based on distributed system, computing node and storage medium | |
US20170285951A1 (en) | Packed row representation for efficient network serialization with direct column indexing in a network switch | |
CN114297196A (en) | Metadata storage method and device, electronic equipment and storage medium | |
CN112650692A (en) | Heap memory allocation method, device and storage medium | |
CN115878639B (en) | Consistency processing method of secondary cache and distributed service system | |
CN111796772B (en) | Cache management method, cache node and distributed storage system | |
US20080195671A1 (en) | Device Management System Using Log Management Object and Method for Generating and Controlling Logging Data Therein | |
CN114443598A (en) | Data writing method and device, computer equipment and storage medium | |
CN111209304B (en) | Data processing method, device and system | |
CN112269758B (en) | File migration method based on file classification and related device | |
CN114116538A (en) | Mirror cache management method, device, equipment and storage medium | |
CN113542326B (en) | Data caching method and device of distributed system, server and storage medium | |
CN108874560B (en) | Method and communication device for communication | |
CN114463162A (en) | Image cache processing method and device, electronic equipment and storage medium | |
CN111641728A (en) | Calling method and device based on distributed system | |
CN116662603B (en) | Time shaft control method and system based on kafka, electronic equipment and storage medium | |
CN114238518B (en) | Data processing method, device, equipment and storage medium | |
CN118034611B (en) | Method, device, equipment and medium for managing quota of file | |
CN113076292B (en) | File caching method, system, storage medium and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||