CN115599585A - Memory caching system, method and storage medium - Google Patents

Memory caching system, method and storage medium

Info

Publication number
CN115599585A
Authority
CN
China
Prior art keywords
data
node
client
nodes
check
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211336623.7A
Other languages
Chinese (zh)
Inventor
罗金飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Agricultural Bank of China
Original Assignee
Agricultural Bank of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agricultural Bank of China
Priority to CN202211336623.7A
Publication of CN115599585A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0793Remedial or corrective actions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1448Management of the data involved in backup or backup restore

Abstract

The invention discloses a memory caching system, method, and storage medium. The system comprises: a plurality of main data nodes for transmitting read data to a client; a plurality of replica nodes, where, when a main data node fails, the replica node corresponding to that main data node transmits the read data to the client; and a plurality of check nodes for performing data recovery to obtain recovered data when a main data node and its corresponding replica node fail simultaneously, and for transmitting the recovered data to the client. A balance between storage overhead and data transmission speed is achieved through a memory cache system comprising main data nodes, replica nodes, and check nodes.

Description

Memory caching system, method and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a memory caching system, method, and storage medium.
Background
With the rapid development of information technology, modern society has entered an era of information explosion. The generation of massive data and its exponential growth bring two problems: (1) data storage costs rise sharply, and enterprises spend enormous amounts every year on storing user data; (2) the number of concurrent user requests increases dramatically, and enterprises must bear tens or even hundreds of thousands of instantaneous concurrent requests.
When a user needs to access data in a database, the user only needs to access a nearby memory cache node, without complicated and tedious network transmission and request forwarding; this greatly reduces the latency of data access and preserves the user experience. For a memory cache system without a fault-tolerance mechanism, however, when a node fails, the user data cached on that node is completely lost. The remote database must then be queried again for the data the user needs, which increases the load on the remote server, sharply degrades system performance and causes instability, and also increases user latency.
In terms of memory-cache fault tolerance, enterprises usually choose either a pure replica mechanism or pure erasure coding to ensure reliable data storage. The pure replica mechanism suffers from high storage overhead, while pure erasure coding suffers from low data transmission speed.
In implementing the invention, the inventor found at least the following technical problem in the prior art: it is difficult to balance storage overhead against data transmission speed.
Disclosure of Invention
The invention provides a memory caching system, method, and storage medium that aim to balance storage overhead and data transmission speed.
According to an aspect of the present invention, there is provided a memory caching system, including:
a plurality of main data nodes for transmitting read data to a client;
a plurality of replica nodes, where, when a main data node fails, the replica node corresponding to that main data node transmits the read data to the client;
and a plurality of check nodes for performing data recovery to obtain recovered data when a main data node and its corresponding replica node fail simultaneously, and transmitting the recovered data to the client.
According to another aspect of the present invention, there is provided a memory caching system comprising:
a plurality of main data nodes for storing write data transmitted by a client;
a plurality of replica nodes for backing up and storing the write data transmitted by the client;
and a plurality of check nodes for receiving the write data transmitted by the client and encoding it to obtain check data, where the check data is used for data recovery.
According to another aspect of the present invention, there is provided a memory caching method performed by the memory caching system according to any embodiment of the present invention, where the memory caching system includes a plurality of main data nodes, a plurality of replica nodes, and a plurality of check nodes. The method comprises:
transmitting the read data to a client through the main data node;
when the main data node fails, transmitting the read data to the client through the replica node corresponding to the main data node;
and, when the main data node and its corresponding replica node fail simultaneously, performing data recovery through the check node to obtain recovered data, and transmitting the recovered data to the client.
According to another aspect of the present invention, there is provided a memory caching method performed by the memory caching system according to any embodiment of the present invention, where the memory caching system includes a plurality of main data nodes, a plurality of replica nodes, and a plurality of check nodes. The method comprises:
storing the write data transmitted by the client through the main data node;
backing up and storing the write data transmitted by the client through the replica node;
and receiving the write data transmitted by the client through the check node, and encoding it to obtain check data, where the check data is used for data recovery.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions that, when executed, cause a processor to implement the memory caching method according to any embodiment of the present invention.
The technical solution of the embodiment of the invention comprises: a plurality of main data nodes for transmitting read data to a client; a plurality of replica nodes, where, when a main data node fails, the replica node corresponding to that main data node transmits the read data to the client; and a plurality of check nodes for performing data recovery to obtain recovered data when a main data node and its corresponding replica node fail simultaneously, and transmitting the recovered data to the client. The balance between storage overhead and data transmission speed is achieved through a memory cache system comprising main data nodes, replica nodes, and check nodes.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a memory cache system according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a hybrid fault-tolerant architecture according to an embodiment of the present invention;
Fig. 3 is a flow chart of reading data in a hybrid fault-tolerant architecture according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a memory cache system according to a second embodiment of the present invention;
Fig. 5 is a flow chart of writing data in a hybrid fault-tolerant architecture according to a second embodiment of the present invention;
Fig. 6 is a flowchart of a memory caching method according to a third embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a hybrid fault-tolerant architecture according to a third embodiment of the present invention;
Fig. 8 is a flowchart of a memory caching method according to a fourth embodiment of the present invention.
Detailed Description
To help those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art without creative effort based on the embodiments given herein shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Embodiment 1
Fig. 1 is a schematic structural diagram of a memory cache system according to an embodiment of the present invention. This embodiment is applicable to a distributed memory cache system, which may be implemented in hardware and/or software. As shown in fig. 1, the system includes: a plurality of master data nodes 110 for transmitting read data to clients; a plurality of replica nodes 120, where, when a master data node 110 fails, the replica node 120 corresponding to that master data node transmits the read data to the client; and a plurality of check nodes 130 for performing data recovery to obtain recovered data when a master data node 110 and its corresponding replica node 120 fail simultaneously, and transmitting the recovered data to the client.
It should be noted that fig. 1 is only an example and does not limit the numbers of master data nodes 110, replica nodes 120, and check nodes 130.
In this embodiment, a master data node 110 is a master node that caches data. A replica node 120 is a slave node that stores a copy of the data on its master data node 110. A check node 130 is a node that recovers the cached data of a failed node through an erasure-code mechanism. The numbers of master data nodes 110, replica nodes 120, and check nodes 130 are not limited here and may be set according to caching requirements. Any of the master data nodes 110, replica nodes 120, and check nodes 130 may be connected to a client.
It should be noted that the memory cache system of this embodiment implements data fault tolerance through the replica nodes 120 and ensures reliable data storage through erasure-code fault tolerance at the check nodes 130; combining the characteristics of the two fault-tolerance modes balances storage overhead and data transmission speed. The erasure code may be a Reed-Solomon (RS) error-correction code. Specifically, the coding rule of an RS code is RS(n, k), where n is the sum of the numbers of main data nodes and check nodes and k is the number of main data nodes. The specific values of n and k may be set according to caching requirements and are not limited here.
For example, fig. 2 is a schematic structural diagram of the hybrid fault-tolerant architecture provided in this embodiment. The memory cache system performs fault tolerance using a master-slave dual-copy structure combined with erasure coding. As shown in fig. 2, the memory cache system receives Set/Get requests sent by clients and comprises master data nodes M1, M2, and M3. Each master data node has a corresponding replica node S1, S2, or S3 serving as its backup node: S1 is the backup of M1, S2 is the backup of M2, and S3 is the backup of M3. The memory cache system further comprises check nodes P1 and P2; the master data nodes and check nodes are made fault-tolerant through an RS(5,3) encoding scheme.
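The division of roles in fig. 2 can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: a single XOR parity block stands in for the RS(5,3) Galois-field code (so only one lost block is recoverable, versus two for true RS(5,3)), and the node names mirror fig. 2 purely for readability.

```python
# Simplified stand-in for the RS(5,3) layout of Fig. 2: three master data
# blocks (M1-M3) protected by one XOR parity block (P1). Real RS coding
# uses Galois-field arithmetic and two check nodes; XOR is the degenerate
# single-parity case and suffices to show encode and degraded recovery.

def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Three master data nodes each hold one equal-length data block.
m1, m2, m3 = b"AAAA", b"BBBB", b"CCCC"
p1 = xor_blocks(m1, m2, m3)          # check node encodes the parity

# Degraded read: M2 (and its replica) are lost; recover from survivors.
recovered_m2 = xor_blocks(m1, m3, p1)
assert recovered_m2 == m2
```

The same XOR identity (a block equals the XOR of the parity and all other blocks) is what the delta-update write path later relies on.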
In some optional embodiments, the client is configured to determine a master data node where the data to be read is located, and establish a connection with the master data node where the data to be read is located.
Specifically, a user can send a read request through the client, and the client locates the data to be read to determine the master data node on which it resides. For example, if the data to be read is not on the currently connected node, a redirection is performed: the master data node that actually stores the data is located, a connection is established with it, and the data is then read.
For example, fig. 3 is a flow chart of reading data in the hybrid fault-tolerant architecture provided by this embodiment. Specifically, when a master data node receives a read request, it determines whether the key in the request belongs to it. If so, a connection is established between the client and the server corresponding to that master data node; if not, the server to which the key belongs is found and connected to the client. If the client is connected to a normally operating master data node, the data is read from that node and sent to the client. If the master data node has failed, a master-slave switchover determines the replica node corresponding to the master data node as the new master, and the read data is sent to the client from that replica node. If the master data node and its corresponding replica node fail simultaneously, the system enters a degraded-read mode: the check node recovers the data using the erasure code and transmits the recovered data to the client.
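The read flow above can be sketched as follows. This is a simplified model under assumed names (`HybridCache`, the `down` sets): keys are routed by hash to an owning master, reads fall back to the replica when the master is down, and an exception stands in for the degraded-read path that the patent handles via the check nodes.

```python
# Sketch of the Fig. 3 read path: key -> owning master via hash routing,
# master-slave switchover on master failure, degraded read signalled when
# both master and replica are down. All names are illustrative.

class HybridCache:
    def __init__(self, n_masters: int):
        self.masters = [dict() for _ in range(n_masters)]   # M1..Mn
        self.replicas = [dict() for _ in range(n_masters)]  # S1..Sn
        self.down = set()            # indices of failed master nodes
        self.replica_down = set()    # indices of failed replica nodes

    def _slot(self, key: str) -> int:
        return hash(key) % len(self.masters)   # key -> owning master

    def put(self, key: str, value: int) -> None:
        i = self._slot(key)
        self.masters[i][key] = value
        self.replicas[i][key] = value          # synchronous backup copy

    def get(self, key: str) -> int:
        i = self._slot(key)
        if i not in self.down:                 # normal read from master
            return self.masters[i][key]
        if i not in self.replica_down:         # master-slave switchover
            return self.replicas[i][key]
        raise LookupError("degraded read: recover via check nodes")

cache = HybridCache(n_masters=3)
cache.put("user:42", 7)
assert cache.get("user:42") == 7
cache.down.add(cache._slot("user:42"))         # owning master fails
assert cache.get("user:42") == 7               # served by its replica
```

In the patent's full flow, the `LookupError` branch would instead forward the request to a check node for erasure-code decoding.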
The technical solution of the embodiment of the invention comprises: a plurality of main data nodes for transmitting read data to a client; a plurality of replica nodes, where, when a main data node fails, the replica node corresponding to that main data node transmits the read data to the client; and a plurality of check nodes for performing data recovery to obtain recovered data when a main data node and its corresponding replica node fail simultaneously, and transmitting the recovered data to the client. The balance between storage overhead and data transmission speed is achieved through a memory cache system comprising main data nodes, replica nodes, and check nodes.
Embodiment 2
Fig. 4 is a schematic structural diagram of a memory cache system according to a second embodiment of the present invention. This embodiment is applicable to a distributed memory cache system, which may be implemented in hardware and/or software. As shown in fig. 4, the system includes: a plurality of master data nodes 210 for storing write data transmitted by a client; a plurality of replica nodes 220 for backing up and storing the write data transmitted by the client; and a plurality of check nodes 230 for receiving the write data transmitted by the client and encoding it to obtain check data, where the check data is used for data recovery.
In this embodiment, a master data node 210 is a master node that caches data. A replica node 220 is a slave node that stores a copy of the data on its master data node 210. A check node 230 is a node that recovers the cached data of a failed node through an erasure-code mechanism. The numbers of master data nodes 210, replica nodes 220, and check nodes 230 are not limited here and may be set according to caching requirements. Any of the master data nodes 210, replica nodes 220, and check nodes 230 may be connected to a client.
It should be noted that, in this embodiment, data backup is implemented through the replica nodes 220, and the check nodes 230 perform an encoding operation to obtain check data used for data recovery. The memory cache system of this embodiment combines the advantages of the replica nodes 220 and the check nodes 230, thereby balancing storage overhead and data transmission speed.
For example, fig. 5 is a flow chart of writing data in the hybrid fault-tolerant architecture provided by this embodiment. As shown in fig. 5, after a write request from a client is received, the hashed key in the request is used to determine whether the request belongs to the current master data node. If so, a connection is established between the client and the server corresponding to the current master data node; if not, the server to which the key belongs is found and connected. If this is an initial write, the data is written into the local hash table. If it is not an initial write, the data increment information is computed and sent to the check nodes, which perform the encoding operation, generate the check data, and send an acknowledgement (ACK) back to the master data node. When the master data node receives the ACK, the write request is complete.
In some optional embodiments, the master data node is further configured to: when a data item corresponding to the write data transmitted by the client already exists in the master data node, determine data increment information based on that write data and the corresponding data in the master data node; update the corresponding data in the master data node to the write data transmitted by the client; and send the data increment information to the check node. In some optional embodiments, the check node is further configured to receive the write data transmitted by the client when it receives the data increment information.
Specifically, when a data item corresponding to the write data transmitted by the client already exists in the master data node, the current operation is a data update rather than an initial write. The new value transmitted by the client and the corresponding old value in the master data node can be XORed to obtain the data increment information, and the new value then replaces the old value in the master data node. The data increment information is further sent to the check node, which updates its check data upon receiving it, ensuring the correctness of the data.
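The delta-update step above can be sketched as follows. The helper and variable names are illustrative, and a plain XOR parity again stands in for the patent's RS check data; the point is that applying the increment to the old parity yields exactly the parity of the fully re-encoded new data.

```python
# Sketch of the non-initial write: the master XORs old and new values to
# form the data increment, stores the new value, and the check node folds
# the increment into its parity instead of re-reading all data blocks.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Initial state: parity over three data blocks.
old = b"\x10\x20\x30"
other1, other2 = b"\x01\x02\x03", b"\x0a\x0b\x0c"
parity = xor_bytes(xor_bytes(old, other1), other2)

# Update: the client overwrites `old` with `new`.
new = b"\x11\x22\x33"
delta = xor_bytes(old, new)              # data increment information
parity = xor_bytes(parity, delta)        # check node applies the delta

# The incrementally updated parity matches a full re-encode.
assert parity == xor_bytes(xor_bytes(new, other1), other2)
```

This is why only the increment, not the whole stripe, crosses the network to the check node on an update.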
The technical solution of the embodiment of the invention comprises: a plurality of main data nodes for storing write data transmitted by a client; a plurality of replica nodes for backing up and storing the write data transmitted by the client; and a plurality of check nodes for receiving the write data transmitted by the client and encoding it to obtain check data, where the check data is used for data recovery. The balance between storage overhead and data transmission speed is achieved through a memory cache system comprising main data nodes, replica nodes, and check nodes.
Embodiment 3
Fig. 6 is a flowchart of a memory caching method according to a third embodiment of the present invention, where the method according to the present embodiment is executed by the memory caching system provided in the foregoing embodiment. The memory cache system comprises a plurality of main data nodes, a plurality of copy nodes and a plurality of check nodes.
As shown in fig. 6, the method includes:
and S310, transmitting the read data to the client through the master data node.
And S320, transmitting the read data to the client through the replica node corresponding to the main data node under the condition that the main data node fails.
S330, under the condition that the main data node and the replica node corresponding to the main data node simultaneously fail, data recovery is carried out through the check node to obtain recovery data, and the recovery data are transmitted to a client.
For example, when a master data node fails or crashes, since its replica node holds the same data as the failed master data node, the replica node is directly promoted to master and takes over all the work of the failed node, ensuring normal system operation. When the master data node works normally but a replica node or check node fails, the system in this scenario can still provide service externally, and the availability and data consistency of the cache system are unaffected. Since service continues, the fault-tolerance system performs no further operation until the failed node restarts or a new node is assigned.
When a master data node and its replica node fail simultaneously, a pure replica mechanism can no longer provide normal read and write service, so the system recovers the data through the check nodes using the erasure code and continues to provide read and write service externally.
To maximize the system's ability to provide service, the memory cache system continues to provide read service in a degraded-read mode: the client's read request is forwarded to a check node, which exchanges data with the other non-failed master data nodes in its check group and performs a decoding operation to recover the data the user requires, then returns the recovered data to the client.
In some optional embodiments, the client is configured to determine a master data node where the data to be read is located, and establish a connection with the master data node where the data to be read is located.
For example, the memory caching system may be a distributed management system for a key-value cache database. The hybrid fault-tolerant architecture of the memory cache system combines the advantages of replica fault tolerance and erasure-code fault tolerance while avoiding the drawbacks of either mode alone. A comparison of the fault-tolerance schemes is shown in table 1.
In terms of storage overhead, a pure replica scheme needs 4 copies of the data to recover from the failure of any 3 nodes, giving a storage overhead of 4X; pure RS(5,3) erasure coding has a storage overhead of 1.67X; the hybrid fault-tolerant architecture proposed in this embodiment has a storage overhead of 2.67X, between the pure replica and pure erasure-code schemes. Compared with the mainstream replica scheme, introducing a small number of replica and check nodes therefore substantially reduces storage overhead. In terms of fault tolerance, all three schemes (replica, erasure code, and the hybrid scheme of this embodiment) can tolerate the simultaneous failure of any three nodes. As shown in fig. 7, the hybrid scheme of this embodiment supports the maximum number of simultaneous node failures within the same code group, which greatly ensures reliable operation of the memory cache system.
TABLE 1
[Table 1, comparing storage overhead and fault tolerance of the replica, erasure-code, and hybrid schemes, appears as an image in the original document.]
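The overhead figures quoted above follow from simple arithmetic; the sketch below reproduces them under the stated assumptions (4 full copies for the replica scheme; RS(5,3) storing n/k of the data; the hybrid keeping one replica per master data node plus the RS parity share of 2 check blocks over 3 data blocks).

```python
# Storage-overhead arithmetic behind Table 1, per the figures in the text.
replica_overhead = 4.0          # four full copies of every data block
rs_overhead = 5 / 3             # n/k for RS(5,3): ~1.67X
hybrid_overhead = 2.0 + 2 / 3   # master + replica copy, plus parity share

assert round(rs_overhead, 2) == 1.67
assert round(hybrid_overhead, 2) == 2.67
```

Equivalently, the hybrid stores 3 data + 3 replica + 2 check blocks for every 3 data blocks, i.e. 8/3 of the data, which matches the 2.67X figure.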
The memory caching system provided by the embodiment of the invention can execute the memory caching method provided by any embodiment of the invention, and the memory caching system has the corresponding functional modules and beneficial effects of the execution method.
Embodiment 4
Fig. 8 is a flowchart of a memory caching method according to a fourth embodiment of the present invention, where the method of the present embodiment is executed by the memory caching system provided in the foregoing embodiment. The memory cache system comprises a plurality of main data nodes, a plurality of copy nodes and a plurality of check nodes.
As shown in fig. 8, the method includes:
and S410, storing the write data transmitted by the client through the main data node.
And S420, backing up and storing the write-in data transmitted by the client through the copy node.
S430, receiving the written data transmitted by the client through the check node, and encoding the written data transmitted by the client to obtain check data, wherein the check data is used for recovering data.
In some optional embodiments, the method further comprises:
when a data item corresponding to the write data transmitted by the client already exists in the master data node, determining, by the master data node, data increment information based on that write data and the corresponding data in the master data node, updating the corresponding data in the master data node to the write data transmitted by the client, and sending the data increment information to the check node;
and, when the check node receives the data increment information, receiving the write data transmitted by the client through the check node.
The memory caching system provided by the embodiment of the invention can execute the memory caching method provided by any embodiment of the invention, and the memory caching system has the corresponding functional modules and beneficial effects of the execution method.
Embodiment 5
Embodiments of the present invention further provide a storage medium containing computer-executable instructions that, when executed by a computer processor, perform a memory caching method comprising:
transmitting the read data to a client through the main data node;
when the main data node fails, transmitting the read data to the client through the replica node corresponding to the main data node;
and, when the main data node and its corresponding replica node fail simultaneously, performing data recovery through the check node to obtain recovered data, and transmitting the recovered data to the client.
And/or a memory caching method comprising:
storing the write data transmitted by a client through the main data node;
backing up and storing the write data transmitted by the client through the replica node;
and receiving the write data transmitted by the client through the check node, and encoding it to obtain check data, where the check data is used for data recovery.
Of course, the storage medium including computer-executable instructions provided in the embodiments of the present invention is not limited to the method operations described above, and may also perform related operations in the memory caching method provided in any embodiment of the present invention.
From the above description of the embodiments, those skilled in the art will clearly understand that the present invention can be implemented by software plus the necessary general-purpose hardware, or by hardware alone, though the former is preferable in many cases. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product stored in a computer-readable storage medium, such as a floppy disk, read-only memory (ROM), random access memory (RAM), flash memory, hard disk, or optical disk of a computer, and including several instructions that cause a computer device (which may be a personal computer, a server, or a network device) to execute the memory caching method according to the embodiments of the present invention.
It should be understood that the flows shown above may take various forms, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders; no limitation is imposed herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A memory caching system, comprising:
a plurality of primary data nodes, configured to transmit read data to a client;
a plurality of replica nodes, wherein when a primary data node fails, the replica node corresponding to that primary data node transmits the read data to the client;
and a plurality of check nodes, configured to perform data recovery to obtain recovered data when a primary data node and its corresponding replica node fail simultaneously, and to transmit the recovered data to the client.
2. The system according to claim 1, wherein the client is configured to determine the primary data node where data to be read is located, and to establish a connection with that primary data node.
3. A memory caching system, comprising:
a plurality of primary data nodes, configured to store write data transmitted by a client;
a plurality of replica nodes, configured to back up and store the write data transmitted by the client;
and a plurality of check nodes, configured to receive the write data transmitted by the client and to encode the write data to obtain check data, wherein the check data is used for data recovery.
4. The system of claim 3, wherein the primary data node is further configured to:
when a data item corresponding to the write data transmitted by the client already exists in the primary data node, determine data increment information based on the write data transmitted by the client and the corresponding data in the primary data node;
update the corresponding data in the primary data node to the write data transmitted by the client;
and send the data increment information to the check node.
5. The system of claim 4, wherein the check node is further configured to:
receive the write data transmitted by the client when the check node receives the data increment information.
6. A memory caching method, performed by a memory caching system comprising a plurality of primary data nodes, a plurality of replica nodes, and a plurality of check nodes, the method comprising:
transmitting read data to a client through the primary data node;
transmitting the read data to the client through the replica node corresponding to the primary data node when the primary data node fails;
and, when the primary data node and its corresponding replica node fail simultaneously, performing data recovery through the check node to obtain recovered data, and transmitting the recovered data to the client.
7. The method according to claim 6, wherein the client determines the primary data node where data to be read is located, and establishes a connection with that primary data node.
8. A memory caching method, performed by a memory caching system comprising a plurality of primary data nodes, a plurality of replica nodes, and a plurality of check nodes, the method comprising:
storing, through the primary data node, write data transmitted by a client;
backing up and storing, through the replica node, the write data transmitted by the client;
and receiving, through the check node, the write data transmitted by the client, and encoding the write data to obtain check data, wherein the check data is used for data recovery.
9. The method of claim 8, further comprising:
when a data item corresponding to the write data transmitted by the client already exists in the primary data node, determining, by the primary data node, data increment information based on the write data transmitted by the client and the corresponding data in the primary data node, updating the corresponding data in the primary data node to the write data transmitted by the client, and sending the data increment information to the check node;
and receiving, through the check node, the write data transmitted by the client when the check node receives the data increment information.
10. A computer-readable storage medium storing computer instructions that, when executed, cause a processor to implement the memory caching method of any one of claims 6 to 9.
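The incremental update described in claims 4, 5, and 9 can be sketched under an assumed XOR-parity scheme (names and the choice of XOR are illustrative assumptions): on an overwrite, the primary derives the "data increment information" as the XOR delta of the old and new values, stores the new value, and ships only the delta to the check node, which folds it into its check data instead of re-encoding the full write.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))


def update(key: str, new_value: bytes, primary: dict, check_data: dict) -> None:
    old = primary.get(key)
    if old is None:
        primary[key] = new_value
        check_data[key] = new_value              # first write: encode the full value
    else:
        delta = xor_bytes(old, new_value)        # "data increment information"
        primary[key] = new_value                 # update the primary in place
        check_data[key] = xor_bytes(check_data[key], delta)  # fold delta into parity


primary = {"k": b"\x01"}
check_data = {"k": b"\x01"}
update("k", b"\x07", primary, check_data)  # delta = 0x01 ^ 0x07 = 0x06
```

Since `old ^ delta == new`, the check node's parity stays consistent with the current value while only the (typically small) delta crosses the network, which is the apparent motivation for sending increment information rather than the full write.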
CN202211336623.7A 2022-10-28 2022-10-28 Memory caching system, method and storage medium Pending CN115599585A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211336623.7A CN115599585A (en) 2022-10-28 2022-10-28 Memory caching system, method and storage medium


Publications (1)

Publication Number Publication Date
CN115599585A true CN115599585A (en) 2023-01-13

Family

ID=84851472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211336623.7A Pending CN115599585A (en) 2022-10-28 2022-10-28 Memory caching system, method and storage medium

Country Status (1)

Country Link
CN (1) CN115599585A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination