CN110635941A - Database node cluster fault migration method and device

Database node cluster fault migration method and device

Info

Publication number
CN110635941A
Authority
CN
China
Prior art keywords
node
slave
master node
nodes
voting
Prior art date
Legal status
Withdrawn
Application number
CN201910817027.2A
Other languages
Chinese (zh)
Inventor
王文庆
Current Assignee
Suzhou Wave Intelligent Technology Co Ltd
Original Assignee
Suzhou Wave Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Wave Intelligent Technology Co Ltd
Priority to CN201910817027.2A
Publication of CN110635941A
Current legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/07: Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16: Error detection or correction of the data by redundancy in hardware
    • G06F 11/20: Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/202: Error detection or correction by active fault-masking where processing functionality is redundant
    • G06F 11/2023: Failover techniques
    • G06F 11/203: Failover techniques using migration
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06: Management of faults, events, alarms or notifications
    • H04L 41/0654: Management of faults, events, alarms or notifications using network fault recovery
    • H04L 41/0663: Performing the actions predefined by failover planning, e.g. switching to standby network elements
    • H04L 41/0668: Management of faults, events, alarms or notifications using network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a database node cluster fault migration method and apparatus. When the original master node is detected to be marked as offline, each slave node executes the following steps: exchanging replication offsets with the other slave nodes to determine the slave node's delay for initiating a voting request; issuing a voting request to the other slave nodes in response to the slave node's waiting time reaching that delay while no new master node has yet been elected; counting the voting results in response to receiving votes from the other slave nodes, to determine whether the slave node is elected as the new master node; and, in response to the slave node being elected as the new master node, replacing the original master node with the new master node to provide service externally. The method and apparatus can migrate faults automatically when a database node cluster fails, removing the dependence on manual intervention and ensuring data continuity.

Description

Database node cluster fault migration method and device
Technical Field
The present invention relates to the field of databases, and in particular to a database node cluster fault migration method and apparatus.
Background
With the continuous advance of informatization, data security and reliability of business operation become more and more important. The disaster recovery backup system can provide powerful guarantee for high availability and high reliability of the business application system. In some important industries, such as finance and communication, due to the importance of user data, disaster recovery of databases draws more and more attention, and the construction of a disaster recovery system becomes an important measure for guaranteeing data security.
The K-DB Standard Cluster (K-SC) is a core function provided for high availability, data protection, and disaster recovery of the database. The K-DB Standby server stores a copy of the database in an independent physical space on a per-transaction basis. The original database being replicated is called the Primary DB (the primary library), and the database storing the data copy is called the Standby DB (the standby library). The K-DB Standard Cluster works by transmitting the redo log generated in the primary library to the standby library through a background process; the standby library then applies the received redo log to update its data.
The K-SC database cluster supports a one-master multi-slave architecture. At the present stage, after the master library fails and goes offline, the switchover must be performed manually, as follows (a sketch of the selection logic in step 2 appears after this list):
1. Detect that the primary library is down and that the service is unavailable.
2. Check the replication progress of all standby libraries and determine the slave node with the smallest synchronization delay relative to the master node.
3. Perform the manual switchover, promoting that slave node to be the new primary library.
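For illustration, the manual selection in step 2 amounts to choosing the standby library whose applied redo-log offset is largest, i.e. the one lagging least behind the primary. A minimal Python sketch of that selection logic (the function name and the offset map are illustrative assumptions; the patent prescribes no API):

def pick_new_primary(standby_offsets):
    # standby_offsets maps a standby node id to its last applied redo-log offset;
    # the standby with the largest offset has the smallest lag behind the primary.
    return max(standby_offsets, key=standby_offsets.get)

# Example: standby-2 has applied the most redo log, so it would be promoted.
print(pick_new_primary({"standby-1": 9800, "standby-2": 10150, "standby-3": 10020}))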
This manual switchover depends heavily on the skill and experience of the operations engineer, and service continuity cannot be guaranteed. There is currently no effective solution to this problem.
Disclosure of Invention
In view of this, an object of the embodiments of the present invention is to provide a database node cluster fault migration method and apparatus that can migrate faults automatically when the database node cluster fails, removing the dependence on manual intervention and ensuring data continuity.
Based on the foregoing object, a first aspect of the embodiments of the present invention provides a database node cluster fault migration method in which, when the original master node is detected to be marked as offline, each slave node executes the following steps:
having the slave node exchange replication offsets with other slave nodes to determine the slave node's delay for initiating a voting request;
issuing a voting request to other slave nodes in response to the slave node's waiting time reaching the voting request delay while no new master node has yet been elected;
counting the voting results in response to the slave node receiving votes from other slave nodes, to determine whether the slave node is elected as the new master node;
and, in response to the slave node being elected as the new master node, replacing the original master node with the new master node to provide service externally.
In some embodiments, having the slave node exchange replication offsets with other slave nodes to determine the slave node's voting request delay comprises the following sub-steps (a sketch follows this list):
determining the replication offset of the slave node according to the slave node's data update time and the degree of data consistency between the slave node and the original master node;
having the slave node exchange replication offsets with other slave nodes to obtain the replication offset of each slave node;
and determining the voting request delay of each slave node according to its replication offset.
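As a concrete illustration of these sub-steps, the sketch below derives a voting request delay from a slave node's rank among the exchanged replication offsets: rank 0 (the largest offset) gets the shortest delay. The base delay, step size, and jitter are illustrative assumptions, not values prescribed by the patent:

import random

def voting_request_delay(my_offset, peer_offsets, base_ms=500, step_ms=1000):
    # Rank 0 = largest replication offset = data most consistent with the master.
    rank = sum(1 for off in peer_offsets if off > my_offset)
    # A small random jitter (assumed) keeps equally ranked slaves from colliding.
    return base_ms + random.randrange(0, 500) + rank * step_ms

# The most up-to-date slave gets the shortest delay and so requests votes first.
print(voting_request_delay(10150, [9800, 10020]))  # rank 0: roughly 0.5 to 1 s
print(voting_request_delay(9800, [10150, 10020]))  # rank 2: about 2 s longer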
In some embodiments, the method further comprises: recording the voting result as zero votes in response to the slave node receiving no votes within a first threshold time after issuing the voting request to the other slave nodes.
In some embodiments, the method further comprises: selectively sending or not sending a vote to another slave node in response to the slave node receiving a voting request from that node.
In some embodiments, each slave node has one and only one vote, and a slave node loses the right to vote once it has sent its vote.
In some embodiments, the method further comprises: the slave node regaining its voting right in response to the election in which it sent its vote not producing a voting result within a second threshold time, where the second threshold is twice the cluster node timeout threshold.
In some embodiments, replacing the original master node with the new master node to provide service externally comprises:
setting the slave node as the new master node;
having the new master node receive and process all client requests previously handled by the original master node;
and broadcasting to the other slave nodes in the cluster that the slave node has been elected as the new master node.
In some embodiments, the method further comprises: terminating the method in response to the slave node learning that a new master node has already been elected.
A second aspect of the present invention provides a database node cluster fault migration apparatus, comprising:
a preparation module, configured to have the slave node exchange replication offsets with other slave nodes to determine the slave node's voting request delay;
a vote soliciting module, configured to issue voting requests to the other slave nodes in response to the slave node's waiting time reaching the voting request delay while no new master node has yet been elected;
a vote counting module, configured to count voting results in response to the slave node receiving votes from the other slave nodes, to determine whether the slave node is elected as the new master node;
and a migration module, configured to replace the original master node with the new master node to provide service externally, in response to the slave node being elected as the new master node.
A third aspect of an embodiment of the present invention provides a database node cluster, including:
a master node;
a plurality of slave nodes;
a processor; and
a memory storing program code executable by the processor, the program code, when executed, performing the above-described database node cluster failover method.
The invention has the following beneficial technical effects: in the database node cluster fault migration method and apparatus provided by the embodiments of the present invention, each slave node exchanges replication offsets with the other slave nodes to determine its voting request delay; issues a voting request to the other slave nodes once its waiting time has reached that delay and no new master node has yet been elected; counts the voting results when it receives votes from the other slave nodes, to determine whether it is elected as the new master node; and, if elected, replaces the original master node to provide service externally. This technical scheme achieves automatic fault migration when a database node cluster fails, removes the dependence on manual intervention, and ensures data continuity.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a database node cluster fault migration method provided in the present invention;
fig. 2 is a detailed flowchart of the database node cluster fault migration method provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish two non-identical entities or parameters that share the same name. "First" and "second" are used merely for convenience of description and should not be construed as limiting the embodiments of the present invention; the following embodiments do not repeat this note.
In view of the above, a first aspect of the embodiments of the present invention proposes an embodiment of a method capable of automatic failover when a cluster of database nodes fails. Fig. 1 is a schematic flowchart illustrating a database node cluster failover method provided in the present invention.
As shown in fig. 1, the database node cluster fault migration method includes the following steps, executed by each slave node when the original master node is detected to be marked as offline:
step S101: exchanging replication offsets with the other slave nodes to determine the slave node's delay for initiating a voting request;
step S103: issuing a voting request to the other slave nodes in response to the slave node's waiting time reaching that delay while no new master node has yet been elected;
step S105: counting the voting results in response to the slave node receiving votes from the other slave nodes, to determine whether the slave node is elected as the new master node;
step S107: in response to the slave node being elected as the new master node, replacing the original master node with the new master node to provide service externally.
The embodiment of the invention realizes automatic failover of the K-SC database cluster: after the slave nodes detect that the master node has been marked offline, the cluster automatically promotes a suitable slave node to be the new master node, which then provides service externally. To become the new master node, a slave node must not only obtain the votes of a majority of the slave nodes in the cluster, but should also hold data that is as up to date as possible compared with the other slave nodes (so that, as far as possible, a slave node whose data is consistent with the master node's is elected). To this end, before sending a voting request to the cluster, each slave node exchanges its replication offset with the other slave nodes, the slave nodes are ranked by replication offset, and each sets its voting delay according to its rank. The larger the replication offset, the smaller the slave node's lag behind the master node, the higher its rank, the earlier it may initiate a voting request, and the better its chance of becoming the new master node.
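Putting the pieces together, the following self-contained sketch simulates one election round for three slave nodes. The delay formula and the "at least half of the votes" majority test follow the description in this document; the node names, constants, and in-process message passing are illustrative assumptions:

import random

random.seed(7)  # deterministic example run

offsets = {"slave-1": 9800, "slave-2": 10150, "slave-3": 10020}

def request_delay(offset):
    rank = sum(1 for o in offsets.values() if o > offset)  # 0 = most up to date
    return 500 + random.randrange(0, 500) + rank * 1000    # assumed formula

delays = {name: request_delay(off) for name, off in offsets.items()}
order = sorted(delays, key=delays.get)  # slave-2 initiates voting first

votes = dict.fromkeys(offsets, 0)
has_voted = set()       # each slave node holds one and only one vote
new_master = None
for candidate in order:
    if new_master:
        break           # a new master has been elected; stand down
    for voter in offsets:
        if voter != candidate and voter not in has_voted:
            votes[candidate] += 1   # the voter grants its single vote
            has_voted.add(voter)
    if votes[candidate] * 2 >= len(offsets):  # at least half of the votes
        new_master = candidate

print("elected:", new_master)  # slave-2, the slave with the freshest data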
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments can be implemented by a computer program instructing related hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like. Embodiments of the computer program may achieve the same or similar effects as any of the corresponding method embodiments described above.
In some embodiments, having the slave node exchange replication offsets with other slave nodes to determine the slave node's voting request delay comprises:
determining the replication offset of the slave node according to the slave node's data update time and the degree of data consistency between the slave node and the original master node;
having the slave node exchange replication offsets with other slave nodes to obtain the replication offset of each slave node;
and determining the voting request delay of each slave node according to its replication offset.
In some embodiments, the method further comprises: recording the voting result as zero votes in response to the slave node receiving no votes within a first threshold time after issuing the voting request to the other slave nodes.
In some embodiments, the method further comprises: selectively sending or not sending a vote to another slave node in response to the slave node receiving a voting request from that node.
In some embodiments, each slave node has one and only one vote, and a slave node loses the right to vote once it has sent its vote.
In some embodiments, the method further comprises: the slave node regaining its voting right in response to the election in which it sent its vote not producing a voting result within a second threshold time, where the second threshold is twice the cluster node timeout threshold.
In some embodiments, replacing the original master node with the new master node to provide service externally comprises:
setting the slave node as the new master node;
having the new master node receive and process all client requests previously handled by the original master node;
and broadcasting to the other slave nodes in the cluster that the slave node has been elected as the new master node.
In some embodiments, the method further comprises: terminating the method in response to the slave node learning that a new master node has already been elected.
The method disclosed according to an embodiment of the present invention may also be implemented as a computer program executed by a CPU, which may be stored in a computer-readable storage medium. The computer program, when executed by the CPU, performs the above-described functions defined in the method disclosed in the embodiments of the present invention. The above-described method steps and system elements may also be implemented using a controller and a computer-readable storage medium for storing a computer program for causing the controller to implement the functions of the above-described steps or elements.
The following further illustrates embodiments of the invention according to the specific example shown in fig. 2:
(1) The slave node prepares before initiating a voting request. To become the new master node, a slave node must not only obtain a majority of the votes in the cluster, but should also hold data that is as up to date as possible compared with the other slave nodes (so that, as far as possible, a slave node whose data is consistent with the master node's is elected). Therefore, before sending a voting request to the cluster, the slave node exchanges its replication offset with the other slave nodes, ensuring that slave nodes with larger replication offsets initiate voting earlier.
(2) The slave node initiates a voting request. When a slave node detects that its voting request time has arrived and the cluster has not yet elected a new master node, it sends a FAILOVER_AUTH_REQUEST message to all nodes in the cluster to request their votes.
(3) All slave nodes in the cluster vote. On detecting a slave node's voting request, the other slave nodes attempt to vote for the node requesting the failover. If a node receiving the voting request can still vote, it returns a FAILOVER_AUTH_ACK message to the requester. Two points should be noted in the voting stage: each node holds one and only one vote, and once a node has cast its vote it will not accept voting requests from other nodes in the same election; and the timeout of an election is twice the cluster node timeout, so once an election times out, the node regains its voting right.
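A sketch of the voting rule just described: each node holds a single vote, and the vote is regained only after the election times out, the timeout being twice the cluster node timeout. Message transport is omitted, and the constants and names are illustrative assumptions:

import time

NODE_TIMEOUT_S = 1.0                     # assumed cluster node timeout
ELECTION_TIMEOUT_S = 2 * NODE_TIMEOUT_S  # an election expires after twice that

class Voter:
    """Holds the single vote a slave node may grant per election."""
    def __init__(self):
        self.voted_at = None             # monotonic time of the last vote cast

    def handle_auth_request(self, candidate):
        now = time.monotonic()
        if self.voted_at is not None and now - self.voted_at < ELECTION_TIMEOUT_S:
            return False                 # vote already spent; ignore the request
        self.voted_at = now              # spend the vote on this candidate
        return True                      # i.e. reply with FAILOVER_AUTH_ACK

v = Voter()
print(v.handle_auth_request("slave-2"))  # True: ACK granted
print(v.handle_auth_request("slave-3"))  # False: only one vote per election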
(4) The slave node collects votes. Each time the slave node receives a FAILOVER_AUTH_ACK reply, it increases its count of supporting votes. If, in the heartbeat function, it detects that it has obtained sufficient votes (at least half of the nodes' votes), indicating that it has been elected as the new master node, it attempts the failover. The failover proceeds as follows: 1) change the identity of the current node from slave node to master node; 2) take over all client requests originally handled by the failed master node; 3) broadcast to the cluster, informing the other nodes that the current node has replaced the original master node and become the new master node.
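The vote-collecting side can be sketched the same way. The majority test uses the "at least half of the nodes' votes" wording above, and the three failover actions mirror steps 1) to 3); the class and method names are illustrative assumptions:

class Candidate:
    """Counts FAILOVER_AUTH_ACK replies and fails over on reaching a majority."""
    def __init__(self, name, cluster_size):
        self.name = name
        self.cluster_size = cluster_size  # number of voting nodes in the cluster
        self.acks = 0
        self.is_master = False

    def handle_auth_ack(self):
        self.acks += 1
        if not self.is_master and self.acks * 2 >= self.cluster_size:
            self.failover()               # sufficient votes: attempt the failover

    def failover(self):
        self.is_master = True             # 1) identity change: slave -> master
        print(f"{self.name}: taking over the old master's client requests")  # 2)
        print(f"{self.name}: broadcasting that it is the new master node")   # 3)

c = Candidate("slave-2", cluster_size=3)
c.handle_auth_ack()   # one vote: not yet at least half of three
c.handle_auth_ack()   # two of three votes: majority reached, failover runs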
When this failover completes, the slave node has replaced the failed master node and become the new master node. The embodiment of the invention effectively avoids split votes, reduces the probability that a slave node with a large synchronization lag behind the master node is elected as the new master node, realizes database failover quickly and accurately, and ensures service continuity.
As can be seen from the foregoing embodiments, in the database node cluster fault migration method provided by the embodiments of the present invention, each slave node exchanges replication offsets with the other slave nodes to determine its voting request delay; issues a voting request to the other slave nodes once its waiting time has reached that delay and no new master node has yet been elected; counts the voting results when it receives votes from the other slave nodes, to determine whether it is elected as the new master node; and, if elected, replaces the original master node to provide service externally. This technical scheme achieves automatic fault migration when a database node cluster fails, removes the dependence on manual intervention, and ensures data continuity.
It should be particularly noted that the steps in the foregoing embodiments of the database node cluster fault migration method may be intersected, replaced, added to, or deleted from; such reasonable permutations, combinations, and transformations of the method also belong to the scope of the present invention, which should not be limited to the described embodiments.
In view of the above, a second aspect of the embodiments of the present invention provides an embodiment of an apparatus capable of automatic fault migration when a database node cluster fails. The database node cluster fault migration apparatus comprises:
a preparation module, configured to have the slave node exchange replication offsets with other slave nodes to determine the slave node's voting request delay;
a vote soliciting module, configured to issue voting requests to the other slave nodes in response to the slave node's waiting time reaching the voting request delay while no new master node has yet been elected;
a vote counting module, configured to count voting results in response to the slave node receiving votes from the other slave nodes, to determine whether the slave node is elected as the new master node;
and a migration module, configured to replace the original master node with the new master node to provide service externally, in response to the slave node being elected as the new master node.
The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments of the present invention.
In view of the foregoing, a third aspect of the embodiments of the present invention provides an embodiment of a database node cluster capable of automatic failover when the database node cluster fails. The database node cluster comprises:
a master node;
a plurality of slave nodes;
a processor; and
a memory storing program code executable by the processor, the program code, when executed, performing the above-described database node cluster failover method.
As can be seen from the foregoing embodiments, in the database node cluster fault migration apparatus and the database node cluster provided by the embodiments of the present invention, each slave node exchanges replication offsets with the other slave nodes to determine its voting request delay; issues a voting request to the other slave nodes once its waiting time has reached that delay and no new master node has yet been elected; counts the voting results when it receives votes from the other slave nodes, to determine whether it is elected as the new master node; and, if elected, replaces the original master node to provide service externally. This technical scheme achieves automatic fault migration when a database node cluster fails, removes the dependence on manual intervention, and ensures data continuity.
It should be particularly noted that, in the embodiments of the fault migration apparatus and the database node cluster, the working process of each module has been specifically described in the embodiment of the fault migration method, and those skilled in the art can readily apply these modules to the other embodiments of the method. Of course, since the steps in the method embodiment may be intersected, replaced, added to, or deleted from, these reasonable permutations, combinations, and transformations should likewise fall within the scope of the present invention for the apparatus and the cluster, and the scope should not be limited to the described embodiment.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items. The numbers of the embodiments disclosed in the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will understand that the discussion of any embodiment above is merely exemplary and is not intended to imply that the scope of the disclosure of the embodiments of the invention, including the claims, is limited to these examples. Within the spirit of the embodiments of the invention, technical features of the above embodiment or of different embodiments may also be combined, and many other variations of different aspects of the embodiments exist as described above; they are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like made within the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the embodiments of the present invention.

Claims (10)

1. A database node cluster fault migration method, characterized by comprising the following steps, executed by each slave node when the original master node is detected to be marked as offline:
having the slave node exchange replication offsets with other slave nodes to determine the slave node's delay for initiating a voting request;
issuing a voting request to other slave nodes in response to the slave node's waiting time reaching the voting request delay and no new master node having been elected yet;
counting voting results in response to the slave node receiving votes from other slave nodes, to determine whether the slave node is elected as the new master node;
and, in response to the slave node being elected as the new master node, replacing the original master node with the new master node to provide service externally.
2. The method of claim 1, wherein having the slave node exchange replication offsets with other slave nodes to determine the slave node's voting request delay comprises:
determining the replication offset of the slave node according to the slave node's data update time and the degree of data consistency between the slave node and the original master node;
having the slave node exchange replication offsets with other slave nodes to obtain the replication offset of each slave node;
and determining the voting request delay of each slave node according to its replication offset.
3. The method of claim 1, further comprising: recording the voting result as zero votes in response to the slave node receiving no votes within a first threshold time after issuing the voting request to other slave nodes.
4. The method of claim 1, further comprising: selectively sending or not sending a vote to another slave node in response to the slave node receiving a voting request from that node.
5. The method of claim 4, wherein each slave node has one and only one vote, and a slave node loses the right to vote once it has sent its vote.
6. The method of claim 5, further comprising: the slave node regaining its voting right in response to the election in which it sent its vote not producing a voting result within a second threshold time, wherein the second threshold is twice the cluster node timeout threshold.
7. The method of claim 1, wherein replacing the original master node with the new master node to provide external services comprises:
setting the slave node as the new master node;
enabling the new master node to receive and process all client requests handled by the original master node;
and broadcasting to the other slave nodes in the cluster that the slave node has been elected as the new master node.
8. The method of claim 1, further comprising: terminating the method in response to the slave node learning that a new master node has already been elected.
9. A database node cluster fault migration apparatus, characterized by comprising:
a preparation module, configured to have the slave node exchange replication offsets with other slave nodes to determine the slave node's voting request delay;
a vote soliciting module, configured to issue voting requests to other slave nodes in response to the slave node's waiting time reaching the voting request delay and no new master node having been elected yet;
a vote counting module, configured to count voting results in response to the slave node receiving votes from other slave nodes, to determine whether the slave node is elected as the new master node;
and a migration module, configured to replace the original master node with the new master node to provide service externally, in response to the slave node being elected as the new master node.
10. A cluster of database nodes, comprising:
a master node;
a plurality of slave nodes;
a processor; and
a memory storing program code executable by the processor, wherein the program code, when executed, performs the database node cluster fault migration method of any one of claims 1-8.
CN201910817027.2A 2019-08-30 2019-08-30 Database node cluster fault migration method and device Withdrawn CN110635941A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910817027.2A CN110635941A (en) 2019-08-30 2019-08-30 Database node cluster fault migration method and device


Publications (1)

Publication Number Publication Date
CN110635941A 2019-12-31

Family

ID=68969756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910817027.2A Withdrawn CN110635941A (en) 2019-08-30 2019-08-30 Database node cluster fault migration method and device

Country Status (1)

Country Link
CN (1) CN110635941A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111459909A (en) * 2020-03-13 2020-07-28 北京许继电气有限公司 Method for constructing PostgreSQL L database cluster
CN113127565A (en) * 2021-04-28 2021-07-16 联通沃音乐文化有限公司 Method and device for synchronizing distributed database nodes based on external observer group
CN113704029A (en) * 2021-09-24 2021-11-26 携程旅游信息技术(上海)有限公司 Node availability management and control method, node, cluster, device and medium
CN114039978A (en) * 2022-01-06 2022-02-11 天津大学四川创新研究院 Decentralized PoW computing power cluster deployment method
CN114039978B (en) * 2022-01-06 2022-03-25 天津大学四川创新研究院 Decentralized PoW computing power cluster deployment method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20191231