CN112199240A - Method for switching nodes during node failure and related equipment


Info

Publication number
CN112199240A
CN112199240A (application number CN201911057449.0A)
Authority
CN
China
Prior art keywords
node
container
main
storage device
standby
Prior art date
Legal status
Granted
Application number
CN201911057449.0A
Other languages
Chinese (zh)
Other versions
CN112199240B (en)
Inventor
Zheng Yingfei (郑营飞)
Current Assignee
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority: PCT/CN2020/097262 (published as WO2021004256A1)
Publication of CN112199240A
Application granted
Publication of CN112199240B
Legal status: Active

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06F: Electric Digital Data Processing
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/07: Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16: Error detection or correction of the data by redundancy in hardware
    • G06F 11/20: Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2002: Error detection or correction of the data by redundancy in hardware using active fault-masking, where interconnections or communication control functionality are redundant

Abstract

The application provides a method, and related equipment, for switching nodes when a node fails. The main node and the standby node are both connected to a storage device, but at any given time only the main node can access the user data in the storage device and provide services to the user. The standby node can, however, access the status flag of the main node held in the storage device. During operation, the standby node reads this status flag, determines from it whether the main node has failed, and takes over as the main node when it detects a failure. The method ensures that, even when several nodes sharing one storage device cannot sense each other directly, a standby node can still accurately perceive the state of the main node and take over when the main node fails, thereby improving the reliability of the application.

Description

Method for switching nodes during node failure and related equipment
Technical Field
The invention relates to the field of cloud computing and storage systems, and in particular to a method, and related equipment, for switching nodes when a node fails.
Background
In a cloud computing scenario, applications that provide services to users are typically deployed in containers on virtual or physical machines. To ensure the reliability of a service application, each application corresponds to one main container and at least one standby container, and the main and standby containers share a common storage device. In the normal working state, only the main container may read and write data in the storage device and provide services externally; the standby container cannot read or write that data and can only monitor the state of the main container. When the main container fails, the standby container takes over its work: it is promoted to main container and reads and writes the storage device to provide services.
At present, physical machines and storage devices communicate through the Small Computer System Interface (SCSI) protocol. For a main container and a standby container deployed on different physical machines, the main container can lock the storage device through the SCSI lock command provided by the protocol, and because network connections exist between the physical machines, the standby container can monitor the state of the main container over such a connection. When the main container fails, the standby container detects the failure in time, immediately promotes itself to main container, and continues to provide services externally. In practice, however, a standby container may be created on demand and may even be created on the same physical machine as the main container, in which case it cannot establish a network connection with the main container and therefore cannot monitor the main container's state.
Therefore, ensuring that the standby container accurately senses the state of the main container, and switches over when the main container fails, is an urgent problem to be solved.
Disclosure of Invention
The embodiments of the invention disclose a node-switching method for node failure, and related equipment, which ensure that, even when several nodes sharing a storage device cannot sense each other, a standby node can accurately perceive the state of the main node and take over when the main node fails, thereby improving the reliability of the application.
In a first aspect, the present application provides a method for switching between nodes, including: a standby node detects a status flag of a main node stored in a storage device and determines from the status flag whether the main node has failed, the main node being the node that accesses data in the storage device and provides services to users; when the standby node determines from the status flag that the main node has failed, the standby node takes over as the main node.
In this embodiment of the application, the standby node does not need to establish a heartbeat connection with the main node to sense its state directly. Instead, it determines indirectly whether the main node has failed by checking the status flag of the main node stored in the storage device, and takes over and provides services externally when the main node fails, improving the reliability of the application.
With reference to the first aspect, in a possible implementation manner of the first aspect, the status flag is a heartbeat value of the master node; the standby node periodically detects whether the heartbeat value of the master node stored in the storage device has been updated, and if the heartbeat value has not been updated, determines that the master node has failed.
In this embodiment of the application, the master node periodically updates the heartbeat value stored in the storage device, for example by incrementing it by one each period, so the standby node can determine whether the master node has failed by periodically checking that value. The standby node can thus still accurately sense the state of the master node without establishing a heartbeat connection to it.
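As a minimal illustrative sketch (not taken from the patent), the standby side of this heartbeat check could look as follows in Python, where `read_heartbeat` is an assumed caller-supplied accessor for the heartbeat area of the shared device, and the period and threshold parameters are likewise assumptions:

```python
import time

def monitor_master(read_heartbeat, period_s=5.0, max_missed=2):
    """Poll the master's heartbeat value in shared storage.

    Returns once the heartbeat has not changed for `max_missed`
    consecutive polling periods, i.e. the master is presumed failed.
    `read_heartbeat` reads the heartbeat area of the shared device.
    """
    last_seen = read_heartbeat()
    missed = 0
    while True:
        time.sleep(period_s)
        current = read_heartbeat()
        if current == last_seen:
            missed += 1            # no update within this period
            if missed >= max_missed:
                return             # presume failure; start contention
        else:
            missed = 0             # heartbeat advanced: master alive
            last_seen = current
```

With `max_missed=2`, the function returns as soon as two consecutive polls observe the same heartbeat value, matching the "two monitoring periods" example used later in the description.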
With reference to the first aspect, in a possible implementation manner of the first aspect, the storage device further stores the mark of the master node, and taking over the master node includes the standby node updating the mark of the master node in the storage device to its own mark.
In this embodiment of the application, the storage device stores the mark of only one node (the mark of the current master node). When the standby node takes over, it must update that mark to its own, so that the other standby nodes can tell that a new master node now exists and refrain from accessing the storage device, which guarantees data consistency and application reliability.
With reference to the first aspect, in a possible implementation manner of the first aspect, the standby node writes its own mark into the storage device every first preset duration, and reads the node mark stored in the storage device a first preset duration after each write; within a second preset duration, when the node mark read by the standby node is the same as its own mark N consecutive times, the standby node stops writing its mark into the storage device, where N is a positive integer greater than or equal to 1.
In the scheme provided by the application, each standby node writes its own mark into the storage device while competing for the master role, and a mark written later overwrites one written earlier, so a later writer is more likely to win. If a standby node reads back a mark identical to its own N consecutive times, for example three times, it is considered to have won the contention and becomes the new master node. This contention mechanism improves the accuracy of master selection, prevents multiple nodes from accessing the storage device simultaneously, and ensures the reliability of the application.
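The contention rule just described can be sketched as follows; `read_mark` and `write_mark` stand in for accesses to the mark area of the shared device, and all names and timing parameters are illustrative assumptions rather than part of the patent:

```python
import time

def contend_for_master(read_mark, write_mark, my_mark,
                       step_s=0.1, timeout_s=5.0, need=3):
    """Compete to become the new master node (illustrative sketch).

    Every `step_s` seconds the node writes its own mark, waits, then
    reads the mark back; a later write overwrites an earlier one.  If
    the node reads back its own mark `need` consecutive times within
    `timeout_s`, it has won the election and stops writing.
    """
    deadline = time.monotonic() + timeout_s
    hits = 0
    while time.monotonic() < deadline:
        write_mark(my_mark)
        time.sleep(step_s)
        if read_mark() == my_mark:
            hits += 1
            if hits >= need:       # N consecutive matches: we won
                return True
        else:
            hits = 0               # another contender overwrote us
    return False                   # lost within the time limit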
With reference to the first aspect, in a possible implementation manner of the first aspect, after the standby node takes over the master node, it clears the heartbeat value stored in the storage device and then updates the heartbeat value periodically.
In this embodiment of the application, after the standby node takes over, the storage device stores the standby node's mark but still holds the heartbeat value of the original master node. The standby node therefore clears the heartbeat value, which indirectly informs the other standby nodes that a new master node exists, and then updates it periodically so that the other standby nodes can sense the new master's state.
With reference to the first aspect, in a possible implementation manner of the first aspect, the standby node periodically reads the mark and the heartbeat value stored in the storage device, and checks whether the mark is its own mark and whether the heartbeat value equals the value it wrote in the previous period; only when both checks pass does the standby node update the heartbeat value.
In this embodiment of the present application, after taking over the master node, the standby node must update the heartbeat value periodically so that the other standby nodes can sense its state. Before each update, it checks that the mark stored in the storage device is still its own and that the stored heartbeat value equals the value it wrote in the previous period; the heartbeat value is updated only when both conditions hold.
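A sketch of one such guarded update, with hypothetical accessor functions for the mark and heartbeat areas (all names are illustrative assumptions):

```python
def refresh_heartbeat(read_mark, read_hb, write_hb, my_mark, last_written):
    """One guarded heartbeat update by the current master (sketch).

    The heartbeat advances only if the mark stored in the device is
    still this node's own mark AND the stored heartbeat equals the
    value this node wrote in the previous period; any mismatch means
    another node took over or an unexpected fault occurred.
    Returns the new heartbeat value, or None if a guard failed.
    """
    if read_mark() != my_mark:
        return None                # a different node owns the device
    stored = read_hb()
    if stored != last_written:
        return None                # unexpected change: do not update
    new_value = stored + 1         # e.g. 15 -> 16
    write_hb(new_value)
    return new_value
```

A `None` result signals the caller that it must stop acting as master instead of writing a new heartbeat.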
With reference to the first aspect, in a possible implementation manner of the first aspect, the standby node detects whether a master node mark is stored in the storage device; if no master node mark is stored, the standby node takes over as the master node.
In this embodiment of the application, when a standby node starts up, it determines whether a master node currently exists by checking for a master node mark in the storage device. If no master node exists, the standby node takes over directly, without having to periodically detect the master node's status flag, which improves contention efficiency.
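The startup decision reduces to a simple check; `read_mark` is again a hypothetical accessor that returns the stored master mark, or `None` when the mark area is empty:

```python
def on_startup(read_mark):
    """Choose the initial role of a freshly started node (sketch).

    If no master mark is stored in the shared device, no master
    exists yet, so the node may take over directly and skip the
    periodic status-flag detection; otherwise it starts as standby.
    """
    if read_mark() is None:        # empty mark area: no master exists
        return "take-over"
    return "standby"               # a master exists; monitor it
```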
In a second aspect, the present application provides a node, comprising: a detection module, configured to detect a status flag of a master node stored in a storage device and determine from the status flag whether the master node has failed, the master node being the node that accesses data in the storage device and provides services to users; and a takeover module, configured to take over the master node when the detection module determines from the status flag that the master node has failed.
With reference to the second aspect, in a possible implementation manner of the second aspect, the status flag is a heartbeat value of the master node, and when detecting the status flag of the master node stored in the storage device and determining from it whether the master node has failed, the detection module is specifically configured to: periodically detect whether the heartbeat value of the master node stored in the storage device has been updated, and determine that the master node has failed if the heartbeat value has not been updated.
With reference to the second aspect, in a possible implementation manner of the second aspect, the storage device further stores the mark of the master node, and when taking over the master node, the takeover module is specifically configured to: update the mark of the master node in the storage device to the mark of the node.
With reference to the second aspect, in a possible implementation manner of the second aspect, when updating the mark of the master node in the storage device to the mark of the node, the takeover module is specifically configured to: write the node's own mark into the storage device every first preset duration, and read the mark stored in the storage device a first preset duration after each write; within a second preset duration, when the mark read N consecutive times is the same as the mark written, stop writing the node's mark into the storage device, where N is a positive integer greater than or equal to 1.
With reference to the second aspect, in a possible implementation manner of the second aspect, after taking over the master node, the takeover module is further configured to clear the heartbeat value stored in the storage device to zero and update the heartbeat value periodically.
With reference to the second aspect, in a possible implementation manner of the second aspect, when periodically updating the heartbeat value, the takeover module is specifically configured to: periodically read the mark and the heartbeat value stored in the storage device, and check whether the mark is the node's own mark and whether the heartbeat value equals the value the node wrote in the previous period; the heartbeat value is updated only when both match.
With reference to the second aspect, in a possible implementation manner of the second aspect, before the detecting module detects a status flag of a master node stored in a storage device, the detecting module is further configured to detect whether the storage device stores the master node flag; the takeover module is further configured to compete for the master node when the detection module detects that no master node mark is stored in the storage device.
In a third aspect, the present application provides a computing device comprising a processor and a memory connected through an internal bus, the memory storing instructions that the processor calls to execute the inter-node switching method of the first aspect or any implementation manner thereof.
In a fourth aspect, the present application provides a computer storage medium storing a computer program which, when executed by a processor, implements the flow of the inter-node switching method provided in the first aspect or any implementation manner thereof.
In a fifth aspect, the present application provides a computer program product comprising instructions which, when executed by a computer, cause the computer to perform the flow of the inter-node switching method provided in the first aspect or any implementation manner thereof.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described here show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a block diagram of a system architecture communicating via the SCSI protocol according to an embodiment of the present application;
Fig. 2 is a schematic view of an application scenario provided in an embodiment of the present application;
Fig. 3 is a schematic flowchart of an inter-node switching method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of the state changes of a container during the switching process according to an embodiment of the present application;
Fig. 5 is a schematic timing diagram of the contention process according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a node according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application.
First, some terms and related technologies referred to in this application are explained with reference to the accompanying drawings, for ease of understanding by those skilled in the art.
The Small Computer System Interface (SCSI) is an independent processor standard for system-level interfaces between computers and hardware devices (e.g., hard disks, optical drives, printers, and scanners). SCSI is a general-purpose interface: host adapters and SCSI peripheral controllers can both be connected to the SCSI bus, multiple peripherals attached to the bus can work simultaneously, and the interface can transmit data synchronously or asynchronously.
A container is a virtualization technology in computer operating systems that lets processes run in relatively independent, isolated environments (with their own file systems, namespaces, resource views, and so on). Containers simplify software deployment, enhance software portability and security, and improve the utilization of system resources.
Generally, in a cloud computing scenario, an application providing a service for users is deployed as a container inside a virtual machine, and the virtual machine runs on a physical machine. To keep the application available even when a single container fails, a main container and a standby container can be set up. Both access the same storage device, but at any given time only the main container may read and write data in the storage device and provide services externally, while the standby container monitors the main container's state. When the main container fails, the standby container is promoted to main container and provides services in its place.
Fig. 1 shows a system architecture in which multiple physical machines are connected to one storage device and communicate via the SCSI protocol. As shown, physical machines 121, 122, …, 12n are connected to storage device 110 at the same time. One or more containers are deployed in each physical machine, and each container hosts an application that provides services for users. To keep the application available when a single container fails, a main container and a standby container can be set up on different physical machines. To ensure that only the main container can access the storage device at any given time, the main container must place an exclusive lock on the storage device through the physical machine where it runs. The SCSI protocol provides a lock command for this purpose, with which a physical machine can lock the storage device accessed by a container deployed on it. In practice, each container is allocated a segment of storage space on the device, so the physical machine actually locks the space allocated to the container. Once the storage device is locked, the main container can access it. If the main container then fails, however, the lock it placed on the storage device cannot be released, i.e. the lock remains.
The physical machine hosting the standby container is normally connected over the network to the physical machine hosting the main container, so the standby and main containers can also communicate. While the main container operates normally, a standby container that has a network connection to it periodically sends heartbeat messages; if the standby container receives no response within a period of time, it concludes that the main container has failed and can be promoted to main container. To deal with the lock left behind by the failed main container, the standby container can use the mandatory override lock command provided by the SCSI protocol to override the main container's exclusive lock, place its own exclusive lock on the storage device, and then access the device and provide services externally. In some scenarios, however, a standby container is created only when needed, and it may end up on the same physical machine as the main container, or on a machine with no network connection to the main container's machine. Such a newly created standby container cannot determine the main container's state through heartbeats, so when the main container fails it cannot be promoted in time, which affects the reliability of the application.
To solve this problem, the present application provides a method by which, even without a network connection between the main and standby containers, a standby container can detect a failure of the main container in time, be promoted to main container, and continue to provide services.
Fig. 2 shows a possible application scenario of an embodiment of the present application. As shown in fig. 2, in this application scenario, physical machine 2100 and physical machine 2200 are connected to storage device 2300. A virtual machine 2110 and a virtual machine 2120 are deployed in the physical machine 2100, a container 2111 runs in the virtual machine 2110, and a container 2121 runs in the virtual machine 2120; a virtual machine 2210 is deployed in the physical machine 2200, and a container 2211 runs in the virtual machine 2210. The container 2111, the container 2121 and the container 2211 form a container cluster, wherein the container 2111 is a main container and the container 2121 and the container 2211 are spare containers. The same application is deployed in the main container and the standby container, and the main container accesses the storage device 2300 to provide services to the outside during normal operation. In other embodiments, container 2111, container 2121, and container 2211 may also be deployed directly on a physical machine.
In this embodiment of the present invention, the storage device 2300 may be a physical storage device, such as a storage array or a hard disk, or a segment of storage space on such a device; containers 2111, 2121 and 2211 store in it the data generated by the applications deployed in them. Storage device 2300 comprises a mark storage area 2310, a heartbeat information storage area 2320 and a data storage area 2330. The mark storage area 2310 stores the mark of the main container, the heartbeat information storage area 2320 stores the heartbeat value of the main container, and the data storage area 2330 stores data generated while the main container runs. The main container (container 2111) has access to the entire storage device 2300, while the standby containers (containers 2121 and 2211) cannot access the data storage area 2330 but may access the mark storage area 2310 and the heartbeat information storage area 2320 in order to monitor the state of the main container.
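One possible on-device layout for the three areas, sketched in Python. The 512-byte mark area matches the example given later in the description; the heartbeat size and all offsets are illustrative assumptions, not part of the patent:

```python
# Hypothetical byte layout for storage device 2300.
MARK_OFFSET, MARK_SIZE = 0, 512        # mark storage area 2310
HB_OFFSET, HB_SIZE = 512, 8            # heartbeat information area 2320
DATA_OFFSET = 520                      # data storage area 2330

def read_mark(dev):
    """Decode the main container's mark from a raw device image."""
    raw = dev[MARK_OFFSET:MARK_OFFSET + MARK_SIZE]
    return raw.rstrip(b"\x00").decode() or None   # None: no mark stored

def read_heartbeat(dev):
    """Decode the heartbeat counter from a raw device image."""
    return int.from_bytes(dev[HB_OFFSET:HB_OFFSET + HB_SIZE], "big")
```

Under this layout, a standby container only ever touches the first 520 bytes, while the data area beyond `DATA_OFFSET` is reserved for the main container.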
While container 2111 acts as the main container, it periodically updates the heartbeat value in the heartbeat information storage area 2320, for example incrementing it by one. Containers 2121 and 2211 periodically monitor the mark storage area 2310 and the heartbeat information storage area 2320 to observe the state of container 2111; if the heartbeat value has not been updated for longer than a preset duration (for example, two monitoring periods), they can conclude that container 2111 has failed and can no longer provide services. At that point, containers 2121 and 2211 each write their own mark into the mark storage area and compete to become the new main container; this contention process is described in detail below. Suppose container 2121 wins: its mark is then stored in the mark storage area 2310 in place of the mark of container 2111, it clears the heartbeat value in the heartbeat information storage area and updates it periodically, and it accesses the data in the data storage area 2330 to provide services externally. Container 2211, having lost the contention, continues to monitor the mark storage area 2310 and the heartbeat information storage area 2320 to observe the state of container 2121.
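The contention between containers 2121 and 2211 can be modeled with a deterministic toy simulation. The fixed write ordering per round is an artificial assumption (real contenders race in wall-clock time); it is chosen here so that the last writer in each round wins, as container 2121 does in the example above:

```python
def simulate_election(contenders, need=3, max_rounds=20):
    """Toy round-based model of mark-area contention (not from the patent).

    Each round, every contender writes its mark in turn, so a later
    write overwrites an earlier one; then every contender reads the
    mark back.  `need` consecutive reads of one's own mark win.
    """
    mark_area = None
    hits = {c: 0 for c in contenders}
    for _ in range(max_rounds):
        for c in contenders:               # writes land in list order
            mark_area = c
        for c in contenders:               # each contender reads back
            hits[c] = hits[c] + 1 if mark_area == c else 0
            if hits[c] >= need:
                return c                   # this contender won
    return None                            # no winner within max_rounds
```

With the ordering `["container-2211", "container-2121"]`, the writes of container 2121 always land last, so it accumulates three consecutive successful reads and wins.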
By providing a mark storage area and a heartbeat information storage area on the storage device, a standby container can determine the state of the main container even without a network connection to it, by checking the heartbeat information the main container writes into the heartbeat information storage area 2320. When the main container fails, a new main container can be selected quickly and continue to provide services externally, ensuring the reliability of the services the containers provide.
With reference to the application scenario shown in fig. 2, a container switching method provided in an embodiment of the present application is described below with reference to fig. 3 and fig. 4, where fig. 3 is a flowchart of the container switching method and fig. 4 shows the state changes of a container during switching. The method is described in detail from the perspective of an arbitrary container and, as shown in fig. 3, includes but is not limited to the following steps:
s301: when the container is started, whether the mark storage area 2310 of the storage device 2300 is written with the mark of the main container is detected, if the mark storage area is written with the mark of the main container, step S302 is executed, and if the mark storage area is not written with the mark of the main container, step S303 is executed.
After the container is started, it is first detected whether the mark storage area 2310 of the storage device 2300 is written with the mark of the main container. A mark storage area 2310 in the storage device 2300 is used for storing a mark of the main container, wherein the size of the mark storage area 2310 can be set according to actual needs, for example, the size can be set to 512 bytes, which is not limited in this application. The marking of the container may uniquely identify a container.
If the flag storage area 2310 stores the flag of the master container, it indicates that the master container already exists in the container cluster in which the container is located, and the container is in the standby state shown in fig. 4 as a standby container.
If the tag storage area does not store the tag of the master container, which indicates that no master container exists in the container cluster where the container is located, the container may compete for the master container, and the container is in the election state shown in fig. 4.
S302: the container periodically detects whether the heartbeat value of the heartbeat information storage area 2320 changes within a preset time duration, and if so, continues to execute the step S302; if no change has occurred, the container is in the election state shown in FIG. 4 and step S303 is performed.
As shown in fig. 2, the heartbeat information storage area 2320 is used for storing the heartbeat value of the main container. When the main container operates normally, it periodically updates the heartbeat value in the heartbeat information storage area 2320. If a standby container observes that the heartbeat value keeps being updated, the main container is still working; if a standby container observes that the heartbeat value is not updated within a preset time period, for example when the heartbeat values read in two consecutive detection periods are the same, the main container has failed.
It should be understood that when the standby container detects whether the heartbeat value of the main container has been updated, it records the heartbeat value read in the previous period and compares it with the heartbeat value read in the current period. If the two values are the same, the heartbeat value of the main container has not been updated; if they differ, the heartbeat value has been updated.
Illustratively, the heartbeat value of the main container recorded by the standby container is 8, that is, the main container updated the heartbeat value to 8 in the previous period. The standby container currently reads a heartbeat value of 9, compares it with the recorded value, and thus determines that the main container has updated the heartbeat value. The standby container then continues to periodically detect the heartbeat value and updates its recorded value to 9.
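The standby container's periodic detection in step S302 can be sketched as follows. The heartbeat value is assumed, purely for illustration, to be an 8-byte little-endian counter stored at offset 512 (heartbeat information storage area 2320); the offsets and function names are not specified by the embodiment:

```python
import struct
import time

HB_OFFSET = 512  # assumed: heartbeat area 2320 follows the 512-byte mark area

def read_heartbeat(dev_path: str) -> int:
    """Read the heartbeat value of the main container from area 2320."""
    with open(dev_path, "rb") as dev:
        dev.seek(HB_OFFSET)
        return struct.unpack("<Q", dev.read(8))[0]

def master_failed(dev_path: str, period_s: float, checks: int = 2) -> bool:
    """Step S302 on the standby side: compare the heartbeat value across
    consecutive detection periods; unchanged values indicate a failure."""
    last = read_heartbeat(dev_path)       # value recorded in the last period
    for _ in range(checks):
        time.sleep(period_s)
        current = read_heartbeat(dev_path)
        if current != last:
            return False                  # heartbeat advanced: main container alive
        last = current
    return True                           # e.g. 8 read twice in a row: failure
```

The default of two consecutive identical readings mirrors the "two consecutive detection periods" example in the description.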
Before updating the heartbeat value, the main container may first read the mark stored in the mark storage area 2310 and judge whether it is the same as its own mark, and then read the heartbeat value stored in the heartbeat information storage area 2320 and judge whether it is the same as the heartbeat value it wrote in the previous period. If the read mark is the same as its own mark and the read heartbeat value is the same as the heartbeat value written in the previous period, the main container updates the heartbeat value. The heartbeat value may be a numerical value that the main container increments on each update; for example, if the heartbeat value stored in the heartbeat information storage area is 15 at the end of the previous period, it is updated to 16 in the current period, and the main container stays in the main container state shown in fig. 4. If the read mark differs from the main container's own mark, or the read heartbeat value differs from the value written in the previous period, an unpredictable fault has occurred in the system and the main container must switch state: as shown in fig. 4, the main container quits and restarts, and the whole container cluster re-elects a new main container. For example, an unstable network may cause the main container to fail, without sensing it, when writing its mark into the mark storage area 2310 or updating the heartbeat value, so that the stored mark differs from the main container's own mark; or the main container may have been disconnected for a longer period of time and subsequently recovered (without itself being aware), during which another standby container wrote a new mark into the mark storage area 2310, so that the mark the main container reads differs from its own.
It can be understood that the main container periodically reads the mark stored in the mark storage area 2310 and compares it with its own mark to determine whether to update the heartbeat value or to quit and restart. In this way the main container can be restarted in time under extreme conditions (for example, when its network connection is interrupted due to instability, or when the content of the mark storage area is changed by an unpredictable failure of the storage device 2300), which prevents it from continuing to read and write data in the storage device 2300 and thereby ensures data consistency and application reliability.
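The main container's verify-then-increment behaviour described above can be sketched as follows; the on-device layout and the helper names are assumptions carried over from the earlier illustrations, not a definitive implementation:

```python
import struct

MARK_OFF, MARK_SIZE, HB_OFF = 0, 512, 512  # assumed layout of areas 2310/2320

def _read(dev_path, off, n):
    with open(dev_path, "rb") as f:
        f.seek(off)
        return f.read(n)

def _write(dev_path, off, data):
    with open(dev_path, "r+b") as f:
        f.seek(off)
        f.write(data)

def master_tick(dev_path: str, my_mark: bytes, last_hb: int):
    """One heartbeat period of the main container: re-read the mark and the
    previously written heartbeat; increment only if both still match (15 -> 16).
    Returns the new heartbeat value, or None meaning quit-and-restart (fig. 4)."""
    mark = _read(dev_path, MARK_OFF, MARK_SIZE).rstrip(b"\x00")
    hb = struct.unpack("<Q", _read(dev_path, HB_OFF, 8))[0]
    if mark != my_mark or hb != last_hb:
        return None  # mark overwritten or heartbeat inconsistent: step down
    _write(dev_path, HB_OFF, struct.pack("<Q", last_hb + 1))
    return last_hb + 1
```

Returning `None` corresponds to the quit-and-restart transition in fig. 4; the caller would stop serving and re-enter the startup check of step S301.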
S303: the container periodically writes its own mark into the mark storage area 2310 and, with the same period, reads the mark stored in the mark storage area 2310.
When the container determines at startup in step S301 that the mark storage area 2310 does not store the mark of the main container, that is, no main container exists, or determines in step S302 that the heartbeat information in the heartbeat information storage area 2320 has not changed, that is, the main container has failed, the container enters the election state shown in fig. 4 and may compete to become the main container.
Since there may be multiple standby containers in a container cluster, such as container 2121 and container 2211 in fig. 2, when the cluster has no main container or the main container fails, multiple standby containers may compete for the main container at the same time. As shown in fig. 5, during the contention container 2121 first writes its own mark 1 into the mark storage area 2310 at time t1, and container 2211 writes its own mark 2 into the mark storage area 2310 at time t2, where t1 is earlier than t2; mark 2 written by container 2211 therefore overwrites mark 1 written by container 2121, that is, mark 2 is now stored in the mark storage area 2310. After a certain interval (for example, one sleep cycle) following the write of mark 1, container 2121 reads the mark stored in the mark storage area 2310 at time t3 and reads mark 2; after another sleep cycle, it writes its own mark 1 into the mark storage area 2310 again at time t5. Similarly, one sleep cycle after writing mark 2, container 2211 reads the mark stored in the mark storage area 2310 at time t4 and reads mark 2; after another sleep cycle, it writes its own mark 2 into the mark storage area 2310 again at time t6.
S304: within a preset time duration, the container detects whether the mark read from the mark storage area 2310 in N consecutive reads is the same as the mark it wrote. If so, step S305 is executed; if not, step S302 is executed.
Specifically, as described above with reference to fig. 4, each standby container writes its own mark into the mark storage area 2310, then reads it back, then writes again, repeating this cycle periodically. In the contention shown in fig. 5, a container that wrote its mark earlier, such as container 2121, reads back the mark of the container that wrote last, so the mark it reads differs from the mark it wrote; the container that wrote last, such as container 2211, reads back the same mark it wrote. If the mark a container reads in N consecutive reads is the same as the mark it wrote, that container is upgraded to the new main container; the other containers abandon the contention, stop writing their own marks into the mark storage area 2310, resume detecting the heartbeat value in the heartbeat information storage area 2320, and wait for the next contention. N is a positive integer greater than or equal to 1, for example 3 or 4, which is not limited in this application.
It can be seen that, during the contention for a new main container, each container periodically writes its mark, reads the stored mark, and judges whether the two are consistent, so that a unique container whose read mark always equals its written mark is finally determined and upgraded to the main container. This improves the accuracy of the election, guarantees that the elected main container is unique, avoids multiple containers accessing the storage device at the same time, and ensures application reliability.
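Steps S303 and S304 together form the election loop; a minimal single-process sketch is given below. It simplifies the behaviour of fig. 5 by having a contender abandon on the first read that returns another container's mark, and the device layout, sleep cycle, and default N are illustrative assumptions:

```python
import time

MARK_OFF, MARK_SIZE = 0, 512  # assumed layout of mark storage area 2310

def compete_for_master(dev_path: str, my_mark: bytes,
                       sleep_s: float = 0.01, n_required: int = 3) -> bool:
    """Write own mark, sleep one cycle, read it back; N consecutive matching
    reads upgrade this container to main container (step S305). A mismatching
    read means another container wrote later, so give up and return to S302."""
    matches = 0
    while matches < n_required:
        with open(dev_path, "r+b") as f:          # write own mark (step S303)
            f.seek(MARK_OFF)
            f.write(my_mark.ljust(MARK_SIZE, b"\x00"))
        time.sleep(sleep_s)                       # one sleep cycle
        with open(dev_path, "rb") as f:           # read the mark back (step S304)
            f.seek(MARK_OFF)
            stored = f.read(MARK_SIZE).rstrip(b"\x00")
        if stored != my_mark:
            return False                          # lost the contention
        matches += 1
        time.sleep(sleep_s)                       # sleep before writing again
    return True                                   # N consecutive matches: won
```

With two contenders running this loop concurrently, the later writer's mark overwrites the earlier writer's, so, as in fig. 5, at most one contender can accumulate N consecutive matching reads.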
S305: the container is upgraded to the main container and accesses the data in the storage device to provide service to the outside.
Specifically, after determining that it has been upgraded to the main container, the container accesses the data in the storage device to provide service to the outside, clears the heartbeat value in the heartbeat information storage area 2320, and then periodically updates the heartbeat value. The container is now in the main container state shown in fig. 4.
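The takeover of step S305, clearing the heartbeat value before starting the periodic updates, can be sketched as follows with the same assumed layout:

```python
import struct

HB_OFFSET = 512  # assumed offset of heartbeat information storage area 2320

def take_over(dev_path: str) -> int:
    """Step S305: the newly elected main container zeroes the heartbeat value,
    then begins updating it periodically from this fresh starting point."""
    with open(dev_path, "r+b") as f:
        f.seek(HB_OFFSET)
        f.write(struct.pack("<Q", 0))
    return 0  # the value the main container verifies and increments next period
```

The returned value would be passed to the next heartbeat period as the "heartbeat written in the previous period" that the main container verifies before incrementing.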
It should be understood that steps S301 to S305 involved in the above method embodiments are merely schematic descriptions and should not constitute a specific limitation, and the steps involved may be added, reduced or combined as needed.
In the embodiment illustrated in fig. 3 and fig. 4, when there are multiple standby containers in the container cluster, the standby containers compete to become the main container when the main container fails or no main container exists. In a scenario where only one standby container exists in the container cluster, however, if it is detected in step S302 that the heartbeat of the main container is not updated, that is, the main container has failed, the standby container may be directly upgraded to the main container by directly writing its mark into the mark storage area, without executing step S304, that is, without determining the main container through repeated writes and reads.
In addition, although the above embodiment takes containers as an example, the method provided by the present application is also applicable to the switching of physical machines and of virtual machines. Apart from the objects being switched, these switching methods are the same as the container switching method and are not described again here.
In the embodiment of the present application, a state mark of the main node, such as heartbeat information of the main node, is stored in the storage device. The main node periodically updates the heartbeat information, and the standby node periodically detects it; if the heartbeat information is not updated, the standby node can compete to become the main node. In this way, even if no network connection is established between the main node and the standby node through their respective physical machines, the standby node can detect a failure of the main node in time and compete to take over, so that service continues to be provided.
In addition, in the embodiment of the present application, no network connection needs to be established between the main node and the standby node. When the main node and the standby node are both physical machines, the standby physical machine can determine whether the main physical machine has failed without establishing a network connection with it. When the main node and the standby node are virtual machines or containers, the standby node may be deployed on any physical machine, for example on the same physical machine as the main node, or on a physical machine that has no network connection with the main node, thereby reducing the constraints on deploying virtual machines and containers.
The method of the embodiments of the present application is described in detail above. To better implement the above aspects of the embodiments of the present application, related equipment for implementing them in a matching manner is correspondingly provided below.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a node according to an embodiment of the present application. As shown in fig. 6, the node 600 includes a detection module 610 and a take-over module 620, wherein:
the detecting module 610 is configured to detect a status flag of a host node stored in a storage device, and determine whether the host node fails according to the status flag, where the host node is a node that accesses data in the storage device and provides a service for a user.
Specifically, the detecting module 610 is configured to perform the foregoing steps S301, S302, and S304, and optionally to perform the optional implementations of those steps.
A take-over module 620, configured to take over the master node when the detecting module 610 determines that the master node fails according to the status flag.
Specifically, the takeover module 620 is configured to perform the foregoing steps S303 and S305, and optionally to perform the optional implementations of those steps.
In one possible implementation, the status flag is a heartbeat value of the primary node; the detecting module 610 is specifically configured to, when detecting a status flag of a host node stored in a storage device and determining whether the host node fails according to the status flag: periodically detecting whether the heartbeat value of the main node stored in the storage equipment is updated or not; and if the heartbeat value of the main node is not updated, determining that the main node fails.
In a possible implementation, the storage device further stores the mark of the master node, and the takeover module 620 is specifically configured to update the mark of the master node in the storage device to the mark of the node device when taking over the master node.
In a possible implementation, when updating the mark of the master node in the storage device to the mark of the standby node, the takeover module 620 is specifically configured to: write the mark of the node into the storage device at intervals of a first preset duration, and after each first preset duration read the mark stored in the storage device at the same interval; and, within a second preset duration, stop writing the mark of the node into the storage device when the marks read in N consecutive reads are the same as the written mark, where N is a positive integer greater than or equal to 1.
In a possible implementation, after taking over the master node, the takeover module 620 is further configured to zero the heartbeat value stored in the storage device, and periodically update the heartbeat value.
In a possible implementation, when periodically updating the heartbeat value, the takeover module 620 is specifically configured to: periodically read the mark and the heartbeat value stored in the storage device, and judge whether the mark is the same as the mark of the standby node and whether the heartbeat value is the same as the heartbeat value written by the standby node in the previous period; and update the heartbeat value upon determining that the mark stored in the storage device is the same as the mark of the standby node and that the heartbeat value is the same as the heartbeat value written by the standby node in the previous period.
In a possible implementation, before the detecting module 610 detects the status flag of the master node stored in the storage device, the detecting module 610 is further configured to detect whether the storage device stores the master node flag; the take-over module 620 is further configured to take over the master node when the detection module detects that the master node flag is not stored in the storage device.
It should be understood that the structure of the node is merely an example and should not be construed as a specific limitation; the modules of the node may be added, reduced or combined as needed. In addition, for brevity, the operations and/or functions of the modules in the node, which implement the corresponding flows of the method described in fig. 3, are not described here again.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a computing device according to an embodiment of the present application. As shown in fig. 7, the computing device 700 includes: a processor 710, a communication interface 720 and a memory 730, said processor 710, communication interface 720 and memory 730 being interconnected by an internal bus 740. It should be understood that the computing device may be a database server.
The computing device 700 may be the physical machine 2110 or 2120 of fig. 2, with a container or virtual machine built in. The functions performed by the container in fig. 2 are actually performed by the processor 710 of the physical machine.
The processor 710 may be formed of one or more general-purpose processors, such as a Central Processing Unit (CPU), or a combination of a CPU and a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
The bus 740 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 740 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 7, but this does not mean that there is only one bus or one type of bus.
Memory 730 may include volatile memory, such as random access memory (RAM); the memory 730 may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 730 may also include a combination of the above. The memory 730 may further store program code, which may implement the functional modules of the node 600 shown in fig. 6, or the method steps in which the standby node is the execution subject in the method embodiment shown in fig. 3.
The present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, can implement part or all of the steps of any one of the method embodiments described above, and implement the functions of any one of the functional modules described in fig. 6 above.
Embodiments of the present application also provide a computer program product, which when run on a computer or a processor, causes the computer or the processor to perform one or more steps of any of the methods described above. The respective constituent modules of the above-mentioned apparatuses may be stored in the computer-readable storage medium if they are implemented in the form of software functional units and sold or used as independent products.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It should also be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The modules in the device can be merged, divided and deleted according to actual needs.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. An inter-node handover method, the method comprising:
the standby node detects a state mark of a main node stored in storage equipment and determines whether the main node fails according to the state mark, wherein the main node is a node which accesses data in the storage equipment and provides service for a user;
and when the standby node determines that the main node fails according to the state mark, the standby node takes over the main node.
2. The method of claim 1, wherein the status flag is a heartbeat value of the primary node;
the detecting, by the standby node, a status flag of a master node stored in a storage device, and determining whether the master node fails according to the status flag includes:
the standby node periodically detects whether the heartbeat value of the main node stored in the storage equipment is updated or not;
and if the heartbeat value of the main node is not updated, determining that the main node fails.
3. The method of claim 1 or 2, wherein the storage device further stores the master node flag therein, and wherein the standby node taking over the master node comprises:
and the standby node updates the mark of the main node in the storage equipment into the mark of the standby node.
4. The method of claim 3, wherein the backup node updating the label of the master node in the storage device to the label of the backup node comprises:
the standby node writes marks of the standby node into the storage equipment every other first preset time, and reads the node marks stored in the storage equipment every other first preset time after the first preset time;
within a second preset time, when the node marks read continuously by the standby node for N times are the same as the marks of the standby node, stopping writing the marks of the standby node into the storage device; and N is a positive integer greater than or equal to 1.
5. The method of claim 2, wherein the method further comprises:
and after the standby node takes over the main node, clearing the heartbeat value stored in the storage device, and periodically updating the heartbeat value.
6. The method of any of claims 1-5, wherein prior to the standby node detecting the status flag of the primary node stored in the storage device, the method further comprises:
the standby node detects whether a main node mark is stored in the storage equipment;
and if the storage equipment does not store the main node mark, the standby node competes for the main node.
7. A node, comprising:
the system comprises a detection module, a storage module and a processing module, wherein the detection module is used for detecting a state mark of a main node stored in the storage device and determining whether the main node fails according to the state mark, and the main node is a node which accesses data in the storage device and provides service for a user;
and the taking-over module is used for taking over the main node when the detection module determines that the main node is in fault according to the state mark.
8. The node of claim 7, wherein the status flag is a heartbeat value of the primary node; the detection module is specifically configured to, when detecting a status flag of a host node stored in a storage device and determining whether the host node is faulty according to the status flag:
periodically detecting whether the heartbeat value of the main node stored in the storage equipment is updated or not;
and if the heartbeat value of the main node is not updated, determining that the main node fails.
9. The node according to claim 7 or 8, wherein the storage device further stores the master node label, and the takeover module is specifically configured to update the label of the master node in the storage device to the label of the node when taking over the master node.
10. The node according to claim 9, wherein the takeover module, when updating the mark of the primary node in the storage device to the mark of the standby node, is specifically configured to:
writing the marks of the node equipment into the storage equipment at intervals of a first preset time length, and reading the marks stored in the storage equipment at intervals of the first preset time length after the first preset time length;
and within a second preset time length, when the marks read continuously for N times are the same as the marks written in, stopping writing the marks of the nodes into the storage device, wherein N is a positive integer greater than or equal to 1.
11. The node of claim 8, wherein
the taking-over module is further configured to zero the heartbeat value stored in the storage device after taking over the master node, and periodically update the heartbeat value.
12. The node according to any one of claims 7 to 11, wherein before the detecting module detects the status flag of the master node stored in the storage device, the detecting module is further configured to detect whether the storage device stores the master node flag;
the takeover module is further configured to compete for the master node when the detection module detects that no master node mark is stored in the storage device.
13. A computing device comprising a processor and a memory, the processor and the memory being connected by an internal bus, the memory having instructions stored therein, the processor invoking the instructions in the memory to perform the method of any of claims 1-6.
CN201911057449.0A 2019-07-08 2019-10-29 Method for switching nodes during node failure and related equipment Active CN112199240B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/097262 WO2021004256A1 (en) 2019-07-08 2020-06-19 Node switching method in node failure and related device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910612025X 2019-07-08
CN201910612025 2019-07-08

Publications (2)

Publication Number Publication Date
CN112199240A true CN112199240A (en) 2021-01-08
CN112199240B CN112199240B (en) 2024-01-30

Family

ID=74004723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911057449.0A Active CN112199240B (en) 2019-07-08 2019-10-29 Method for switching nodes during node failure and related equipment

Country Status (2)

Country Link
CN (1) CN112199240B (en)
WO (1) WO2021004256A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023109381A1 (en) * 2021-12-16 2023-06-22 中移(苏州)软件技术有限公司 Information processing method and apparatus, and storage medium

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN116107814B (en) * 2023-04-04 2023-09-22 阿里云计算有限公司 Database disaster recovery method, equipment, system and storage medium
CN116743550B (en) * 2023-08-11 2023-12-29 之江实验室 Processing method of fault storage nodes of distributed storage cluster

Citations (15)

Publication number Priority date Publication date Assignee Title
CN102231681A (en) * 2011-06-27 2011-11-02 China Construction Bank Corporation High availability cluster computer system and fault handling method thereof
US20150378959A1 (en) * 2014-06-30 2015-12-31 Echelon Corporation Multi-protocol serial nonvolatile memory interface
US20160283335A1 (en) * 2015-03-24 2016-09-29 Xinyu Xingbang Information Industry Co., Ltd. Method and system for achieving a high availability and high performance database cluster
US20170116097A1 (en) * 2015-10-22 2017-04-27 Netapp Inc. Implementing automatic switchover
CN106789246A (en) * 2016-12-22 2017-05-31 Guangxi Fangchenggang Nuclear Power Co., Ltd. Active/standby server switching method and device
CN107122271A (en) * 2017-04-13 2017-09-01 Huawei Technologies Co., Ltd. Method, apparatus and system for recovering node events
US20170373926A1 (en) * 2016-06-22 2017-12-28 Vmware, Inc. Dynamic heartbeating mechanism
CN108011737A (en) * 2016-10-28 2018-05-08 Huawei Technologies Co., Ltd. Failover method, apparatus and system
CN108351824A (en) * 2015-10-30 2018-07-31 NetApp Inc. Method, device and medium for performing a switchover operation between compute nodes
CN108880898A (en) * 2018-06-29 2018-11-23 New H3C Technologies Co., Ltd. Active/standby container system switching method and device
CN109005045A (en) * 2017-06-06 2018-12-14 Beijing Kingsoft Cloud Network Technology Co., Ltd. Active/standby service system and master node fault recovery method
CN109302445A (en) * 2018-08-14 2019-02-01 New H3C Cloud Computing Technologies Co., Ltd. Method and apparatus for determining master node state, master node and storage medium
CN109446169A (en) * 2018-10-22 2019-03-08 Beijing Institute of Computer Technology and Application Dual-controller disk array shared file system
CN109783280A (en) * 2019-01-15 2019-05-21 Shanghai Hite Control System Co., Ltd. Shared storage system and shared storage method
CN109815049A (en) * 2017-11-21 2019-05-28 Beijing Kingsoft Cloud Network Technology Co., Ltd. Node crash recovery method, apparatus, electronic device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5167028A (en) * 1989-11-13 1992-11-24 Lucid Corporation System for controlling task operation of a slave processor by switching access to shared memory banks by a master processor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Kang; Li Dongjing; Chen Haiguang: "An Improved Consistent Hashing Algorithm in Distributed Storage Systems", Computer Technology and Development, no. 07, pages 24-29 *

Also Published As

Publication number Publication date
CN112199240B (en) 2024-01-30
WO2021004256A1 (en) 2021-01-14

Similar Documents

Publication Publication Date Title
US7840662B1 (en) Dynamically managing a network cluster
US8423816B2 (en) Method and computer system for failover
US10049010B2 (en) Method, computer, and apparatus for migrating memory data
US7007192B2 (en) Information processing system, and method and program for controlling the same
CN112199240B (en) Method for switching nodes during node failure and related equipment
US7243266B2 (en) Computer system and detecting method for detecting a sign of failure of the computer system
US9367412B2 (en) Non-disruptive controller replacement in network storage systems
CN110807064B (en) Data recovery device in RAC distributed database cluster system
JP2011060055A (en) Virtual computer system, recovery processing method and of virtual machine, and program therefor
US7685461B2 (en) Method, apparatus and program storage device for performing fault tolerant code upgrade on a fault tolerant system by determining when functional code reaches a desired state before resuming an upgrade
WO2018095107A1 (en) Bios program abnormal processing method and apparatus
CN109446169B Dual-controller disk array shared file system
CN104036043A MySQL high availability method and management node
CN106874103B (en) Heartbeat implementation method and device
CN108243031B (en) Method and device for realizing dual-computer hot standby
JPH08320835A (en) Fault detecting method for external bus
US7996707B2 (en) Method to recover from ungrouped logical path failures
CN109358982B (en) Hard disk self-healing device and method and hard disk
CN111858187A (en) Electronic equipment and service switching method and device
JP2002049509A (en) Data processing system
CN114884836A (en) High-availability method, device and medium for virtual machine
JP2021043725A (en) Calculation system, calculation method, and program
WO2024000535A1 (en) Partition table update method and apparatus, and electronic device and storage medium
CN111901415B (en) Data processing method and system, computer readable storage medium and processor
CN115269556A (en) Database fault processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220216

Address after: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province

Applicant after: Huawei Cloud Computing Technology Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Applicant before: HUAWEI TECHNOLOGIES Co.,Ltd.

GR01 Patent grant