US20200036576A1 - Two-channel-based high-availability - Google Patents


Info

Publication number
US20200036576A1
Authority
US
United States
Prior art keywords
node
state
determining
bfd
control channel
Prior art date
Legal status
Granted
Application number
US16/048,107
Other versions
US10530634B1 (en)
Inventor
Kai-Wei Fan
Haihua Luo
Stephen Tan
Current Assignee
VMware LLC
Original Assignee
VMware LLC
Priority date
Filing date
Publication date
Application filed by VMware LLC filed Critical VMware LLC
Priority to US16/048,107 (US10530634B1)
Assigned to VMWARE, INC. reassignment VMWARE, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: FAN, Kai-wei, LUO, HAIHUA, TAN, STEPHEN
Priority to US16/724,818 (US11349706B2)
Application granted granted Critical
Publication of US10530634B1
Publication of US20200036576A1
Assigned to VMware LLC reassignment VMware LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: VMWARE, INC.
Current legal status: Active

Classifications

    • H: ELECTRICITY; H04: ELECTRIC COMMUNICATION TECHNIQUE; H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/0663: Performing the actions predefined by failover planning, e.g. switching to standby network elements
    • H04L 69/40: Network arrangements, protocols or services for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
    • H04L 41/0668: Management of faults, events, alarms or notifications using network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
    • H04L 41/082: Configuration setting characterised by the conditions triggering a change of settings, the condition being updates or upgrades of network functionality
    • H04L 41/0895: Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
    • H04L 41/40: Maintenance, administration or management of data switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • H04L 43/0817: Monitoring or testing based on specific metrics, by checking availability by checking functioning
    • H04L 43/0829: Monitoring or testing based on specific metrics, errors, packet loss
    • H04L 43/20: Monitoring or testing of data switching networks, the monitoring system or the monitored elements being virtualised, abstracted or software-defined entities, e.g. SDN or NFV
    • H04L 45/02: Topology update or discovery
    • H04L 45/22: Alternate routing
    • H04L 45/28: Routing or path finding of packets using route fault recovery
    • H04L 45/64: Routing or path finding of packets using an overlay routing layer
    • H04L 69/22: Parsing or analysis of headers
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network

Definitions

  • a high availability system is a system that is resilient to failures of the system's components. Typically, this is achieved by providing redundant components so that if one component fails, a redundant component can take over performing the tasks of the failed component.
  • HA devices, such as edge nodes, may be organized in clusters.
  • the nodes in a cluster may work as a team to provide services even if some of the nodes fail. As long as at least one of the nodes in a cluster remains active, the cluster may provide the services configured on the nodes. Examples of the services may include load balancing, traffic forwarding, data packet processing, VPN services, DNS services, and the like.
  • Nodes in a cluster may operate in either an active mode or a standby mode. If a node in a cluster fails, then, if possible, a surviving node assumes an active role and provides the services that were configured on the failed node.
  • BFD Bidirectional Forwarding Detection
  • HA nodes in a cluster communicate with each other via Bidirectional Forwarding Detection ("BFD") channels.
  • Because a BFD channel may be configured with an aggressive timer, relying solely on communications exchanged via the BFD channel may lead to false detections of failures. For example, when no response is received to three consecutive packets sent to a node, an aggressive timer may flag failure of the node even if the node is still healthy. This may happen because the BFD traffic is usually communicated alongside the user traffic over the same channel, and the responses from the nodes are lost due to congestion caused by high-volume user traffic, not due to the node's failure. Nevertheless, a failure to timely detect BFD control packets from the node may trigger failover even if the node is still healthy.
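To make the aggressive-timer scenario concrete: BFD (RFC 5880) declares a session down after a "detect multiplier" number of consecutive transmit intervals pass without a packet. A minimal Python sketch of that arithmetic (the function name and parameter values are illustrative, not from the patent):

```python
def detection_time_ms(tx_interval_ms: int, detect_mult: int) -> int:
    """Silence longer than this declares the BFD peer down
    (detection time = detect multiplier x transmit interval, RFC 5880)."""
    return tx_interval_ms * detect_mult

# An aggressive timer: 300 ms interval, multiplier 3. After roughly
# 0.9 s of lost responses (e.g. due to congestion), failure is flagged
# even if the peer node is still healthy.
aggressive = detection_time_ms(300, 3)
```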
  • the techniques provide two-channel-based HA that relies on communications exchanged via two channels established between hosts hosting the nodes of the cluster.
  • the purpose of using two channels, instead of one, is to improve reliability of the HA support. For example, if one channel fails, then the system may rely on the information obtained via the second channel.
  • the cluster may include a pair of edge nodes, one of which operates in an active mode and another in a standby mode.
  • a pair of channels established between two hosts is configured to provide support for BFD-compliant communications.
  • One of the channels is referred to as an underlay control channel (or an underlay channel), while another channel is referred to as a management control channel (or a management channel).
  • the pair of channels may be implemented either between virtual network interface cards ("VNICs") of the hosts or between physical network interface cards ("PNICs") of the hosts.
  • if the pair of channels is implemented between VNICs, then the BFD control packets communicated via the channels are monitored by local control planes of the respective hosts. If the pair of channels is implemented between PNICs, then the BFD control packets communicated via the channels are monitored by local control planes of the operating system ("OS") of the hosts.
  • local control planes monitor BFD control packets communicated via both an underlay channel and a management channel.
  • the local control planes may, for example, extract diagnostic codes from the BFD control packets, and use the diagnostic codes to determine whether a neighbor node has failed. For example, if BFD control packets received via either channel indicate that the neighbor node has failed, then the services configured on the neighbor node may be switched over onto another node.
  • FIG. 1 is a block diagram depicting an example physical implementation view of an example logical network environment 10 for implementing two-channel-based HA for a cluster of nodes.
  • FIG. 2A is a block diagram depicting an example implementation of two-channel-based HA for a cluster of nodes.
  • FIG. 2B is a block diagram depicting an example implementation of two-channel-based HA for a cluster of nodes.
  • FIG. 3 is a block diagram depicting an example implementation of two-channel-based HA in physical network interface cards of hosts.
  • FIG. 4A is an example flow chart for implementing a two-channel-based high-availability approach.
  • FIG. 4B is an example flow chart for implementing a two-channel-based high-availability approach.
  • FIG. 5 is a block diagram depicting an example mandatory section of an example of a generic BFD control packet.
  • FIG. 1 is a block diagram depicting an example physical implementation view of an example logical network environment 10 for implementing two-channel-based HA for a cluster of nodes.
  • environment 10 includes two or more hosts 106 A, 106 B, and one or more physical networks 155 .
  • Hosts 106 A, 106 B are used to implement logical routers, logical switches and virtual machines (“VMs”). Hosts 106 A, 106 B are also referred to as computing devices, host computers, host devices, physical servers, server systems or physical machines. Each host may be configured to support several VMs. In the example depicted in FIG. 1 , host 106 A is configured to support a VM 107 A, while host 106 B is configured to support a VM 107 B. Additional VMs may also be supported by hosts 106 A- 106 B.
  • Virtual machines 107 A- 107 B are executed on hosts 106 A, 106 B, respectively, and are examples of virtualized computing instances or workloads.
  • a virtualized computing instance may represent an addressable data compute node or an isolated user space instance.
  • VMs 107 A- 107 B may implement edge nodes, edge node gateways, and the like.
  • Hosts 106 A, 106 B may also be configured to support execution of hypervisors 109 A and 109 B, respectively.
  • Hypervisors 109 A, 109 B are software layers or components that support the execution of multiple VMs, such as VMs 107 A- 107 B.
  • Hypervisors 109 A and 109 B may be configured to implement virtual switches and forwarding tables that facilitate data traffic between VMs 107 A- 107 B.
  • virtual switches and other hypervisor components may reside in a privileged virtual machine (sometimes referred to as a “Domain Zero” or “the root partition”) (not shown).
  • Hypervisors 109 A and 109 B may also maintain mappings between underlying hardware 115 A, 115 B, respectively, and virtual resources allocated to the respective VMs.
  • Hardware component 115 A may include one or more processors 116 A, one or more memory units 117 A, one or more PNICs 118 A, and one or more storage devices 121 A.
  • Hardware component 115 B may include one or more processors 116 B, one or more memory units 117 B, one or more PNICs 118 B, and one or more storage devices 121 B.
  • FIG. 2A is a block diagram depicting an example implementation of two-channel-based HA for a cluster of nodes.
  • hosts 106 A- 106 B may be edge service gateways.
  • Host 106 A provides support for VNIC 190 A and VNIC 190 AA, while
  • host 106 B provides support for VNIC 190 B and VNIC 190 BB.
  • VM 107 A supports, among other things, a local control plane 250 A and a data path process 260 A.
  • VM 107 B supports, among other things, a local control plane 250 B and a data path process 260 B.
  • Hosts 106 A- 106 B also provide support for execution of hypervisors 109 A and 109 B, respectively.
  • hardware 115 A includes, among other things, PNICs 118 A
  • hardware 115 B includes, among other things, PNICs 118 B.
  • a two-channel-based HA for a cluster of nodes is implemented using a pair of channels 170 A- 170 B: an underlay control channel 170 A and a management control channel 170 B.
  • underlay control channel 170 A is established between a VNIC 190 A and a VNIC 190 B, and it is a channel in an underlay network used to communicate overlay traffic.
  • Management control channel 170 B is established between VNIC 190 AA and VNIC 190 BB. Both channels 170 A- 170 B may be used to provide two-channel-based HA for nodes, such as VMs 107 A- 107 B. Both channels 170 A- 170 B are used to communicate BFD control packets.
  • Local control plane 250 A is configured to monitor both channels 170 A- 170 B on VM 107 A side, while local control plane 250 B is configured to monitor both channels 170 A- 170 B on VM 107 B side.
  • local control plane 250 A may monitor BFD control packets detected on interfaces configured for channels 170 A- 170 B to determine whether VM 107 B executing on host 106 B has failed.
  • to determine whether VM 107 B executing on host 106 B has failed, local control plane 250 A implements the following rules: if no BFD control packets have been received from host 106 B via either of channels 170 A- 170 B after a timeout, then local control plane 250 A deduces that VM 107 B is unreachable, and thus the services configured on VM 107 B should be switched from VM 107 B onto VM 107 A.
  • if a BFD control packet received from host 106 B via at least one of channels 170 A- 170 B includes a diagnostic code indicating that host 106 B is down, then local control plane 250 A deduces that VM 107 B is down, and thus the services configured on VM 107 B should be switched from VM 107 B onto VM 107 A.
  • otherwise, local control plane 250 A deduces that host 106 B is up and so is VM 107 B, and therefore, no switchover is needed at this time.
  • Two-channel-based HA may utilize diagnostic codes included in BFD control packets communicated via underlay control channel 170 A and management control channel 170 B. Diagnostic codes are described in detail in FIG. 5 .
  • An example of a diagnostic code is a code “7,” which indicates an “administrative down” of a node.
  • Local control plane 250 A implemented in VM 107 A receives a BFD control packet with the diagnostic code “7” when VM 107 B hosted on host 106 B enters an administrative-down-state. If VM 107 B enters an administrative-down-state, then BFD control packets with that code are most likely to be detected on interfaces of both channels, and therefore, upon receiving such BFD control packets, local control plane 250 A may generate a message or a request to initiate failover.
  • a local control plane may determine that diagnostic codes included in BFD-compliant control packets detected on interfaces of the two channels are different. In such situations, if any of channels 170 A- 170 B communicated a BFD control message indicating that VM 107 B is down, then, upon receiving such a BFD control packet, local control plane 250 A deduces that VM 107 B is indeed down, and thus local control plane 250 A generates a message or a request to initiate failover.
  • local control plane 250 A awaits receiving a BFD control packet from each of channels 170 A- 170 B. If no BFD control packet is received from underlay control channel 170 A (or management control channel 170 B) after a timeout, then local control plane 250 A deduces that either the channel is down or a corresponding VNIC is down. If local control plane 250 A does not receive any BFD control packets from either of channels 170 A- 170 B after a timeout, then local control plane 250 A may deduce that VM 107 B is unreachable, and thus VM 107 B is down. In this situation, local control plane 250 A may generate a message or a request to initiate failover.
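The rules above can be condensed into a small decision function. This is an illustrative Python sketch, not VMware code; representing each channel's latest observation as either a diagnostic code or `None` (nothing received before the timeout) is an assumption made for clarity:

```python
from typing import Optional

# Diagnostic codes treated here as "peer node is down" (assumed subset):
# 6 = concatenated path down, 7 = administratively down.
DOWN_CODES = {6, 7}

def needs_failover(underlay_diag: Optional[int],
                   mgmt_diag: Optional[int]) -> bool:
    """Each argument is the last diagnostic code seen on that channel,
    or None if no BFD control packet arrived before the timeout."""
    # Unreachable: silence on BOTH the underlay and management channels.
    if underlay_diag is None and mgmt_diag is None:
        return True
    # Down: EITHER channel carried a "down" diagnostic code.
    return any(d in DOWN_CODES
               for d in (underlay_diag, mgmt_diag) if d is not None)
```

Note that silence on only one channel is treated as a failure of that channel or its VNIC, not of the neighbor node, so no failover is triggered in that case.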
  • Local control plane 250 B mirrors the functionalities of local control plane 250 A. More specifically, local control plane 250 B may be configured to monitor both channels 170 A- 170 B and, based on BFD control packets detected on interfaces configured for channels 170 A- 170 B on the side of host 106 B, determine whether VM 107 A executing on host 106 A has failed.
  • FIG. 2B is a block diagram depicting an example implementation of two-channel-based HA for a cluster of nodes.
  • hosts 106 A- 106 B are edge service gateways.
  • OS module 135 A on host 106 A supports, among other things, a local control plane 450 A and a data path process 460 A
  • OS module 135 B on host 106 B supports, among other things, a local control plane 450 B and a data path process 460 B.
  • a two-channel-based HA for a cluster of nodes is implemented using a pair of channels 171 A- 171 B: an underlay control channel 171 A and a management control channel 171 B.
  • underlay control channel 171 A is established between a PNIC 118 A and a PNIC 118 B, and it is a channel in an underlay network used to communicate overlay traffic.
  • Management control channel 171 B is established between PNIC 118 AA and PNIC 118 BB. Both channels 171 A- 171 B may be used to provide two-channel-based HA for hosts 106 A- 106 B. Both channels 171 A- 171 B are used to communicate BFD control packets.
  • Local control plane 450 A is configured to monitor both channels 171 A- 171 B on host 106 A side, while local control plane 450 B is configured to monitor both channels 171 A- 171 B on host 106 B side.
  • local control plane 450 A may monitor BFD control packets detected on interfaces configured for channels 171 A- 171 B to determine whether host 106 B has failed.
  • to determine whether host 106 B has failed, local control plane 450 A implements the following rules: if no BFD control packets have been received from host 106 B via either of channels 171 A- 171 B after a timeout, then local control plane 450 A deduces that host 106 B is unreachable, and thus the services configured on host 106 B should be switched from host 106 B onto host 106 A.
  • if a BFD control packet received from host 106 B via at least one of channels 171 A- 171 B includes a diagnostic code indicating that host 106 B is down, then local control plane 450 A deduces that host 106 B is down, and thus the services configured on host 106 B should be switched from host 106 B onto host 106 A.
  • otherwise, local control plane 450 A deduces that host 106 B is up, and therefore, no switchover is needed at this time.
  • Two-channel-based HA may utilize diagnostic codes included in BFD control packets communicated via underlay control channel 171 A and management control channel 171 B. Diagnostic codes are described in detail in FIG. 5 .
  • An example of a diagnostic code is a code “7,” which indicates an “administrative down” of a node.
  • Local control plane 450 A implemented in an OS module 135 A receives a BFD control packet with the diagnostic code "7" when host 106 B enters an administrative-down-state. If host 106 B enters an administrative-down-state, then BFD control packets with that code are most likely to be detected on interfaces of both channels. Therefore, upon receiving such BFD control packets, local control plane 450 A may generate a message or a request to initiate failover.
  • local control plane 450 A may determine that diagnostic codes included in BFD-compliant control packets detected on interfaces of the two channels are different. In such situations, if any of channels 171 A- 171 B communicated a BFD control message indicating that host 106 B is down, then, upon receiving such a BFD control message, local control plane 450 A deduces that host 106 B is indeed down, and thus local control plane 450 A generates a message or a request to initiate failover.
  • local control plane 450 A awaits receiving a BFD control packet from each of channels 171 A- 171 B. If no BFD control packet is received from underlay control channel 171 A (or management control channel 171 B) after a timeout, then local control plane 450 A deduces that either the channel is down or a corresponding host is down. If local control plane 450 A does not receive any BFD control packets from either of channels 171 A- 171 B after a timeout, then local control plane 450 A may deduce that host 106 B is unreachable. In this situation, local control plane 450 A may generate a message or a request to initiate failover.
  • Local control plane 450 B mirrors the functionalities of local control plane 450 A. More specifically, local control plane 450 B may be configured to monitor both channels 171 A- 171 B and, based on BFD control packets detected on interfaces configured for channels 171 A- 171 B on the side of host 106 B, determine whether host 106 A has failed.
  • FIG. 3 is a block diagram depicting an example implementation of two-channel-based HA in physical network interface cards of hosts.
  • a PNIC 418 A is configured in hardware 115 A, while a PNIC 418 B is configured in hardware 115 B. Furthermore, a PNIC 419 A is configured in hardware 115 A, while a PNIC 419 B is configured in hardware 115 B.
  • underlay control channel 171 A is established between PNIC 418 A and PNIC 418 B
  • management control channel 171 B is established between PNIC 419 A and PNIC 419 B.
  • local control plane 450 B may determine a diagnostic code for data path process 460 B. Furthermore, local control plane 450 B may encapsulate the diagnostic code in a BFD control packet and copy the BFD control packet on the interface of both underlay control channel 171 A and management control channel 171 B.
  • local control plane 450 A may detect the BFD control packet with the diagnostic code on the interface of either underlay control channel 171 A or management control channel 171 B, and analyze the diagnostic code. If the code is, for example, diagnostic code "6", then local control plane 450 A may determine that a concatenated path to host 106 B is down, and thus host 106 B is temporarily unavailable. Subsequently, local control plane 450 A may generate a message to initiate failover of services configured on host 106 B onto host 106 A.
  • local control plane 450 A may determine a diagnostic code for data path process 460 A. Furthermore, local control plane 450 A may encapsulate the diagnostic code in a BFD control packet and copy the BFD control packet on the interface of both underlay channel 171 A and management channel 171 B.
  • local control plane 450 B may detect the BFD control packet with the diagnostic code on the interface of underlay channel 171 A or management channel 171 B and analyze the diagnostic code. If the code is, for example, diagnostic code "6", then local control plane 450 B may determine that a concatenated path to host 106 A is down, and thus host 106 A is temporarily unavailable. Subsequently, local control plane 450 B may generate a message to initiate failover of services configured on host 106 A onto host 106 B.
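The encapsulate-and-copy step above can be illustrated with the first two bytes of the RFC 5880 control-packet layout. The helper names are hypothetical, and the sketch deliberately omits the rest of the mandatory section:

```python
import struct

def bfd_first_bytes(diag: int, state: int = 3) -> bytes:
    """Byte 0: version (top 3 bits) | diagnostic code (low 5 bits).
    Byte 1: session state (top 2 bits, 3 = Up) | flags (zeroed here)."""
    version = 1
    return struct.pack("!BB",
                       (version << 5) | (diag & 0x1F),
                       (state & 0x3) << 6)

def send_on_both(packet: bytes, senders) -> None:
    # Copy the same BFD control packet onto the interfaces of both the
    # underlay control channel and the management control channel.
    for send in senders:
        send(packet)
```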
  • a two-channel-based HA approach is implemented between any two entities of an HA cluster. For example, if a cluster includes two edge service gateways, then local control planes implemented in the edge service gateways cooperate with each other to determine whether any of the two edge service gateways has failed.
  • the local control planes may for example, exchange BFD control packets via an underlay control channel and a management control channel.
  • the BFD control packets exchanged via the channels may be generated based on different information available to the local control planes. Based on the exchanged information, the local control planes determine whether the edge service gateways are down.
  • hosts hosting nodes of an HA cluster may establish their underlay control channels and management control channels at a VNIC-level or at a PNIC level.
  • FIG. 4A is an example flow chart for implementing a two-channel-based high-availability approach.
  • a cluster comprises two edge service gateways, each hosted on a different host, and that an underlay control channel and a management control channel are established between either corresponding VNICs or PNICs.
  • the channels are configured to facilitate BFD-compliant communications.
  • a local control plane executing on a first node initializes and stores, in memory of the local control plane, the following for a neighbor node: a first state (associated with an underlay control channel) and a second state (associated with a management control channel).
  • In step 404, the local control plane determines whether a BFD control packet has been received on either an interface of the underlay control channel or an interface of the management control channel.
  • An example of the BFD control packet is described in FIG. 5 .
  • In step 406, the local control plane tests whether such a BFD control packet has been received. If it has been, then the local control plane proceeds to performing step 408; otherwise, the local control plane proceeds to performing step 430.
  • In step 408, the local control plane determines whether the received BFD control packet was received via the underlay control channel. If it was, then the local control plane proceeds to performing step 410; otherwise, the local control plane proceeds to performing step 412.
  • the local control plane parses a mandatory section of the detected BFD control packet to determine whether any of certain diagnostic codes are set in the mandatory section.
  • the certain diagnostic codes may include a selected subset of the diagnostic codes 0-31 described in FIG. 5.
  • the subset may include for example, a diagnostic code “6” that indicates that a concatenated path to the second node is down, and a diagnostic code “7” that indicates that the second node entered a maintenance mode, and thus became unavailable.
  • the subset may also include other diagnostic codes described in FIG. 5 .
  • In step 410, the local control plane updates the first state using the diagnostic code.
  • In step 414, the local control plane determines whether both the first state and the second state indicate that the neighbor node is unreachable.
  • If the local control plane determines, in step 416, that the neighboring node is unreachable, then the local control plane proceeds to performing step 418; otherwise, the local control plane proceeds to performing step 420.
  • In step 418, the local control plane initiates a switchover of services from the neighbor node onto the node on which the local control plane is executed. Then, the local control plane proceeds to step 404.
  • In step 420, the local control plane determines whether either the first state or the second state indicates that the neighboring node is down.
  • If the local control plane determines, in step 422, that the neighboring node is down, then the local control plane proceeds to step 418; otherwise, the local control plane proceeds to performing step 404.
  • In step 412, the local control plane parses the detected BFD control packet, extracts a diagnostic code from the packet, and uses the diagnostic code to update the second state. Then the local control plane proceeds to performing step 414, described above.
  • In step 430, the local control plane proceeds to performing step 450, described in FIG. 4B.
  • FIG. 4B is an example flow chart for implementing a two-channel-based high-availability approach.
  • In step 450, the local control plane determines whether a timeout for waiting for a BFD control message from the underlay control channel has expired.
  • If the local control plane determines, in step 452, that the timeout has expired, then the local control plane proceeds to performing step 454; otherwise, the local control plane proceeds to performing step 456.
  • In step 454, the local control plane sets the first state to indicate that the neighboring node is unreachable.
  • In step 462, the local control plane proceeds to performing step 414.
  • In step 456, the local control plane determines whether a timeout for waiting for a BFD control message from the management control channel has expired.
  • If the local control plane determines, in step 458, that the timeout has expired, then the local control plane proceeds to performing step 460; otherwise, the local control plane proceeds to performing step 464.
  • In step 464, the local control plane proceeds to performing step 404, described in FIG. 4A.
  • the process described in FIG. 4A-4B may be repeated for each type of diagnostic codes that the local control plane is implemented to consider.
  • the process may also be repeated for each node in a cluster with which the first node is able to establish both an underlay control channel and a management control channel.
  • FIG. 5 is a block diagram depicting an example mandatory section 520 of an example of a generic BFD control packet 500 .
  • Generic BFD control packet 500 has a mandatory section 520 , and an optional authentication section 530 . If authentication section 530 is present, then the format of authentication section 530 depends on the type of authentication in use. Authentication section 530 is outside of the scope of this disclosure.
  • Mandatory section 520 of BFD control packet 500 includes a version field 502 , a diagnostic field 504 , a state field 506 , a P-F-C-A-D-M flag field 508 , a detection time multiplier field 510 , a BFD control packet length field 512 , and other fields.
  • Diagnostic field 504 is relevant for this disclosure, and therefore it is described in detail below.
  • Diagnostic field 504 includes five bits, and the bits are used to encode diagnostic codes.
  • the diagnostic codes include: 0—no diagnostic, 1—control detection time expired, 2—echo function failed, 3—neighbor signaled session down, 4—forwarding plane reset, 5—path down, 6—concatenated path down, 7—administratively down, 8—reverse concatenated path down, 9-31—reserved for future use.
  • a diagnostic code “6” and a diagnostic code “7” are used in a two-channel-based HA approach.
  • a local control plane, or an entity detecting a problem with a node sends a BFD control packet with a diagnostic code “6” set if a northbound routing goes down, and thus a concatenated path to, or via, the node is down.
  • a local control plane, or an entity detecting a problem with the node sends a BFD control packet with a diagnostic node “7” set if the node enters for example a maintenance mode, and the node is down by an administrator.
  • diagnostic codes such as some codes of the reserved 9-31 codes, may be used in implementing a two-channel-based HA approach.
  • Diagnostic codes included in BFD control packets may be used to determine state of a node.
  • a diagnostic code “0” indicates that a node is operational, while diagnostic codes “6”-“7” indicate that a node is down. If no BFD control message is received on both channels before a timeout, then a node is considered to be unreachable.
  • An approach presented herein provides mechanisms for two-channel-based HA in a cluster of nodes that detect failures of nodes efficiently and reliably.
  • The approach reduces, if not eliminates, false detections of node failures and unnecessary failovers in the clusters.
  • Two-channel-based HA relies on communications exchanged via two channels established between the hosts hosting the nodes of a cluster. The two channels provide support for BFD-based communications.
  • Local control planes implemented in the hosts hosting the nodes monitor BFD control packets exchanged via both channels.
  • The BFD control packets may include diagnostic codes that indicate status of, or problems with, the nodes. Based on the diagnostic codes, the local control planes may determine whether a failover is necessary.
  • The present approach may be implemented using a computing system comprising one or more processors and memory.
  • The one or more processors and memory may be provided by one or more hardware machines.
  • A hardware machine includes a communications bus or other communication mechanism for addressing main memory and for transferring data between and among the various components of the hardware machine.
  • The hardware machine also includes one or more processors coupled with the bus for processing information.
  • The processor may be a microprocessor, a system on a chip (SoC), or another type of hardware processor.
  • Main memory may be a random-access memory (RAM) or other dynamic storage device. It may be coupled to a communications bus and used for storing information and software instructions to be executed by a processor. Main memory may also be used for storing temporary variables or other intermediate information during execution of software instructions to be executed by one or more processors.
  • Stages that are not order dependent may be reordered, and other stages may be combined or broken out. While some reordering or other groupings may be specifically mentioned, others will be obvious to those of ordinary skill in the art, so the ordering and groupings presented herein are not an exhaustive list of alternatives. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software, or any combination thereof.


Abstract

A method for providing two-channel-based high-availability in a cluster of nodes is disclosed. In an embodiment, a method comprises: initiating, by a local control plane executing on a first node, a first state for an underlay control channel and a second state for a management control channel; detecting a bidirectional forwarding detection (“BFD”) control packet from a second node; determining whether the BFD control packet has been received from the underlay control channel; in response to determining that the BFD control packet was received from the underlay control channel: parsing the BFD control packet to extract a first diagnostic code; updating the first state with the first diagnostic code; determining whether both the first state and the second state indicate that the second node is unreachable; in response to determining that the second node is unreachable, initiating a switchover of services configured on the second node.

Description

    BACKGROUND
  • A high availability system is a system that is resilient to failures of the system's components. Typically, this is achieved by providing redundant components so that if one component fails, a redundant component can take over performing the tasks of the failed component.
  • HA devices, such as edge nodes, may be grouped into clusters. The nodes in a cluster may work as a team to provide services even if some of the nodes fail. As long as at least one of the nodes in a cluster remains active, the cluster may provide the services configured on the nodes. Examples of the services may include load balancing, traffic forwarding, data packet processing, VPN services, DNS services, and the like.
  • Nodes in a cluster may operate in either an active mode or a standby mode. If a node in a cluster fails, then, if possible, a surviving node assumes an active role and provides the services that were configured on the failed node.
  • Unfortunately, detecting failures of nodes in node clusters is often inefficient and difficult. Typically, HA nodes in a cluster communicate with each other via Bidirectional Forwarding Detection (“BFD”) channels. However, since the BFD channel may be configured with an aggressive timer, relying on communications exchanged via the BFD channel may lead to false detections of failures. For example, when no response is received to three consecutive packets sent to a node, an aggressive timer may flag a failure of the node even if the node is still healthy. This may happen because the BFD traffic is usually communicated alongside the user traffic over the same channel, and the responses from the nodes are lost due to congestion caused by high-volume user traffic, not due to the node's failure. Nevertheless, failure to timely detect BFD control packets from the node may trigger failover even if the node is still healthy.
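The arithmetic behind the aggressive-timer problem can be sketched as follows; the interval and multiplier values below are hypothetical, not taken from this disclosure.

```python
# A sketch of the arithmetic behind an aggressive BFD timer; the interval
# and multiplier values below are hypothetical.

def detection_time_ms(tx_interval_ms: int, detect_mult: int) -> int:
    """BFD declares a session down after detect_mult consecutive intervals
    pass with no control packet from the peer."""
    return tx_interval_ms * detect_mult

# With a 300 ms interval and a multiplier of 3, just 900 ms of
# congestion-induced packet loss flags a still-healthy node as failed.
assert detection_time_ms(300, 3) == 900
```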
  • SUMMARY
  • Techniques are presented herein for providing HA support by a node cluster. The techniques provide two-channel-based HA that relies on communications exchanged via two channels established between hosts hosting the nodes of the cluster. The purpose of using two channels, instead of one, is to improve reliability of the HA support. For example, if one channel fails, then the system may rely on the information obtained via the second channel. The cluster may include a pair of edge nodes, one of which operates in an active mode and another in a standby mode.
  • In an embodiment, a pair of channels established between two hosts is configured to provide support for BFD-compliant communications. One of the channels is referred to as an underlay control channel (or an underlay channel), while another channel is referred to as a management control channel (or a management channel). The pair of channels may be implemented either between virtual network interface cards (“VNICs”) of the hosts or between physical network interface cards (“PNICs”) of the hosts.
  • If the pair of channels are implemented between VNICs, then the BFD control packets communicated via the channels are monitored by local control planes of the respective hosts. If the pair of channels are implemented between PNICs, then the BFD control packets communicated via the channels are monitored by local control planes of the operating system (“OS”) of the hosts.
  • In an embodiment, local control planes monitor BFD control packets communicated via both an underlay channel and a management channel. The local control planes may, for example, extract diagnostic codes from the BFD control packets, and use the diagnostic codes to determine whether a neighbor node has failed. For example, if BFD control packets received via either channel indicate that the neighbor node has failed, then the services configured on the neighbor node may be switched over onto another node.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In the drawings:
  • FIG. 1 is a block diagram depicting an example physical implementation view of an example logical network environment 10 for implementing two-channel-based HA for a cluster of nodes.
  • FIG. 2A is a block diagram depicting an example implementation of two-channel-based HA for a cluster of nodes.
  • FIG. 2B is a block diagram depicting an example implementation of two-channel-based HA for a cluster of nodes.
  • FIG. 3 is a block diagram depicting an example implementation of two-channel-based HA in physical network interface cards of hosts.
  • FIG. 4A is an example flow chart for implementing a two-channel-based high-availability approach.
  • FIG. 4B is an example flow chart for implementing a two-channel-based high-availability approach.
  • FIG. 5 is a block diagram depicting an example mandatory section of an example of a generic BFD control packet.
  • DETAILED DESCRIPTION
  • In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the method described herein. It will be apparent, however, that the present approach may be practiced without these specific details. In some instances, well-known structures and devices are shown in a block diagram form to avoid unnecessarily obscuring the present approach.
  • 1. Example Physical Implementations
  • FIG. 1 is a block diagram depicting an example physical implementation view of an example logical network environment 10 for implementing two-channel-based HA for a cluster of nodes. In the depicted example, environment 10 includes two or more hosts 106A, 106B, and one or more physical networks 155.
  • Hosts 106A, 106B are used to implement logical routers, logical switches and virtual machines (“VMs”). Hosts 106A, 106B are also referred to as computing devices, host computers, host devices, physical servers, server systems or physical machines. Each host may be configured to support several VMs. In the example depicted in FIG. 1, host 106A is configured to support a VM 107A, while host 106B is configured to support a VM 107B. Additional VMs may also be supported by hosts 106A-106B.
  • Virtual machines 107A-107B are executed on hosts 106A, 106B, respectively, and are examples of virtualized computing instances or workloads. A virtualized computing instance may represent an addressable data compute node or an isolated user space instance. VMs 107A-107B may implement edge nodes, edge node gateways, and the like.
  • Hosts 106A, 106B may also be configured to support execution of hypervisors 109A and 109B, respectively.
  • Hypervisors 109A, 109B are software layers or components that support the execution of multiple VMs, such as VMs 107A-107B. Hypervisors 109A and 109B may be configured to implement virtual switches and forwarding tables that facilitate data traffic between VMs 107A-107B. In certain embodiments, virtual switches and other hypervisor components may reside in a privileged virtual machine (sometimes referred to as a “Domain Zero” or “the root partition”) (not shown). Hypervisors 109A and 109B may also maintain mappings between underlying hardware 115A, 115B, respectively, and virtual resources allocated to the respective VMs.
  • Hardware component 115A may include one or more processors 116A, one or more memory units 117A, one or more PNICs 118A, and one or more storage devices 121A.
  • Hardware component 115B may include one or more processors 116B, one or more memory units 117B, one or more PNICs 118B, and one or more storage devices 121B.
  • 2. Example Two-Channel High-Availability Configuration
  • 2.1. Example VNIC-Based Configuration
  • FIG. 2A is a block diagram depicting an example implementation of two-channel-based HA for a cluster of nodes. In the depicted example, hosts 106A-106B may be edge service gateways. Host 106A provides support for VNIC 190A and VNIC 190AA, while host 106B provides support for VNIC 190B and 190BB. VM 107A supports, among other things, a local control plane 250A and a data path process 260A. VM 107B supports, among other things, a local control plane 250B and a data path process 260B.
  • Hosts 106A-106B also provide support for execution of hypervisors 109A and 109B, respectively.
  • In the depicted example, hardware 115A includes, among other things, PNICs 118A, while hardware 115B includes, among other things, PNICs 118B.
  • In the depicted example, a two-channel-based HA for a cluster of nodes is implemented using a pair of channels 170A-170B: an underlay control channel 170A and a management control channel 170B.
  • In an embodiment, underlay control channel 170A is established between a VNIC 190A and a VNIC 190B, and it is a channel in an underlay network used to communicate overlay traffic. Management control channel 170B is established between VNIC 190AA and VNIC 190BB. Both channels 170A-170B may be used to provide two-channel-based HA for nodes, such as VMs 107A-107B. Both channels 170A-170B are used to communicate BFD control packets.
  • Local control plane 250A is configured to monitor both channels 170A-170B on VM 107A side, while local control plane 250B is configured to monitor both channels 170A-170B on VM 107B side. For example, local control plane 250A may monitor BFD control packets detected on interfaces configured for channels 170A-170B to determine whether VM 107B executing on host 106B has failed.
  • In an embodiment, to determine whether VM 107B executing on host 106B has failed, local control plane 250A implements the following rules: if no BFD control packets have been received via both channels 170A-170B from host 106B after a timeout, then local control plane 250A deduces that VM 107B is unreachable, and thus the services configured on VM 107B should be switched from VM 107B onto VM 107A. However, if a BFD control packet from at least one of channels 170A-170B from host 106B includes a diagnostic code indicating that host 106B is down, then local control plane 250A deduces that VM 107B is down, and thus the services configured on VM 107B should be switched from VM 107B onto VM 107A. In other situations, local control plane 250A deduces that host 106B is up and so is VM 107B, and therefore, no switchover is needed at this time.
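The rules above can be sketched as a small decision function; the state labels and the `decide` helper are assumed names for illustration, not the patent's implementation.

```python
# A minimal sketch of the decision rules a local control plane applies to
# the two per-channel states; names are assumptions for illustration.

UP, DOWN, UNREACHABLE = "up", "down", "unreachable"

def decide(underlay_state: str, management_state: str) -> str:
    """Return the action the local control plane takes for the neighbor node."""
    # No BFD control packets on either channel after a timeout:
    if underlay_state == UNREACHABLE and management_state == UNREACHABLE:
        return "switchover"
    # A diagnostic code on at least one channel says the neighbor is down:
    if DOWN in (underlay_state, management_state):
        return "switchover"
    # Otherwise the neighbor is up; no failover is needed at this time.
    return "no-op"

assert decide(UNREACHABLE, UP) == "no-op"   # one silent channel alone is not enough
assert decide(UP, DOWN) == "switchover"
```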
  • Two-channel-based HA may utilize diagnostic codes included in BFD control packets communicated via underlay control channel 170A and management control channel 170B. Diagnostic codes are described in detail in FIG. 5.
  • An example of a diagnostic code is a code “7,” which indicates an “administrative down” of a node. Local control plane 250A implemented in VM 107A receives a BFD control packet with the diagnostic code “7” when VM 107B hosted on host 106B enters an administrative-down-state. If VM 107B enters an administrative-down-state, then BFD control packets with that code are most likely to be detected on interfaces of both channels, and therefore, upon receiving such BFD control packets, local control plane 250A may generate a message or a request to initiate failover.
  • However, in some situations, a local control plane may determine that diagnostic codes included in BFD-compliant control packets detected on interfaces of the two channels are different. In such situations, if any of channels 170A-170B communicated a BFD control message indicating that VM 107B is down, then, upon receiving such a BFD control packet, local control plane 250A deduces that VM 107B is indeed down, and thus local control plane 250A generates a message or a request to initiate failover.
  • In some situations, local control plane 250A awaits receiving a BFD control packet from each of channels 170A-170B. If no BFD control packet is received from underlay control channel 170A (or management control channel 170B) after a timeout, then local control plane 250A deduces that either the channel is down or a corresponding VNIC is down. If local control plane 250A does not receive any BFD control packet from both channels 170A-170B after a timeout, then local control plane 250A may deduce that VM 107B is unreachable, and thus VM 107B is down. In this situation, local control plane 250A may generate a message or a request to initiate failover.
  • Functionalities of local control plane 250B mirror functionalities of local control plane 250A. More specifically, local control plane 250B may be configured to monitor both channels 170A-170B and, based on BFD control packets detected on interfaces configured for channels 170A-170B on the side of host 106B, determine whether VM 107A executing on host 106A has failed.
  • 2.2. Example PNIC-Based Configuration
  • FIG. 2B is a block diagram depicting an example implementation of two-channel-based HA for a cluster of nodes. In the depicted example, hosts 106A-106B are edge service gateways. OS module 135A on host 106A supports, among other things, a local control plane 450A and a data path process 460A, while OS module 135B on host 106B supports, among other things, a local control plane 450B and a data path process 460B.
  • In the depicted example, a two-channel-based HA for a cluster of nodes is implemented using a pair of channels 171A-171B: an underlay control channel 171A and a management control channel 171B.
  • In an embodiment, underlay control channel 171A is established between a PNIC 118A and a PNIC 118B, and it is a channel in an underlay network used to communicate overlay traffic. Management control channel 171B is established between PNIC 118AA and PNIC 118BB. Both channels 171A-171B may be used to provide two-channel-based HA for hosts 106A-106B. Both channels 171A-171B are used to communicate BFD control packets.
  • Local control plane 450A is configured to monitor both channels 171A-171B on host 106A side, while local control plane 450B is configured to monitor both channels 171A-171B on host 106B side. For example, local control plane 450A may monitor BFD control packets detected on interfaces configured for channels 171A-171B to determine whether host 106B has failed.
  • In an embodiment, to determine whether host 106B has failed, local control plane 450A implements the following rules: if no BFD control packets have been received via channels 171A-171B from host 106B after a timeout, then local control plane 450A deduces that host 106B is unreachable, and thus the services configured on host 106B should be switched from host 106B onto host 106A. However, if a BFD control packet from at least one of channels 171A-171B from host 106B includes a diagnostic code indicating that host 106B is down, then local control plane 450A deduces that host 106B is down, and thus the services configured on host 106B should be switched from host 106B onto host 106A. In the remaining situations, local control plane 450A deduces that host 106B is up, and therefore, no switchover is needed at this time.
  • Two-channel-based HA may utilize diagnostic codes included in BFD control packets communicated via underlay control channel 171A and management control channel 171B. Diagnostic codes are described in detail in FIG. 5.
  • An example of a diagnostic code is a code “7,” which indicates an “administrative down” of a node. Local control plane 450A implemented in an OS module 135A receives a BFD control packet with the diagnostic code “7” when host 106B enters an administrative-down-state. If host 106B enters an administrative-down-state, then BFD control packets with that code are most likely to be detected on interfaces of both channels. Therefore, upon receiving such BFD control packets, local control plane 450A may generate a message or a request to initiate failover.
  • However, in some situations, local control plane 450A may determine that diagnostic codes included in BFD-compliant control packets detected on interfaces of the two channels are different. In such situations, if any of channels 171A-171B communicated a BFD control message indicating that host 106B is down, then, upon receiving such a BFD control message, local control plane 450A deduces that host 106B is indeed down, and thus local control plane 450A generates a message or a request to initiate failover.
  • In some situations, local control plane 450A awaits receiving a BFD control packet from each of channels 171A-171B. If no BFD control packet is received from underlay control channel 171A (or management control channel 171B) after a timeout, then local control plane 450A deduces that either the channel is down or a corresponding host is down. If local control plane 450A does not receive any BFD control packet from both channels 171A-171B before a timeout, then local control plane 450A may deduce that host 106B is unreachable. In this situation, local control plane 450A may generate a message or a request to initiate failover.
  • Functionalities of local control plane 450B mirror functionalities of local control plane 450A. More specifically, local control plane 450B may be configured to monitor both channels 171A-171B and, based on BFD control packets detected on interfaces configured for channels 171A-171B on the side of host 106B, determine whether host 106A has failed.
  • 3. Example Two-Channel High-Availability Configuration
  • FIG. 3 is a block diagram depicting an example implementation of two-channel-based HA in physical network interface cards of hosts.
  • In the depicted example, a PNIC 418A is configured in hardware 115A, while a PNIC 418B is configured in hardware 115B. Furthermore, a PNIC 419A is configured in hardware 115A, while a PNIC 419B is configured in hardware 115B.
  • Moreover, underlay control channel 171A is established between PNIC 418A and PNIC 418B, while management control channel 171B is established between PNIC 419A and PNIC 419B.
  • Depending on the status of data path process 460B, local control plane 450B may determine a diagnostic code for data path process 460B. Furthermore, local control plane 450B may encapsulate the diagnostic code in a BFD control packet and copy the BFD control packet onto the interfaces of both underlay control channel 171A and management control channel 171B.
  • Subsequently, local control plane 450A may detect the BFD control packet with the diagnostic code on the interface of either underlay control channel 171A or management control channel 171B, and analyze the diagnostic code. If the code is, for example, a diagnostic code “6”, then local control plane 450A may determine that a concatenated path to host 106B is down, and thus host 106B is temporarily unavailable. Subsequently, local control plane 450A may generate a message to initiate failover of services configured on host 106B onto host 106A.
  • Similarly, depending on the status of data path process 460A, local control plane 450A may determine a diagnostic code for data path process 460A. Furthermore, local control plane 450A may encapsulate the diagnostic code in a BFD control packet and copy the BFD control packet onto the interfaces of both underlay channel 171A and management channel 171B.
  • Subsequently, local control plane 450B may detect the BFD control packet with the diagnostic code on the interface of underlay channel 171A or management channel 171B and analyze the diagnostic code. If the code is, for example, a diagnostic code “6”, then local control plane 450B may determine that a concatenated path to host 106A is down, and thus host 106A is temporarily unavailable. Subsequently, local control plane 450B may generate a message to initiate failover of services configured on host 106A onto host 106B.
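As a hedged sketch of the packet construction described above, the mandatory section of a BFD control packet (following the RFC 5880 layout) carrying a diagnostic code might be packed as follows; the discriminators and timer values are placeholders, not values from this disclosure.

```python
import struct

# A sketch of packing the mandatory section of a BFD control packet
# (RFC 5880 layout) with a diagnostic code, as a local control plane
# might before copying it onto both channel interfaces.

def pack_bfd(diag: int, state: int = 3, detect_mult: int = 3) -> bytes:
    version = 1
    length = 24                                # mandatory section only, no auth
    byte0 = (version << 5) | (diag & 0x1F)     # 3-bit version, 5-bit diagnostic
    byte1 = (state & 0x03) << 6                # 2-bit state, all flags cleared
    return struct.pack(
        "!BBBBIIIII",
        byte0, byte1, detect_mult, length,
        1, 2,                                  # my/your discriminators (placeholders)
        300000, 300000, 0)                     # tx/rx/echo intervals, microseconds

pkt = pack_bfd(diag=6)                         # code 6: concatenated path down
assert len(pkt) == 24 and pkt[0] & 0x1F == 6
```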
  • 4. Example Workflow
  • In an embodiment, a two-channel-based HA approach is implemented between any two entities of an HA cluster. For example, if a cluster includes two edge service gateways, then local control planes implemented in the edge service gateways cooperate with each other to determine whether any of the two edge service gateways has failed. The local control planes may, for example, exchange BFD control packets via an underlay control channel and a management control channel. The BFD control packets exchanged via the channels may be generated based on different information available to the local control planes. Based on the exchanged information, the local control planes determine whether the edge service gateways are down.
  • In an embodiment, hosts hosting nodes of an HA cluster may establish their underlay control channels and management control channels at a VNIC level or at a PNIC level.
  • FIG. 4A is an example flow chart for implementing a two-channel high-availability approach. In the depicted example, it is assumed that a cluster comprises two edge service gateways, each hosted on a different host, and that an underlay control channel and a management control channel are established between either corresponding VNICs or PNICs. The channels are configured to facilitate BFD-compliant communications.
  • In step 402, a local control plane executing on a first node initiates and stores, in memory of the local control plane, for a neighbor node the following: a first state (associated with an underlay control channel) and a second state (associated with a management control channel).
  • In step 404, the local control plane determines whether a BFD control packet has been received on either an interface of the underlay control channel or an interface of the management control channel. An example of the BFD control packet is described in FIG. 5.
  • In step 406, the local control plane tests if such a BFD control packet has been received. If it has been, then the local control plane proceeds to performing step 408; otherwise, the local control plane proceeds to performing step 430.
  • In step 408, the local control plane determines whether the received BFD control packet was received via the underlay control channel. If it was, then the local control plane proceeds to performing step 410; otherwise, the local control plane proceeds to performing step 412.
  • In step 410, the local control plane parses a mandatory section of the detected BFD control packet to determine whether any of certain diagnostic codes are set in the mandatory section. The certain diagnostic codes may include a selected subset of diagnostic codes 0-31 described in FIG. 5. The subset may include, for example, a diagnostic code “6” that indicates that a concatenated path to the second node is down, and a diagnostic code “7” that indicates that the second node entered a maintenance mode, and thus became unavailable. The subset may also include other diagnostic codes described in FIG. 5.
  • Also, in this step, the local control plane updates the first state using the diagnostic code.
  • In step 414, the local control plane determines whether both the first state and the second state indicate that the neighbor node is unreachable.
  • If the local control plane determines, in step 416, that the neighboring node is unreachable, then the local control plane proceeds to performing step 418; otherwise, the local control plane proceeds to performing step 420.
  • In step 418, the local control plane initiates a switchover of services from the neighbor node onto the node on which the local control plane is executed. Then, the local control plane proceeds to step 404.
  • In step 420, the local control plane determines whether any of the first state and the second state indicates that the neighboring node is down.
  • If the local control plane determined, in step 422, that the neighboring node is down, then the local control plane proceeds to step 418; otherwise, the local control plane proceeds to performing step 404.
  • In step 412, the local control plane parses the detected BFD control packet, extracts a diagnostic code from the packet, and uses the diagnostic code to update the second state. Then the local control plane proceeds to performing step 414, described above.
  • In step 430, the local control plane proceeds to performing step 450, described in FIG. 4B.
  • FIG. 4B is an example flow chart for implementing a two-channel high-availability approach.
  • In step 450, the local control plane determines if a timeout for waiting for a BFD control message from the underlay control channel has expired.
  • If the local control plane determined, in step 452, that the timeout has expired, then the local control plane proceeds to performing step 454; otherwise, the local control plane proceeds to performing step 456.
  • In step 454, the local control plane sets the first state to indicate that the neighboring node is unreachable.
  • In step 462, the local control plane proceeds to performing step 414.
  • In step 456, the local control plane determines if a timeout for waiting for a BFD control message from the management control channel has expired.
  • If the local control plane determined, in step 458, that the timeout has expired, then the local control plane proceeds to performing step 460, in which the local control plane sets the second state to indicate that the neighboring node is unreachable; otherwise, the local control plane proceeds to performing step 464.
  • In step 464, the local control plane proceeds to performing step 404, described in FIG. 4A.
  • The process described in FIG. 4A-4B may be repeated for each type of diagnostic code that the local control plane is implemented to consider. The process may also be repeated for each node in a cluster with which the first node is able to establish both an underlay control channel and a management control channel.
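The steps of FIG. 4A-4B can be compressed into a single per-event sketch; the function name, state labels, and event tuples are assumptions for illustration, not the patent's implementation.

```python
# A compressed sketch of the FIG. 4A-4B loop as a single per-event step.

def step(states, event):
    """Apply one event to the two per-channel states and return the action.

    states maps "underlay"/"management" to the last known state of the
    neighbor node; event is ("packet", channel, state_from_diag_code)
    or ("timeout", channel).
    """
    if event[0] == "packet":
        _, channel, state_from_diag = event
        states[channel] = state_from_diag      # steps 410/412: update the state
    else:
        _, channel = event
        states[channel] = "unreachable"        # steps 454/460: timeout expired
    if all(s == "unreachable" for s in states.values()):
        return "switchover"                    # steps 414-418: both unreachable
    if "down" in states.values():
        return "switchover"                    # steps 420-422: either channel says down
    return "continue"                          # back to step 404

states = {"underlay": "up", "management": "up"}
assert step(states, ("timeout", "underlay")) == "continue"      # one channel silent
assert step(states, ("timeout", "management")) == "switchover"  # both silent
```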
  • 5. Example Diagnostic Codes Used in Two-Channel-Based HA Approach
  • FIG. 5 is a block diagram depicting an example mandatory section 520 of a generic BFD control packet 500. Generic BFD control packet 500 has a mandatory section 520 and an optional authentication section 530. If authentication section 530 is present, then its format depends on the type of authentication in use. Authentication section 530 is outside the scope of this disclosure.
  • Mandatory section 520 of BFD control packet 500 includes a version field 502, a diagnostic field 504, a state field 506, a P-F-C-A-D-M flag field 508, a detection time multiplier field 510, a BFD control packet length field 512, and other fields. Diagnostic field 504 is relevant for this disclosure, and therefore it is described in detail below.
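A minimal parser for mandatory section 520 might look like the sketch below. It assumes the standard RFC 5880 field layout (3-bit version, 5-bit diagnostic, 2-bit state, 6 flag bits, detection time multiplier, length, then the discriminator and interval fields); the function name and returned dictionary are hypothetical, not from the disclosure.

```python
import struct

def parse_bfd_mandatory_section(packet: bytes) -> dict:
    """Parse the 24-byte mandatory section of a BFD control packet,
    assuming the RFC 5880 layout of fields 502-512."""
    (vers_diag, sta_flags, detect_mult, length,
     my_disc, your_disc, min_tx, min_rx, min_echo_rx) = struct.unpack(
        "!BBBBIIIII", packet[:24])
    return {
        "version": vers_diag >> 5,      # version field 502 (3 bits)
        "diag": vers_diag & 0x1F,       # diagnostic field 504 (5 bits)
        "state": sta_flags >> 6,        # state field 506 (2 bits)
        "flags": sta_flags & 0x3F,      # P-F-C-A-D-M flag field 508
        "detect_mult": detect_mult,     # detection time multiplier field 510
        "length": length,               # BFD control packet length field 512
        "my_discriminator": my_disc,
        "your_discriminator": your_disc,
    }
```

The bit shifts mirror the packed layout: version and diagnostic share the first octet, state and the six flags share the second.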
  • Diagnostic field 504 includes five bits, and the bits are used to encode diagnostic codes. In an embodiment, the diagnostic codes include: 0—no diagnostic, 1—control detection time expired, 2—echo function failed, 3—neighbor signaled session down, 4—forwarding plane reset, 5—path down, 6—concatenated path down, 7—administratively down, 8—reverse concatenated path down, 9-31—reserved for future use.
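The diagnostic codes listed above could be captured as an enumeration; the class and member names below are illustrative, while the numeric values follow the listing in the text.

```python
from enum import IntEnum

class BfdDiag(IntEnum):
    """Five-bit diagnostic codes carried in field 504; 9-31 are reserved."""
    NO_DIAGNOSTIC = 0
    CONTROL_DETECTION_TIME_EXPIRED = 1
    ECHO_FUNCTION_FAILED = 2
    NEIGHBOR_SIGNALED_SESSION_DOWN = 3
    FORWARDING_PLANE_RESET = 4
    PATH_DOWN = 5
    CONCATENATED_PATH_DOWN = 6
    ADMINISTRATIVELY_DOWN = 7
    REVERSE_CONCATENATED_PATH_DOWN = 8
```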
  • In an embodiment, a diagnostic code “6” and a diagnostic code “7” are used in a two-channel-based HA approach. A local control plane, or an entity detecting a problem with a node, sends a BFD control packet with a diagnostic code “6” set if northbound routing goes down, and thus a concatenated path to, or via, the node is down. A local control plane, or an entity detecting a problem with the node, sends a BFD control packet with a diagnostic code “7” set if the node enters, for example, a maintenance mode and is taken down by an administrator.
  • In an embodiment, other diagnostic codes, such as some codes of the reserved 9-31 codes, may be used in implementing a two-channel-based HA approach.
  • Diagnostic codes included in BFD control packets may be used to determine the state of a node. In a mapping 550, a diagnostic code “0” indicates that a node is operational, while diagnostic codes “6”-“7” indicate that a node is down. If no BFD control message is received on either channel before a timeout, then the node is considered unreachable.
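Mapping 550 can be sketched as a small function. The function name is hypothetical, and the handling of codes other than 0, 6, and 7 (which mapping 550 does not specify) is an assumption of this sketch.

```python
def node_state_from_diag(diag_code, timed_out):
    """Sketch of mapping 550: derive a node's per-channel state from the
    latest diagnostic code, or from a timeout when no packet arrived."""
    if timed_out:
        return "unreachable"   # no BFD control message before the timeout
    if diag_code in (6, 7):    # concatenated path down / administratively down
        return "down"
    if diag_code == 0:         # no diagnostic
        return "up"
    # Codes 1-5 and 8 are not covered by mapping 550; treating them as
    # "down" is a conservative assumption made only in this sketch.
    return "down"
```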
  • 6. Improvements Provided by Certain Embodiments
  • In an embodiment, an approach provides mechanisms for two-channel-based HA in a cluster of nodes for detecting node failures efficiently and reliably. The approach reduces, if not eliminates, false detections of node failures and unnecessary failovers in the cluster.
  • In an embodiment, two-channel-based HA relies on communications exchanged via two channels established between hosts hosting nodes of a cluster. The two channels provide support for the BFD-based communications. Local control planes implemented in the hosts hosting the nodes monitor BFD control packets exchanged via both channels. The BFD control packets may include diagnostic codes that indicate status or problems with the nodes. Based on the diagnostic codes, the local control planes may determine whether failover is necessary.
  • 7. Implementation Mechanisms
  • The present approach may be implemented using a computing system comprising one or more processors and memory. The one or more processors and memory may be provided by one or more hardware machines. A hardware machine includes a communications bus or other communication mechanism for addressing main memory and for transferring data between and among the various components of the hardware machine. The hardware machine also includes one or more processors coupled with the bus for processing information. The processor may be a microprocessor, a system on a chip (SoC), or another type of hardware processor.
  • Main memory may be a random-access memory (RAM) or other dynamic storage device. It may be coupled to a communications bus and used for storing information and software instructions to be executed by a processor. Main memory may also be used for storing temporary variables or other intermediate information during execution of software instructions to be executed by one or more processors.
  • 8. General Considerations
  • Although some of various drawings may illustrate a number of logical stages in a particular order, stages that are not order dependent may be reordered and other stages may be combined or broken out. While some reordering or other groupings may be specifically mentioned, others will be obvious to those of ordinary skill in the art, so the ordering and groupings presented herein are not an exhaustive list of alternatives. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software or any combination thereof.
  • The foregoing description, for purposes of explanation, has been presented with reference to specific embodiments. However, the illustrative embodiments above are not intended to be exhaustive or to limit the scope of the claims to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen to best explain the principles underlying the claims and their practical applications, to thereby enable others skilled in the art to best use the embodiments with various modifications as are suited to the uses contemplated.
  • Any definitions set forth herein for terms contained in the claims may govern the meaning of such terms as used in the claims. No limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of the claim in any way. The specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
  • As used herein the terms “include” and “comprise” (and variations of those terms, such as “including,” “includes,” “comprising,” “comprises,” “comprised” and the like) are intended to be inclusive and are not intended to exclude further features, components, integers or steps.
  • References in this document to “an embodiment” indicate that the embodiment described or illustrated may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described or illustrated in connection with an embodiment, it is believed to be within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly indicated.
  • Various features of the disclosure have been described using process steps. The functionality/processing of a given process step could potentially be performed in different ways and by different systems or system modules. Furthermore, a given process step could be divided into multiple steps and/or multiple steps could be combined into a single step. Furthermore, the order of the steps can be changed without departing from the scope of the present disclosure.
  • It will be understood that the embodiments disclosed and defined in this specification extend to alternative combinations of the individual features and components mentioned or evident from the text or drawings. These different combinations constitute various alternative aspects of the embodiments.

Claims (20)

What is claimed is:
1. A method for providing two-channel-based high-availability in a node cluster, the method comprising:
initiating, by a local control plane executing on a first node, a first state for an underlay control channel and a second state for a management control channel;
detecting, by the local control plane, a bidirectional forwarding detection (“BFD”) control packet from a second node;
determining, by the local control plane, whether the BFD control packet has been received from the underlay control channel;
in response to determining that the BFD control packet was received from the underlay control channel:
parsing, by the local control plane, the BFD control packet to extract a first diagnostic code;
updating the first state with the first diagnostic code;
determining whether both the first state and the second state indicate that the second node is unreachable; and
in response to determining that both the first state and the second state indicate that the second node is unreachable, initiating a switchover of services configured on the second node.
2. The method of claim 1, further comprising:
in response to determining that the BFD control packet was not received from the underlay control channel:
determining, by the local control plane, whether the BFD control packet has been received from the management control channel;
in response to determining that the BFD control packet was received from the management control channel:
parsing, by the local control plane, the BFD control packet to extract a second diagnostic code;
updating the second state with the second diagnostic code;
determining whether both the first state and the second state indicate that the second node is unreachable; and
in response to determining that both the first state and the second state indicate that the second node is unreachable, initiating a switchover of services configured on the second node.
3. The method of claim 2, further comprising:
in response to determining that both the first state and the second state do not indicate that the second node is unreachable:
determining whether any of the first state and the second state indicates that the second node is down;
in response to determining that any of the first state and the second state indicates that the second node is down, initiating a switchover of services configured on the second node.
4. The method of claim 3, further comprising:
in response to determining that no BFD control packet has been received from the underlay control channel before a timeout, updating the first state to indicate that the second node is unreachable.
5. The method of claim 4, further comprising:
in response to determining that no BFD control packet has been received from the management control channel before a timeout, updating the second state to indicate that the second node is unreachable.
6. The method of claim 5, wherein the underlay control channel is established between a first virtual network interface card (“VNIC”) configured on the first node and a first VNIC configured on the second node; and wherein the management control channel is established between a second VNIC configured on the first node and a second VNIC configured on the second node.
7. The method of claim 6, wherein the underlay control channel is established between a first physical network interface card (“PNIC”) configured on the first node and a first PNIC configured on the second node; and wherein the management control channel is established between a second PNIC configured on the first node and a second PNIC configured on the second node.
8. One or more non-transitory computer-readable storage media storing one or more computer instructions which, when executed by one or more processors, cause the one or more processors to provide two-channel-based high-availability in a node cluster, and to perform:
initiating, by a local control plane executing on a first node, a first state for an underlay control channel and a second state for a management control channel;
detecting, by the local control plane, a bidirectional forwarding detection (“BFD”) control packet from a second node;
determining, by the local control plane, whether the BFD control packet has been received from the underlay control channel;
in response to determining that the BFD control packet was received from the underlay control channel:
parsing, by the local control plane, the BFD control packet to extract a first diagnostic code;
updating the first state with the first diagnostic code;
determining whether both the first state and the second state indicate that the second node is unreachable; and
in response to determining that both the first state and the second state indicate that the second node is unreachable, initiating a switchover of services configured on the second node.
9. The one or more non-transitory computer-readable storage media of claim 8, comprising additional instructions which, when executed by the one or more processors, cause the one or more processors to perform:
in response to determining that the BFD control packet was not received from the underlay control channel:
determining, by the local control plane, whether the BFD control packet has been received from the management control channel;
in response to determining that the BFD control packet was received from the management control channel:
parsing, by the local control plane, the BFD control packet to extract a second diagnostic code;
updating the second state with the second diagnostic code;
determining whether both the first state and the second state indicate that the second node is unreachable; and
in response to determining that both the first state and the second state indicate that the second node is unreachable, initiating a switchover of services configured on the second node.
10. The one or more non-transitory computer-readable storage media of claim 9, comprising additional instructions which, when executed by the one or more processors, cause the one or more processors to perform:
in response to determining that both the first state and the second state do not indicate that the second node is unreachable:
determining whether any of the first state and the second state indicates that the second node is down;
in response to determining that any of the first state and the second state indicates that the second node is down, initiating a switchover of services configured on the second node.
11. The one or more non-transitory computer-readable storage media of claim 10, comprising additional instructions which, when executed by the one or more processors, cause the one or more processors to perform:
in response to determining that no BFD control packet has been received from the underlay control channel before a timeout, updating the first state to indicate that the second node is unreachable.
12. The one or more non-transitory computer-readable storage media of claim 11, comprising additional instructions which, when executed by the one or more processors, cause the one or more processors to perform:
in response to determining that no BFD control packet has been received from the management control channel before a timeout, updating the second state to indicate that the second node is unreachable.
13. The one or more non-transitory computer-readable storage media of claim 12, wherein the underlay control channel is established between a first virtual network interface card (“VNIC”) configured on the first node and a first VNIC configured on the second node; and wherein the management control channel is established between a second VNIC configured on the first node and a second VNIC configured on the second node.
14. The one or more non-transitory computer-readable storage media of claim 13, wherein the underlay control channel is established between a first physical network interface card (“PNIC”) configured on the first node and a first PNIC configured on the second node; and wherein the management control channel is established between a second PNIC configured on the first node and a second PNIC configured on the second node.
15. A local control plane implemented in a host computer and configured to provide two-channel-based high availability in a node cluster, the local control plane comprising:
one or more processors;
one or more memory units; and
one or more non-transitory computer-readable storage media storing one or more computer instructions which, when executed by the one or more processors, cause the one or more processors to perform:
initiating, by a local control plane executing on a first node, a first state for an underlay control channel and a second state for a management control channel;
detecting, by the local control plane, a bidirectional forwarding detection (“BFD”) control packet from a second node;
determining, by the local control plane, whether the BFD control packet has been received from the underlay control channel;
in response to determining that the BFD control packet was received from the underlay control channel:
parsing, by the local control plane, the BFD control packet to extract a first diagnostic code;
updating the first state with the first diagnostic code;
determining whether both the first state and the second state indicate that the second node is unreachable; and
in response to determining that both the first state and the second state indicate that the second node is unreachable, initiating a switchover of services configured on the second node.
16. The local control plane of claim 15, storing additional instructions which, when executed by the one or more processors, cause the one or more processors to perform:
in response to determining that the BFD control packet was not received from the underlay control channel:
determining, by the local control plane, whether the BFD control packet has been received from the management control channel;
in response to determining that the BFD control packet was received from the management control channel:
parsing, by the local control plane, the BFD control packet to extract a second diagnostic code;
updating the second state with the second diagnostic code;
determining whether both the first state and the second state indicate that the second node is unreachable; and
in response to determining that both the first state and the second state indicate that the second node is unreachable, initiating a switchover of services configured on the second node.
17. The local control plane of claim 16, storing additional instructions which, when executed by the one or more processors, cause the one or more processors to perform:
in response to determining that both the first state and the second state do not indicate that the second node is unreachable:
determining whether any of the first state and the second state indicates that the second node is down;
in response to determining that any of the first state and the second state indicates that the second node is down, initiating a switchover of services configured on the second node.
18. The local control plane of claim 17, storing additional instructions which, when executed by the one or more processors, cause the one or more processors to perform:
in response to determining that no BFD control packet has been received from the underlay control channel before a timeout, updating the first state to indicate that the second node is unreachable.
19. The local control plane of claim 18, storing additional instructions which, when executed by the one or more processors, cause the one or more processors to perform:
in response to determining that no BFD control packet has been received from the management control channel before a timeout, updating the second state to indicate that the second node is unreachable.
20. The local control plane of claim 19, wherein the underlay control channel is established between a first virtual network interface card (“VNIC”) configured on the first node and a first VNIC configured on the second node; and wherein the management control channel is established between a second VNIC configured on the first node and a second VNIC configured on the second node.