WO2021031070A1 - Method and system for isolating a leaf switch in a network - Google Patents


Info

Publication number
WO2021031070A1
Authority
WO
WIPO (PCT)
Prior art keywords
leaf switch
server
switch
notification
leaf
Application number
PCT/CN2019/101379
Other languages
French (fr)
Inventor
Haiyang ZHENG
Xiangning YU
Yongcan WANG
Yongfeng Liu
Guohui Wang
Haiyong Wang
Original Assignee
Alibaba Group Holding Limited
Application filed by Alibaba Group Holding Limited filed Critical Alibaba Group Holding Limited
Priority to CN201980099305.3A priority Critical patent/CN114223182B/en
Priority to PCT/CN2019/101379 priority patent/WO2021031070A1/en
Publication of WO2021031070A1 publication Critical patent/WO2021031070A1/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/12 - Discovery or management of network topologies

Definitions

  • FIG. 1 illustrates a schematic diagram of a network 100. While network 100 is envisioned to be a Clos network, it is appreciated that any network having at least a three-layer architecture can be used.
  • Clos network 100 is a three-layer architecture, including a spine layer 110, a leaf layer 120, and a server layer 130.
  • Spine layer 110 is the backbone of Clos network 100 and is responsible for interconnecting all leaf switches in leaf layer 120, and can include a plurality of spine switches (e.g., spine switches 112, 114, 116, and 118) .
  • Leaf layer 120 can provide access to devices such as servers, and include a plurality of leaf switches (e.g., leaf switches 122, 124, 126, and 128) .
  • server layer 130 can include a plurality of servers (e.g., servers 132, 134, 136, and 138) .
  • the plurality of leaf switches can be connected to the plurality of spine switches in a full-mesh topology.
  • For example, each leaf switch (e.g., leaf switch 122) can establish a plurality of links with each spine switch (e.g., spine switches 112, 114, 116, and 118).
  • a link between the leaf switch (e.g., 122) and the spine switches can be randomly chosen among the plurality of links, and therefore, traffic load between leaf layer 120 and spine layer 110 can be evenly distributed.
  • These links between the leaf switch and the spine switches can also be referred to as L3 links.
  • Each leaf switch (e.g., leaf switch 122) can also be connected to at least one server (e.g., server 132 and 134) in server layer 130.
  • each server can be connected to at least two leaf switches (e.g., leaf switches 122 and 124) to ensure connectivity.
  • server 132 can establish a first link with leaf switch 122 and a second link with leaf switch 124.
  • the first link and second link between a server and leaf switches can be referred to as L2 links.
  • Under this three-layer architecture of Clos network 100, if oversubscription of Clos network 100 occurs, a process for expanding capacity of Clos network 100 can be straightforward. For example, an additional spine switch can be added and linked to every leaf switch, providing additional interlayer bandwidth between spine layer 110 and leaf layer 120 to reduce the oversubscription.
  • a new leaf switch can be added by simply connecting the new leaf switch to every spine switch.
  • the isolation can cause undesired service disruption.
  • an L2 link between a leaf switch and a server can be shut down from a side of the leaf switch without the server being aware of the shutdown. Therefore, the server may continuously send traffic towards the leaf switch until the server detects the shutdown and switches the traffic over onto an alternative L2 link. As a result, the traffic sent to the leaf switch before the switchover will never be processed and will have to be dropped, causing the undesired service disruption.
  • Embodiments of the disclosure further provide methods and systems for isolating a leaf switch in a network, while minimizing traffic disruptions.
  • FIG. 2 illustrates a schematic diagram of an exemplary network 200, according to some embodiments of the disclosure.
  • network 200 can include spine switches 212 and 214, leaf switches 222 and 224, and a server 232.
  • Each of leaf switches 222 and 224 is connected to both spine switches 212 and 214.
  • server 232 is connected to both leaf switches 222 and 224.
  • the connection between server 232 and leaf switch 222 can be referred to as a first L2 link
  • the connection between server 232 and leaf switch 224 can be referred to as a second L2 link.
  • leaf switch 222 can receive a request for isolating leaf switch 222 from network 200.
  • the request can be made by, for example, an administrator of network 200 for the purpose of maintenance, software upgrade, or the like. It is appreciated that the request can also be made by network 200 itself. For example, when network 200 detects a malfunction of leaf switch 222, network 200 can automatically request an isolation of leaf switch 222 before it causes further service disruption.
  • When leaf switch 222 receives the request for isolation, leaf switch 222 can determine whether the second L2 link associated with leaf switch 224 has enough bandwidth to process traffic associated with leaf switch 222.
  • If the second L2 link does not have enough bandwidth, leaf switch 222 can indicate that the isolation cannot be performed. For example, leaf switch 222 can generate a message informing the administrator of network 200 that the isolation cannot be performed for the moment. It is appreciated that bandwidth on the second L2 link can be released due to a drop in traffic on the second L2 link. For example, the second L2 link may not have enough bandwidth to process the traffic associated with leaf switch 222 at a first moment but may have the appropriate bandwidth at a second moment. Therefore, in some embodiments, the message generated by leaf switch 222 can further indicate another time for performing the isolation.
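The bandwidth pre-check described above can be sketched as a one-line admission test (the function name and parameters are illustrative assumptions, not taken from the disclosure):

```python
def can_absorb_traffic(alt_link_capacity_bps: float,
                       alt_link_load_bps: float,
                       traffic_to_shift_bps: float) -> bool:
    """Return True if the alternative L2 link (e.g., the link via leaf
    switch 224) has enough spare bandwidth to absorb the traffic that is
    currently carried by the leaf switch being isolated."""
    return alt_link_load_bps + traffic_to_shift_bps <= alt_link_capacity_bps

# A 10 Gbps link carrying 4 Gbps can absorb 5 Gbps more; one carrying
# 7 Gbps cannot, so the isolation request would be rejected for the moment.
ok = can_absorb_traffic(10e9, 4e9, 5e9)       # True
too_busy = can_absorb_traffic(10e9, 7e9, 5e9)  # False
```

When the check fails at a first moment, it may succeed at a later moment once load on the alternative link drops, which is why the rejection message can suggest another time for the isolation.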
  • If the second L2 link has enough bandwidth, leaf switch 222 can continue to process the request for isolation. For example, in response to the request for isolating leaf switch 222, leaf switch 222 can send a notification 202 to server 232. Notification 202 can be used to notify server 232 that egress traffic to leaf switch 222 should be stopped. In some embodiments, notification 202 can include an identification (e.g., a media access control (MAC) address) of leaf switch 222. After receiving notification 202, server 232 can stop sending egress traffic to leaf switch 222. The egress traffic that was supposed to be sent to leaf switch 222 can now be sent to another leaf switch (e.g., leaf switch 224) and, therefore, can eventually reach the spine layer.
  • ingress traffic towards server 232 can be continuously sent by leaf switch 222.
  • server 232 can send an acknowledgement 204 of notification 202 to leaf switch 222.
  • Acknowledgement 204 can inform leaf switch 222 that server 232 is aware of the request.
  • acknowledgement 204 can further inform leaf switch 222 that the egress traffic has been sent to another leaf switch (e.g., leaf switch 224) .
  • a Link Aggregation Control Protocol (LACP) can be used to manage L2 links and communication between a leaf switch and a server.
  • the LACP can allow a network device (e.g., leaf switch 222) to negotiate an automatic bundling of links by sending LACP data units (LACPDU) to a peer device (e.g., server 232) , which also implements the LACP.
  • notification 202 and acknowledgement 204 can be transmitted between leaf switch 222 and server 232 using the LACPDU.
  • leaf switch 222 can determine whether acknowledgement 204 is received from server 232. In some embodiments, leaf switch 222 can further determine whether acknowledgement 204 is received from server 232 within a given period of time (e.g., 3 seconds) . Due to a variety of reasons, acknowledgement 204 may not be received by leaf switch 222 within the given period of time. For example, these reasons can include at least one of leaf switch 222 failing to send out notification 202, server 232 failing to receive notification 202, server 232 failing to send out acknowledgement 204, leaf switch 222 failing to receive acknowledgement 204, and the like.
  • If acknowledgement 204 is not received within the given period of time, leaf switch 222 can resend notification 202 to server 232 to further notify server 232 of the isolation request.
  • In response to the determination that acknowledgement 204 is received, leaf switch 222 can stop ingress traffic towards server 232. Therefore, before the determination that acknowledgement 204 is received, server 232 has already stopped egress traffic towards leaf switch 222, and after the determination, leaf switch 222 can stop the ingress traffic towards server 232. In other words, after acknowledgement 204 is received, traffic (including the egress traffic and the ingress traffic of server 232) between leaf switch 222 and server 232 can be fully stopped.
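The sequential termination of the two traffic directions can be sketched as a small state machine on the leaf-switch side (the class, state names, and methods are illustrative assumptions):

```python
class LeafIsolationHandshake:
    """Sketch of the leaf-side isolation handshake: send the notification,
    keep ingress traffic flowing until the acknowledgement arrives, and only
    then stop ingress traffic so the two directions terminate sequentially."""

    def __init__(self) -> None:
        self.state = "FORWARDING"   # normal bidirectional traffic
        self.ingress_enabled = True

    def request_isolation(self) -> None:
        # Send notification 202: the server should stop its egress traffic.
        self.state = "AWAITING_ACK"

    def on_acknowledgement(self) -> None:
        # Acknowledgement 204 received: the server has stopped egress
        # traffic, so the leaf switch can now stop ingress traffic too.
        if self.state == "AWAITING_ACK":
            self.ingress_enabled = False
            self.state = "TRAFFIC_STOPPED"

handshake = LeafIsolationHandshake()
handshake.request_isolation()
# Ingress traffic continues while the acknowledgement is pending.
handshake.on_acknowledgement()
# Now all traffic between the leaf switch and the server is stopped.
```

Note that reaching "TRAFFIC_STOPPED" does not yet disconnect the link; the disclosure keeps the link up briefly so on-the-fly traffic can drain.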
  • At this point, however, leaf switch 222 is not yet disconnected from server 232. If leaf switch 222 were disconnected from server 232 immediately after acknowledgement 204 is received, the on-the-fly traffic might not be processed completely.
  • After a period of time, leaf switch 222 can be disconnected from server 232. The period of time is configurable and can be set to a few milliseconds. It is appreciated that, though leaf switch 222 has been isolated from server 232, traffic between spine switches of the spine layer and server 232 can be communicated through, for example, leaf switch 224 after leaf switch 222 is disconnected.
  • leaf switch 222 can also use a last packet of the traffic towards server 232 as a confirmation. In response to receiving the confirmation, server 232 can further confirm that all traffic has been processed and that it is safe to disconnect leaf switch 222.
  • When leaf switch 222 is back online, an L2 link can be re-established between leaf switch 222 and server 232.
  • notification 202 and acknowledgement 204 can be used to coordinate sequential terminations of the egress traffic and the ingress traffic of server 232 during the isolation of leaf switch 222, such that on-the-fly traffic between leaf switch 222 and server 232 can be drained before the isolation to avoid traffic disruption.
  • the LACP can be used to manage links and communications between a leaf switch and a server.
  • the LACP can be further used to perform the sequential terminations of the egress traffic and the ingress traffic of the server.
  • an LACP port state field of the LACPDU can be used as a synchronization field to coordinate a leaf switch and a server during an isolation.
  • the LACP port state field can include at least three bits, each of which is a flag indicating a particular status of a sender’s port. Table 1 below shows exemplary meanings of three bits of the LACP port state field, including “Synchronization,” “Collecting,” and “Distributing.”

    Table 1
    Bit             | Value “1”                                       | Value “0”
    Synchronization | sender is in synchronization with the receiver  | sender and receiver are out of synchronization
    Collecting      | sender’s port is collecting (receiving) traffic | sender’s port is not collecting traffic
    Distributing    | sender’s port is distributing (sending) traffic | sender’s port is not distributing traffic
  • Bit “Synchronization” can be used to indicate whether a sender device is in synchronization with a receiver device. As shown in above Table 1, if bit “Synchronization” is “0, ” it indicates that the receiver and the sender are out of synchronization and that the receiver device can re-synchronize a number of physical ports of the receiver and the sender. The re-synchronization can also be referred to as “flapping. ”
  • After being synchronized, the physical ports can be aggregated to form a single high-bandwidth data path to provide better connectivity.
  • the aggregated physical ports can also be referred to as a link aggregation group (LAG) .
  • When leaf switch 222 uses the LACPDU to send notification 202 to server 232, leaf switch 222 can also set the LACP port state field to “101.” Therefore, after receiving notification 202, server 232 can read the port states of leaf switch 222, stop egress traffic towards the leaf switch, and send an acknowledgement (e.g., acknowledgement 204 of FIG. 2).
  • the three bits of the port state field of server 232 can be set to “110, ” indicating that server 232 is still receiving traffic from leaf switch 222 and expects a confirmation of no traffic transmission from leaf switch 222. Because server 232 continuously processes the on-the-fly traffic on a link between leaf switch 222 and server 232, the on-the-fly traffic can be drained from the link, such that the traffic disruption can be avoided or minimized when leaf switch 222 is isolated.
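The multiplexing of the three port state bits can be sketched in a short example. The bit ordering “Synchronization, Collecting, Distributing” is an assumption inferred from the “101,” “110,” and “111” values used above:

```python
def encode_port_state(synchronization: bool,
                      collecting: bool,
                      distributing: bool) -> str:
    """Pack the three LACP port state flags into the bit string used in
    this disclosure (assumed ordering: Synchronization, Collecting,
    Distributing)."""
    return "".join("1" if flag else "0"
                   for flag in (synchronization, collecting, distributing))

# Leaf switch sending the isolation notification: in sync, no longer
# collecting (the server should stop egress traffic), still distributing
# ingress traffic towards the server.
leaf_notification = encode_port_state(True, False, True)   # "101"

# Server acknowledging: in sync, still collecting the on-the-fly traffic,
# no longer distributing towards the leaf switch being isolated.
server_ack = encode_port_state(True, True, False)          # "110"

# Normal bidirectional communication.
bidirectional = encode_port_state(True, True, True)        # "111"
```

Decoding on the receiver side is the mirror operation: each peer reads the three flags from the incoming LACPDU and adjusts its own collecting/distributing behavior accordingly.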
  • leaf switch 222 can further confirm no traffic is being transmitted on the link between leaf switch 222 and server 232. For example, a last packet of the on-the-fly traffic sent by leaf switch 222 can be used as the confirmation.
  • FIG. 3 illustrates a schematic diagram of a network 300 after the isolation, according to some embodiments of the disclosure.
  • leaf switch 222 has been disconnected from server 232 after the acknowledgement is received at leaf switch 222. It is appreciated that traffic sent by server 232 can still reach spine switches 212 and 214 via leaf switch 224. Therefore, users of network 300 would not perceive the isolation of leaf switch 222.
  • When the three bits of the synchronization field of both leaf switch 222 and server 232 are “111,” it can indicate that leaf switch 222 and server 232 are performing bidirectional communication.
  • the above multiplexing of the LACP port state field of the LACPDU can be activated when a leaf switch receives a request for isolating the leaf switch from the network. It is also appreciated that the synchronization field can be transmitted between the leaf switch and the server using three different bits of the LACP port state field of the LACPDU. In some embodiments, the synchronization field can be transmitted using a data unit other than the LACPDU.
  • FIG. 4 is a flowchart of a method 400 for isolating a first leaf switch in a network, according to some embodiments of the disclosure.
  • the network can further include a second leaf switch and a spine switch. Both the first and second leaf switches can be connected to a server (e.g., server 134 of server layer 130) , and the spine switch is connected to both the first and second leaf switches.
  • Method 400 can be executed by an electronic device.
  • the electronic device may include a memory storing a set of instructions and at least one processor to execute the set of instructions to cause the electronic device to perform method 400.
  • the electronic device may be a leaf switch (e.g., leaf switch 222 of FIGS. 2-3) of a leaf layer (e.g., leaf layer 120) .
  • method 400 may include the following steps.
  • In step 402, in response to receiving a request for isolating the first leaf switch in the network, the first leaf switch can send a notification to the server.
  • the notification can indicate that the server is to stop sending egress traffic to the first leaf switch.
  • the notification can further include a first port state of the first leaf switch.
  • the notification can be carried by a first Link Aggregation Control Protocol Data Unit (LACPDU) , and the first port state can be indicated by an LACP port state field of the first LACPDU.
  • the LACP port state field can include three bits to indicate a port state of a sender (e.g., the first leaf switch) .
  • a first LACP port state field of the first LACPDU can be “101, ” indicating that the port of the first leaf switch is still distributing traffic but not receiving traffic and that the first leaf switch is expecting the server to stop egress traffic towards the first leaf switch. Accordingly, after sending the notification, the first leaf switch can continuously send the ingress traffic to the server for processing.
  • In step 404, the first leaf switch can determine whether an acknowledgement to the notification is received from the server.
  • the server can stop sending the egress traffic to the first leaf switch and send back the acknowledgement to the first leaf switch.
  • the acknowledgement can be carried by a second LACPDU.
  • a second port state of the server can be indicated by an LACP port state field of the second LACPDU.
  • the LACP port state field of the second LACPDU can be “110,” indicating that the port of the server is still receiving traffic from the first leaf switch and expecting a confirmation of no traffic transmission from the first leaf switch.
  • the server is also connected to the second leaf switch.
  • the notification can further cause the egress traffic from the server to be sent to the spine switch via the second leaf switch.
  • In response to the determination that the acknowledgement is received, the first leaf switch can stop ingress traffic towards the server.
  • In response to the determination at step 404 that the acknowledgement is not received within a first period of time, the first leaf switch can send another notification to the server at step 402. It is appreciated that, if notifications have been sent a given number of times, the first leaf switch can generate an error code indicating that the isolation has failed.
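The retry behavior above can be sketched as a bounded retransmission loop. The attempt limit, the 3-second timeout, and the return values are illustrative assumptions:

```python
def isolate_with_retries(send_notification, wait_for_ack,
                         max_attempts: int = 3,
                         timeout_s: float = 3.0) -> str:
    """Resend the isolation notification until an acknowledgement arrives
    within timeout_s, or report failure after max_attempts attempts.

    send_notification: callable that transmits the notification LACPDU.
    wait_for_ack: callable taking a timeout in seconds; returns True if
        the acknowledgement arrived within that window.
    """
    for _ in range(max_attempts):
        send_notification()
        if wait_for_ack(timeout_s):
            return "ACKED"            # proceed to stop ingress traffic
    return "ISOLATION_FAILED"         # error code: isolation has failed

# Example: the acknowledgement arrives on the third attempt.
acks = iter([False, False, True])
result = isolate_with_retries(lambda: None, lambda t: next(acks))  # "ACKED"
```

A server that never acknowledges (e.g., because the notification or the acknowledgement is repeatedly lost) exhausts the attempts and yields the failure code instead.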
  • After the ingress traffic is stopped, the first leaf switch can disconnect itself from the server, and traffic between the spine switch and the server is communicated through the second leaf switch after the first leaf switch is disconnected. In some embodiments, the first leaf switch can disconnect itself from the server in a second period of time after the ingress traffic towards the server is stopped at the first leaf switch. In some embodiments, when a last packet of the ingress traffic is received by the server, the server can further confirm that the last packet has been processed. In response to this confirmation, the first leaf switch can disconnect itself from the server.
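The drain-then-disconnect ordering can be sketched as follows. The grace period value is an illustrative assumption (the disclosure notes only that the period is configurable, e.g., a few milliseconds), and the callables stand in for the switch's actual data-plane operations:

```python
import time

def drain_then_disconnect(stop_ingress, disconnect,
                          grace_period_s: float = 0.005) -> None:
    """Stop ingress traffic first, wait a configurable grace period so
    on-the-fly packets can still be processed by the server, and only
    then take down the L2 link to the server."""
    stop_ingress()
    time.sleep(grace_period_s)  # on-the-fly traffic drains during this window
    disconnect()

# Example: record the order of operations with simple callbacks.
events = []
drain_then_disconnect(lambda: events.append("ingress_stopped"),
                      lambda: events.append("link_down"),
                      grace_period_s=0.001)
# events == ["ingress_stopped", "link_down"]
```

The essential property is the ordering: the link is never taken down before ingress traffic has been stopped and given time to drain, which is what prevents on-the-fly packets from being dropped.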
  • FIG. 5 illustrates a block diagram of an exemplary leaf switch 500, according to some embodiments of the disclosure.
  • Leaf switch 500 can be connected to a server in a network and configured to execute method 400.
  • the network can further include a spine switch.
  • Leaf switch 500 can include a plurality of network ports 502a-502n, a memory 504, and a processor 506 coupled with the plurality of network ports 502a-502n and memory 504.
  • Network ports 502a-502n can be used to transceive traffic of a spine switch and a server.
  • Memory 504 can store a set of instructions for executing method 400.
  • memory 504 can further store an address look-up table including addresses of devices in a network and corresponding ports.
  • Processor 506 can execute the set of instructions to cause leaf switch 500 to perform method 400.
  • Embodiments of the disclosure also provide a computer program product.
  • the computer program product may include a non-transitory computer readable storage medium having computer readable program instructions thereon for causing a processor to carry out the above-described methods.
  • the computer readable storage medium may be a tangible device that can store instructions for use by an instruction execution device.
  • the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM) , a read-only memory (ROM) , an erasable programmable read-only memory (EPROM) , a static random access memory (SRAM) , flash memory, a portable compact disc read-only memory (CD-ROM) , a digital versatile disk (DVD) , a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
  • the computer readable program instructions for carrying out the above-described methods may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language, and conventional procedural programming languages.
  • the computer readable program instructions may execute entirely on a computer system as a stand-alone software package, or partly on a first computer and partly on a second computer remote from the first computer. In the latter scenario, the second, remote computer may be connected to the first computer through any type of network, including a local area network (LAN) or a wide area network (WAN) .
  • LAN local area network
  • WAN wide area network
  • the computer readable program instructions may be provided to one or more processors of a computer or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the one or more processors of the computer or other programmable data processing apparatus, create means for implementing the above-described methods.
  • a block in the flow charts or diagrams may represent a software program, segment, or portion of code, which comprises one or more executable instructions for implementing specific functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the diagrams and/or flow charts, and combinations of blocks in the diagrams and flow charts may be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure provides methods and systems for isolating a leaf switch in a network, the leaf switch being connected to a server in the network. The method can include: in response to receiving a request for isolating the leaf switch in the network, sending, via the leaf switch, a notification to the server, wherein the notification indicates that the server is to stop sending egress traffic to the leaf switch; determining whether an acknowledgement to the notification is received from the server; and in response to the determination that the acknowledgement is received, stopping ingress traffic towards the server.

Description

METHOD AND SYSTEM FOR ISOLATING A LEAF SWITCH IN A NETWORK

BACKGROUND
A server center (e.g., a massively scalable data center) can include a plurality of networked servers and switches, to provide zero-downtime service, including remote storage service, cloud processing service, distribution of large amounts of data, and the like. Due to the zero-downtime requirement, high availability (HA) is critical in the massively scalable data center (MSDC) .
A Clos network topology has been widely adopted in the MSDC to deliver a high-bandwidth, low-latency, and non-blocking connectivity. A Clos network can be based on a spine-and-leaf topology, including a plurality of spine switches and a plurality of leaf switches. In the spine-and-leaf topology, a leaf switch can be connected to all spine switches to improve resilience and scalability and be connected to more than one server. It is appreciated that a server can also be connected to more than one leaf switch. Due to hardware failures or software upgrades, a leaf switch may have to be isolated from the Clos network for maintenance or upgrade. However, the isolation of a leaf switch can cause on-the-fly traffic to be dropped, leading to undesired service disruption.
SUMMARY OF THE DISCLOSURE
Embodiments of the disclosure provide a method for isolating a first leaf switch in a network, the first leaf switch being connected to a server in the network. The method can include: in response to receiving a request for isolating the first leaf switch in the network, sending, via the first leaf switch, a notification to the server, wherein the notification indicates that the server is to stop sending egress traffic to the first leaf switch; determining whether an acknowledgement to the notification is received from the server; and in response to the determination that the acknowledgement is received, stopping ingress traffic towards the server.
Embodiments of the disclosure further provide a first leaf switch connected to a server in a network. The first leaf switch can include: a memory storing a set of instructions; and at least one processor coupled with the memory and configured to execute the set of instructions to cause the first leaf switch to perform: in response to receiving a request for isolating the switch in the network, sending a notification to the server, wherein the notification indicates that the server is to stop sending egress traffic to the first leaf switch; determining whether an acknowledgement to the notification is received from the server; and in response to the determination that the acknowledgement is received, stopping ingress traffic towards the server.
Embodiments of the disclosure also provide a non-transitory computer readable medium that stores a set of instructions that is executable by at least one processor of a leaf switch to cause the leaf switch to perform a method for isolating the leaf switch in a network. The leaf switch can be connected to a server in the network. The method can include: in response to receiving a request for isolating the first leaf switch in the network, sending, via the first leaf switch, a notification to the server, wherein the notification indicates that the server is to stop sending egress traffic to the first leaf switch; determining whether an acknowledgement to the notification is received from the server; and in response to the determination that the acknowledgement is received, stopping ingress traffic towards the server.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments and various aspects of the present disclosure are illustrated in the following detailed description and the accompanying figures. Various features shown in the figures are not drawn to scale.
FIG. 1 illustrates a schematic diagram of a Clos network.
FIG. 2 illustrates a schematic diagram of an exemplary network, according to some embodiments of the disclosure.
FIG. 3 illustrates a schematic diagram of a network after the isolation, according to some embodiments of the disclosure.
FIG. 4 is a flowchart of a method for isolating a first leaf switch in a network, according to some embodiments of the disclosure.
FIG. 5 illustrates a block diagram of an exemplary leaf switch, according to some embodiments of the disclosure.
DETAILED DESCRIPTION
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
As described above with respect to conventional systems, isolating a leaf switch from a Clos network can disrupt the network, for example, causing on-the-fly traffic to be dropped. The techniques described in this disclosure can minimize these types of disruptions.
As used herein, the terms “comprises,” “comprising,” or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, composition, article, or apparatus that comprises a list of elements does not include only those elements, but may include other elements not expressly listed or inherent to such process, method, composition, article, or apparatus. The term “exemplary” is used in the sense of “example” rather than “ideal.”
FIG. 1 illustrates a schematic diagram of a network 100. While network 100 is envisioned to be a Clos network, it is appreciated that any network having at least a three-layer architecture can be used.
As shown in FIG. 1, Clos network 100 is a three-layer architecture, including a spine layer 110, a leaf layer 120, and a server layer 130. Spine layer 110 is the backbone of Clos network 100, is responsible for interconnecting all leaf switches in leaf layer 120, and can include a plurality of spine switches (e.g., spine switches 112, 114, 116, and 118). Leaf layer 120 can provide access to devices such as servers and can include a plurality of leaf switches (e.g., leaf switches 122, 124, 126, and 128). And server layer 130 can include a plurality of servers (e.g., servers 132, 134, 136, and 138).
In this three-layer architecture, the plurality of leaf switches can be connected to the plurality of spine switches in a full-mesh topology. In other words, each leaf switch (e.g., leaf switch 122) is connected to each and every spine switch (e.g., spine switches 112, 114, 116, and 118) in spine layer 110, generating a plurality of links. A link between the leaf switch (e.g., leaf switch 122) and the spine switches can be randomly chosen among the plurality of links, and therefore traffic load between leaf layer 120 and spine layer 110 can be evenly distributed. These links between the leaf switch and the spine switches can also be referred to as L3 links.
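The even load distribution described above can be sketched with a toy uplink selector. The function and key names are illustrative assumptions, not part of the disclosure; hashing a flow key is one common way a random-seeming but per-flow-stable choice is made.

```python
import random

def pick_uplink(uplinks, flow_key=None):
    """Pick one of a leaf switch's L3 uplinks. Hashing a flow key keeps a
    given flow on one spine while spreading distinct flows evenly; with no
    key, a uniform random choice gives the same even distribution overall."""
    if flow_key is not None:
        return uplinks[hash(flow_key) % len(uplinks)]
    return random.choice(uplinks)

spines = ["spine-112", "spine-114", "spine-116", "spine-118"]
assert pick_uplink(spines, flow_key=("10.0.0.1", "10.0.1.2", 443)) in spines
assert pick_uplink(spines) in spines
```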
Each leaf switch (e.g., leaf switch 122) can also be connected to at least one server (e.g., server 132 and 134) in server layer 130. On the other hand, each server (e.g., server 132) can be connected to at least two leaf switches (e.g., leaf switches 122 and 124) to ensure connectivity. In other words, for example, server 132 can establish a first link with leaf switch 122 and a second link with leaf switch 124. The first link and second link between a server and leaf switches can be referred to as L2 links.
Under this three-layer architecture of Clos network 100, if oversubscription of Clos network 100 occurs, expanding the capacity of Clos network 100 can be straightforward. For example, an additional spine switch can be added and linked to every leaf switch, adding interlayer bandwidth between spine layer 110 and leaf layer 120 to reduce the oversubscription.
Similarly, a new leaf switch can be added by simply connecting the new leaf switch to every spine switch. However, when an existing leaf switch is being isolated from a Clos network, the isolation can cause undesired service disruption. For example, conventionally, an L2 link between a leaf switch and a server can be shut down from the side of the leaf switch without the server being aware of the shutdown. Therefore, the server may continuously send traffic towards the leaf switch until the server detects the shutdown and switches over the traffic onto an alternative L2 link. As a result, the traffic sent to the leaf switch before the switchover will never be processed and will have to be dropped, causing the undesired service disruption.
Embodiments of the disclosure further provide methods and systems for isolating a leaf switch in a network, while minimizing traffic disruptions.
FIG. 2 illustrates a schematic diagram of an exemplary network 200, according to some embodiments of the disclosure. As shown in FIG. 2, network 200 can include spine switches 212 and 214, leaf switches 222 and 224, and a server 232. Each of leaf switches 222 and 224 is connected to both spine switches 212 and 214. And server 232 is connected to both leaf switches 222 and 224. The connection between server 232 and leaf switch 222 can be referred to as a first L2 link, and the connection between server 232 and leaf switch 224 can be referred to as a second L2 link.
In some embodiments, leaf switch 222 can receive a request for isolating leaf switch 222 from network 200. The request can be made by, for example, an administrator of network 200 for the purpose of maintenance, software upgrade, or the like. It is appreciated that the request can also be made by network 200 itself. For example, when network 200 detects a malfunction of leaf switch 222, network 200 can automatically request an isolation of switch 222 before it causes further service disruption.
In some embodiments, when leaf switch 222 receives the request for isolation, leaf switch 222 can determine whether the second L2 link associated with leaf switch 224 has enough bandwidth to process traffic associated with leaf switch 222.
If the second L2 link associated with leaf switch 224 is not capable of processing the extra traffic associated with leaf switch 222, leaf switch 222 can indicate that the isolation cannot be performed. For example, leaf switch 222 can generate a message informing the administrator of network 200 that the isolation cannot be performed for the moment. It is appreciated that bandwidth on the second L2 link can be released as traffic on the second L2 link drops. For example, the second L2 link may not have enough bandwidth to process the traffic associated with leaf switch 222 at a first moment but may have the appropriate bandwidth at a second moment. Therefore, in some embodiments, the message generated by leaf switch 222 can further indicate another time for performing the isolation.
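The feasibility check described above reduces to a capacity comparison. The function name and the bandwidth figures below are illustrative assumptions, not values from the disclosure:

```python
def can_isolate(alt_link_capacity_bps, alt_link_load_bps, extra_load_bps):
    """Return True if the alternative L2 link (e.g., the link via leaf
    switch 224) can absorb the traffic of the leaf switch being isolated."""
    return alt_link_load_bps + extra_load_bps <= alt_link_capacity_bps

# The same request may fail at a first moment and succeed at a second
# moment, once load on the alternative link has dropped.
assert can_isolate(10_000, 8_000, 3_000) is False
assert can_isolate(10_000, 5_000, 3_000) is True
```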
If the second L2 link associated with leaf switch 224 is capable of processing the extra traffic associated with leaf switch 222, leaf switch 222 can continue to process the request for isolation. For example, in response to the request for isolating leaf switch 222, leaf switch 222 can send a notification 202 to server 232. Notification 202 can be used to notify server 232 that egress traffic to leaf switch 222 should be stopped. In some embodiments, notification 202 can include an identification (e.g., a media access control (MAC) address) of leaf switch 222. After receiving notification 202, server 232 can stop sending egress traffic to leaf switch 222. The egress traffic that was supposed to be sent to leaf switch 222 can now be sent to another leaf switch (e.g., leaf switch 224), and therefore can eventually reach the spine layer.
It is appreciated that, when the egress traffic to leaf switch 222 is stopped, ingress traffic towards server 232 can be continuously sent by leaf switch 222.
Then, server 232 can send an acknowledgement 204 to leaf switch 222 in response to notification 202. Acknowledgement 204 can inform leaf switch 222 that server 232 is aware of the request. In some embodiments, acknowledgement 204 can further inform leaf switch 222 that the egress traffic has been sent to another leaf switch (e.g., leaf switch 224).
A link aggregation control protocol (LACP) can be used to manage L2 links and communication between a leaf switch and a server. The LACP can allow a network device (e.g., leaf switch 222) to negotiate an automatic bundling of links by sending LACP data units (LACPDUs) to a peer device (e.g., server 232) that also implements the LACP. In some embodiments, notification 202 and acknowledgement 204 can be transmitted between leaf switch 222 and server 232 using LACPDUs.
Accordingly, leaf switch 222 can determine whether acknowledgement 204 is received from server 232. In some embodiments, leaf switch 222 can further determine whether acknowledgement 204 is received from server 232 within a given period of time (e.g., 3 seconds). Acknowledgement 204 may not be received by leaf switch 222 within the given period of time for a variety of reasons, including at least one of: leaf switch 222 failing to send out notification 202, server 232 failing to receive notification 202, server 232 failing to send out acknowledgement 204, and leaf switch 222 failing to receive acknowledgement 204.
In response to the determination that acknowledgement 204 is not received, leaf switch 222 can resend notification 202 to server 232 to further notify server 232 of the request for isolation.
In response to the determination that acknowledgement 204 is received, leaf switch 222 can stop ingress traffic towards server 232. Therefore, before the determination that acknowledgement 204 is received, server 232 has already stopped egress traffic towards leaf switch 222. And after the determination that acknowledgement 204 is received, leaf switch 222 can stop the ingress traffic towards server 232. In other words, after acknowledgement 204 is received, traffic (including the egress traffic and the ingress traffic of server 232) between leaf switch 222 and server 232 can be fully stopped.
It is appreciated that some ingress traffic towards server 232 can still be sent before acknowledgement 204 is received, becoming on-the-fly traffic. Therefore, it is preferable that leaf switch 222 not be disconnected from server 232 before the on-the-fly traffic towards server 232 is processed. If leaf switch 222 is disconnected from server 232 immediately after acknowledgement 204 is received, the on-the-fly traffic may not be processed completely.
Because the on-the-fly traffic can be processed quickly, leaf switch 222 can be disconnected from server 232 within a period of time after the ingress traffic towards the server is stopped at leaf switch 222. The period of time is configurable and can be set to a few milliseconds. It is appreciated that, though leaf switch 222 has been isolated from server 232, traffic between spine switches of the spine layer and server 232 can be communicated through, for example, leaf switch 224 after leaf switch 222 is disconnected.
In some embodiments, leaf switch 222 can also use a last packet of the traffic towards server 232 as a confirmation. In response to receiving the confirmation, server 232 can further confirm that all traffic has been processed and that it is safe to disconnect leaf switch 222.
It is appreciated that, when leaf switch 222 is back online, an L2 link can be re-established between leaf switch 222 and server 232.
It can be seen from the above that notification 202 and acknowledgement 204 can be used to coordinate sequential terminations of the egress traffic and the ingress traffic of server 232 during the isolation of leaf switch 222, such that on-the-fly traffic between leaf switch 222 and server 232 can be drained before the isolation to avoid traffic disruption.
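The sequential terminations described above can be summarized as a small driver routine. This is a sketch of the ordering only; the callback names are hypothetical stand-ins for the switch's actual mechanisms:

```python
from enum import Enum

class Phase(Enum):
    NOTIFIED = 1  # notification sent; server's egress stops, ingress still flows
    DRAINED = 2   # acknowledgement received, ingress stopped, traffic drained

def isolate(send_notification, ack_received, stop_ingress, drain, disconnect):
    """Egress stops first (on the server, triggered by the notification);
    ingress stops only after the acknowledgement; the link is torn down last."""
    send_notification()
    if not ack_received():
        return Phase.NOTIFIED  # caller may resend the notification
    stop_ingress()
    drain()        # let on-the-fly ingress traffic be processed
    disconnect()
    return Phase.DRAINED

done = isolate(lambda: None, lambda: True, lambda: None, lambda: None, lambda: None)
assert done is Phase.DRAINED
```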
As discussed above, the LACP can be used to manage links and communications between a leaf switch and a server. In some embodiments, the LACP can be further used to perform the sequential terminations of the egress traffic and the ingress traffic of the server.
In some embodiments, an LACP port state field of the LACPDU can be used as a synchronization field to coordinate a leaf switch and a server during an isolation. The LACP port state field can include at least three bits, each of which is a flag indicating a particular status of a sender’s port. Table 1 below shows exemplary meanings of three bits of the LACP port state field, including “Synchronization, ” “Collecting, ” and “Distributing. ”
Bit               Value   Meaning
Synchronization   0       The sender is out of synchronization with the receiver
                  1       The sender is in synchronization with the receiver
Collecting        0       The sender's port is not collecting (receiving) traffic
                  1       The sender's port is collecting (receiving) traffic
Distributing      0       The sender's port is not distributing (sending) traffic
                  1       The sender's port is distributing (sending) traffic
Table 1
Bit “Synchronization” can be used to indicate whether a sender device is in synchronization with a receiver device. As shown in Table 1 above, if bit “Synchronization” is “0,” it indicates that the receiver and the sender are out of synchronization and that the receiver device can re-synchronize a number of physical ports of the receiver and the sender. The re-synchronization can also be referred to as “flapping.” The physical ports, after being synchronized, can be aggregated to make a single high-bandwidth data path to provide better connectivity. The aggregated physical ports can also be referred to as a link aggregation group (LAG).
If bit “Synchronization” is “1,” it indicates that the receiver and the sender are synchronized and that at least one of collecting and distributing can be performed. As shown in Table 1, when the three bits of the port state field are “101,” it can indicate that the sender device is transmitting traffic to the receiver and expects the receiver to stop sending traffic. More particularly, in embodiments of the disclosure, the three bits of the port state field of a leaf switch being “101” can indicate that the leaf switch is still sending ingress traffic towards servers connected with the leaf switch and expects each of the servers to stop sending egress traffic to the leaf switch. For example, when leaf switch 222 uses the LACPDU to send notification 202 to server 232, leaf switch 222 can also set the LACP port state field to “101.” Therefore, after receiving notification 202, server 232 can read the port state of leaf switch 222, stop egress traffic towards the leaf switch, and send an acknowledgement (e.g., acknowledgement 204 of FIG. 2).
In response to notification 202, when server 232 sends acknowledgement 204, the three bits of the port state field of server 232 can be set to “110, ” indicating that server 232 is still receiving traffic from leaf switch 222 and expects a confirmation of no traffic transmission from leaf switch 222. Because server 232 continuously processes the on-the-fly traffic on a link between leaf switch 222 and server 232, the on-the-fly traffic can be drained from the link, such that the traffic disruption can be avoided or minimized when leaf switch 222 is isolated.
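The three port-state flags discussed above can be modeled with the standard LACP actor-state bit masks. Treating the octet's Synchronization, Collecting, and Distributing bits this way is a sketch of the multiplexing; the disclosure does not fix an exact wire encoding here:

```python
# Bit masks within the LACP actor-state octet (IEEE 802.1AX):
SYNC, COLLECTING, DISTRIBUTING = 0x08, 0x10, 0x20

def flags(state):
    """Return (Synchronization, Collecting, Distributing) as booleans."""
    return (bool(state & SYNC), bool(state & COLLECTING), bool(state & DISTRIBUTING))

# "101": leaf switch 222's notification state -- in sync, not collecting,
# still distributing ingress traffic towards the server.
leaf_notify = SYNC | DISTRIBUTING
assert flags(leaf_notify) == (True, False, True)

# "110": server 232's acknowledgement state -- in sync, still collecting
# the remaining ingress traffic, no longer distributing egress traffic.
server_ack = SYNC | COLLECTING
assert flags(server_ack) == (True, True, False)
```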
After acknowledgement 204 is received at leaf switch 222, leaf switch 222 can further confirm no traffic is being transmitted on the link between leaf switch 222 and server 232. For example, a last packet of the on-the-fly traffic sent by leaf switch 222 can be used as the confirmation.
Then, leaf switch 222 is ready to be disconnected from the server. FIG. 3 illustrates a schematic diagram of a network 300 after the isolation, according to some embodiments of the disclosure. As shown in FIG. 3, leaf switch 222 has been disconnected from server 232 after the acknowledgement is received at leaf switch 222. It is appreciated that traffic sent by server 232 can still reach  spine switches  212 and 214 via leaf switch 224. Therefore, users of network 300 would not perceive the isolation of leaf switch 222.
Referring back to FIG. 2 and Table 1, when the three bits of the synchronization field of leaf switch 222 and server 232 are “111, ” it can indicate leaf switch 222 and server 232 are performing bidirectional communication.
It is appreciated that the above multiplexing of the LACP port state field of the LACPDU can be activated when a leaf switch receives a request for isolating the leaf switch from the network. It is also appreciated that the synchronization field can be transmitted between the leaf switch and the server using three different bits of the LACP port state field of the LACPDU. In some embodiments, the synchronization field can be transmitted using a data unit other than the LACPDU.
FIG. 4 is a flowchart of a method 400 for isolating a first leaf switch in a network, according to some embodiments of the disclosure. In addition to the first leaf switch, the network can further include a second leaf switch and a spine switch. Both the first and second leaf switches can be connected to a server (e.g., server 134 of server layer 130) , and the spine switch is connected to both the first and second leaf switches. Method 400 can be executed by an electronic device. The electronic device may include a memory storing a set of instructions and at least one processor to execute the set of instructions to cause the electronic device to perform method 400. For example, the electronic device may be a leaf switch (e.g., leaf switch 222 of FIGS. 2-3) of a leaf layer (e.g., leaf layer 120) . Referring to FIG. 4, method 400 may include the following steps.
At step 402, in response to receiving a request for isolating the first leaf switch in the network, the first leaf switch can send a notification to the server. The notification can indicate that the server is to stop sending egress traffic to the first leaf switch. In some embodiments, the notification can further include a first port state of the first leaf switch. For example, the notification can be carried by a first Link Aggregation Control Protocol Data Unit (LACPDU), and the first port state can be indicated by an LACP port state field of the first LACPDU. In some embodiments, the LACP port state field can include three bits to indicate a port state of a sender (e.g., the first leaf switch). At this step 402, a first LACP port state field of the first LACPDU can be “101,” indicating that the port of the first leaf switch is still distributing traffic but not receiving traffic and that the first leaf switch is expecting the server to stop egress traffic towards the first leaf switch. Accordingly, after sending the notification, the first leaf switch can continuously send the ingress traffic to the server for processing.
At step 404, the first leaf switch can determine whether an acknowledgement to the notification is received from the server. When the server receives the notification, the server can stop sending the egress traffic to the first leaf switch and send back the acknowledgement to the first leaf switch. The acknowledgement can be carried by a second LACPDU. Similarly, a second port state of the server can be indicated by an LACP port state field of the second LACPDU. At this step 404, the LACP port state field of the second LACPDU can be “110,” indicating that the port of the server is still receiving traffic from the first leaf switch and expecting a confirmation of no traffic transmission from the first leaf switch.
As discussed above, the server is also connected to the second leaf switch. In some embodiments, the notification can further cause the egress traffic from the server to be sent to the spine switch via the second leaf switch.
At step 406, in response to the determination that the acknowledgement is received, the first leaf switch can stop ingress traffic towards the server.
In some embodiments, in response to the determination that the acknowledgement is not received within a first period of time at step 404, the first leaf switch can send another notification to the server at step 402. It is appreciated that, if notifications have been sent a given number of times without acknowledgement, the first leaf switch can generate an error code indicating that the isolation has failed.
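The bounded resend described above is a simple retry loop. The timeout, attempt limit, and callback names below are illustrative assumptions:

```python
def notify_with_retry(send_notification, ack_received, timeout_s=3.0, max_tries=5):
    """Resend the notification whenever no acknowledgement arrives within
    the window; give up after a bounded number of attempts so the switch
    can report that the isolation has failed."""
    for _ in range(max_tries):
        send_notification()
        if ack_received(timeout_s):
            return True
    return False  # caller generates the error code

sent = []
assert notify_with_retry(lambda: sent.append(1), lambda t: len(sent) >= 3) is True
assert len(sent) == 3  # acknowledged on the third attempt
assert notify_with_retry(lambda: None, lambda t: False, max_tries=2) is False
```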
At step 408, the first leaf switch can disconnect the first leaf switch from the server. And traffic between the spine switch and the server is communicated through the second leaf switch after the first leaf switch is disconnected. In some embodiments, in a second period of time after the ingress traffic towards the server is stopped at the first leaf switch, the first leaf switch can disconnect the first leaf switch from the server. In some embodiments, when a last packet of the ingress traffic is received by the server, the server can further confirm the last packet has been processed. In response to this confirmation, the first leaf switch can disconnect the first leaf switch from the server.
FIG. 5 illustrates a block diagram of an exemplary leaf switch 500, according to some embodiments of the disclosure. Leaf switch 500 can be connected to a server in a network and configured to execute method 400. The network can further include a spine switch.
Leaf switch 500 can include a plurality of network ports 502a-502n, a memory 504, and a processor 506 coupled with the plurality of network ports 502a-502n and memory 504.
Network ports 502a-502n can be used to transceive traffic of a spine switch and a server. Memory 504 can store a set of instructions for executing method 400. In addition, memory 504 can further store an address look-up table including addresses of devices in a network and corresponding ports. Processor 506 can execute the set of instructions to cause leaf switch 500 to perform method 400.
Embodiments of the disclosure also provide a computer program product. The computer program product may include a non-transitory computer readable storage medium having computer readable program instructions thereon for causing a processor to carry out the above-described methods.
The computer readable storage medium may be a tangible device that can store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM) , a read-only memory (ROM) , an erasable programmable read-only memory (EPROM) , a static random access memory (SRAM) , flash memory, a portable compact disc read-only memory (CD-ROM) , a digital versatile disk (DVD) , a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
The computer readable program instructions for carrying out the above-described methods may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language, and conventional procedural programming languages. The computer readable program instructions may execute entirely on a computer system as a stand-alone software package, or partly on a first computer and partly on a second computer remote from the first computer. In the latter scenario, the second, remote computer may be connected to the first computer through any type of network, including a local area network (LAN) or a wide area network (WAN) .
The computer readable program instructions may be provided to one or more processors of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the one or more processors of the computer or other programmable data processing apparatus, create means for implementing the above-described methods.
The flow charts and diagrams in the figures illustrate the exemplary architecture, functionality, and operation of possible implementations of devices, methods, and computer program products according to various embodiments of the specification. In this regard, a block in the flow charts or diagrams may represent a software program, segment, or portion of code, which comprises one or more executable instructions for implementing specific functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the diagrams and/or flow charts, and combinations of blocks in the diagrams and flow charts, may be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It is appreciated that certain features of the specification, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the specification, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the specification. Certain features described in the context of various embodiments are not to be considered  essential features of those embodiments, unless the embodiment is inoperative without those elements.
Although the specification has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.

Claims (20)

  1. A method for isolating a first leaf switch in a network, the first leaf switch being connected to a server in the network, comprising:
    in response to receiving a request for isolating the first leaf switch in the network, sending, via the first leaf switch, a notification to the server, wherein the notification indicates that the server is to stop sending egress traffic to the first leaf switch;
    determining whether an acknowledgement to the notification is received from the server; and
    in response to the determination that the acknowledgement is received, stopping ingress traffic towards the server.
  2. The method according to claim 1, further comprising:
    disconnecting the first leaf switch from the server.
  3. The method according to claim 2, wherein the network further comprises a second leaf switch connected to the server and a spine switch connected to the first leaf switch and the second leaf switch.
  4. The method according to claim 3, wherein the notification further causes the egress traffic from the server to be sent to the spine switch via the second leaf switch.
  5. The method according to any one of claims 1-4, wherein
    after sending the notification, the first leaf switch sends the ingress traffic to the server for processing.
  6. The method according to any one of claims 1-5, further comprises:
    in response to the determination that the acknowledgement is not received within a first period of time, sending another notification to the server.
  7. The method according to claim 3, wherein disconnecting the first leaf switch from the server causes traffic between the spine switch and the server to be communicated through the second leaf switch.
  8. The method according to any one of claims 2-7, wherein disconnecting the first leaf switch from the server further comprises:
    in a second period of time after the ingress traffic towards the server is stopped at the first leaf switch, disconnecting the first leaf switch from the server.
  9. The method according to any one of claims 1-8, wherein the notification is carried by a first Link Aggregation Control Protocol Data Unit (LACPDU) , and the acknowledgement is carried by a second LACPDU.
  10. The method according to any one of claims 1-9, wherein the notification includes a first port state of the first leaf switch, and the acknowledgement includes a second port state of the server.
  11. A first leaf switch connected to a server in a network, comprising:
    a memory storing a set of instructions; and
    at least one processor coupled with the memory and configured to execute the set of instructions to cause the first leaf switch to perform:
    in response to receiving a request for isolating the first leaf switch in the network, sending a notification to the server, wherein the notification indicates that the server is to stop sending egress traffic to the first leaf switch;
    determining whether an acknowledgement to the notification is received from the server; and
    in response to the determination that the acknowledgement is received, stopping ingress traffic towards the server.
  12. The first leaf switch according to claim 11, wherein the at least one processor is further configured to execute the set of instructions to cause the first leaf switch to further perform:
    disconnecting the first leaf switch from the server.
  13. The first leaf switch according to claim 12, wherein the network further comprises a second leaf switch connected to the server and a spine switch connected to the first leaf switch and the second leaf switch.
  14. The first leaf switch according to claim 13, wherein
    the notification further causes the egress traffic from the server to be sent to the spine switch via the second leaf switch.
  15. The first leaf switch according to any one of claims 11-14, wherein the at least one processor is further configured to execute the set of instructions to cause the first leaf switch to further perform:
    after sending the notification, sending the ingress traffic to the server for processing.
  16. The first leaf switch according to any one of claims 11-15, wherein the at least one processor is further configured to execute the set of instructions to cause the first leaf switch to further perform:
    in response to the determination that the acknowledgement is not received within a first period of time, sending another notification to the server.
  17. The first leaf switch according to claim 13, wherein disconnecting the first leaf switch from the server causes:
    traffic between the spine switch and the server to be communicated through the second leaf switch after the first leaf switch is disconnected.
  18. The first leaf switch according to any one of claims 12-17, wherein disconnecting the first leaf switch from the server further comprises:
    in a second period of time after the ingress traffic towards the server is stopped at the first leaf switch, disconnecting the first leaf switch from the server.
  19. The first leaf switch according to any one of claims 11-18, wherein the notification is carried by a first Link Aggregation Control Protocol Data Unit (LACPDU) , and the acknowledgement is carried by a second LACPDU.
  20. A non-transitory computer readable medium that stores a set of instructions that is executable by at least one processor of a leaf switch to cause the leaf switch to perform a method for isolating the leaf switch in a network, the leaf switch being connected to a server in the network, the method comprising:
    in response to receiving a request for isolating the leaf switch in the network, sending, via the leaf switch, a notification to the server, wherein the notification indicates that the server is to stop sending egress traffic to the leaf switch;
    determining whether an acknowledgement to the notification is received from the server; and
    in response to the determination that the acknowledgement is received, stopping ingress traffic towards the server.
PCT/CN2019/101379 2019-08-19 2019-08-19 Method and system for isolating a leaf switch in a network WO2021031070A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980099305.3A CN114223182B (en) 2019-08-19 2019-08-19 Method and system for isolating leaf switches in a network
PCT/CN2019/101379 WO2021031070A1 (en) 2019-08-19 2019-08-19 Method and system for isolating a leaf switch in a network


Publications (1)

Publication Number Publication Date
WO2021031070A1 (en) 2021-02-25

Family

ID: 74660161

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/101379 WO2021031070A1 (en) 2019-08-19 2019-08-19 Method and system for isolating a leaf switch in a network

Country Status (2)

Country Link
CN (1) CN114223182B (en)
WO (1) WO2021031070A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1647051A (en) * 2002-04-05 2005-07-27 Cisco Technology, Inc. Apparatus and method for defining a static fibre channel fabric
US20100023658A1 (en) * 2008-07-25 2010-01-28 Broadcom Corporation System and method for enabling legacy medium access control to do energy efficient ethernet
CN103067291A (en) * 2012-12-24 2013-04-24 杭州华三通信技术有限公司 Method and device of up-down link correlation
WO2013170619A1 (en) * 2012-05-17 2013-11-21 Hangzhou H3C Technologies Co., Ltd. Configuring state of an s channel
US20160191374A1 (en) * 2014-12-31 2016-06-30 Juniper Networks, Inc. Fast convergence on link failure in multi-homed ethernet virtual private networks


Also Published As

Publication number Publication date
CN114223182B (en) 2024-01-05
CN114223182A (en) 2022-03-22


Legal Events

Code Description
121 Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 19942356; Country of ref document: EP; Kind code of ref document: A1.
NENP Non-entry into the national phase. Ref country code: DE.
122 Ep: pct application non-entry in european phase. Ref document number: 19942356; Country of ref document: EP; Kind code of ref document: A1.